---
abstract: 'New [*Chandra*]{} High Resolution Camera pointings on the “non-coronal” red giant Arcturus (HD124897; $\alpha$ Boo: K1.5 III) corroborate a tentative soft X-ray detection in a shorter exploratory exposure sixteen years earlier. The apparent source followed the (large) proper motion of the nearby bright star over the intervening years, and there were null detections at the previous location in the current epoch, as well as at the future location in the earlier epoch, reducing the possibility of chance coincidences with unrelated high-energy objects. The apparent X-ray brightness at Earth, averaged over the 98 ks of total exposure and accounting for absorption in the red giant’s wind, is $\sim 2{\times}10^{-15}$ erg cm$^{-2}$ s$^{-1}$ (0.2–2 keV). Systematic errors in the energy conversion factor, devolving from the unknown spectrum, amount to only about 10%, smaller than the ${\sim}$30% statistical uncertainties in the count rates. The X-ray luminosity is only $3{\times}10^{25}$ erg s$^{-1}$, confirming Arcturus as one of [*Chandra’s*]{} darkest bright stars.'
author:
- 'Thomas R. Ayres'
title: Beyond the Coronal Graveyard
---
INTRODUCTION
============
Arcturus ($\alpha$ Boötis; HD124897; K1.5 III)[^1] is the brightest star at northern declinations, third brightest overall. It is an old, solar mass, slightly metal poor red giant, only 11 pc from the Sun (e.g., Ram[í]{}rez & Allende Prieto 2011). Arcturus is of interest, among other reasons, because it mirrors the evolutionary fate that awaits the Sun some 5 billion years hence. Another curiosity is that the red giant belongs to a well-defined stellar stream (the eponymous Moving Group): possibly the remnants of an ancient dissolved open cluster; more speculatively a tidal tail stripped from a satellite galaxy that wandered too close to the Milky Way eons ago (e.g., Navarro et al. 2004); or perhaps simply a dynamical resonance in the Galactic disk (e.g., Bensby et al. 2014).
In the early days of X-ray astronomy, low-mass ($\gtrsim 1\,M_{\odot}$) red giants like Arcturus rarely were detected in high-energy surveys by pioneering observatories like [*Einstein*]{} (Vaiana et al. 1981; Ayres et al. 1981), and later [*Röntgensatellit*]{} ([*ROSAT*]{}) (Haisch et al. 1991). In contrast, yellow giants in the Hertzsprung gap, such as Capella ($\alpha$ Aurigae: G1 III + G9 III), often were strong coronal ($10^6$–$10^7$ K) emitters. In fact, Linsky & Haisch (1979) earlier had proposed – based on ultraviolet proxies – a dividing line in the giant branch, separating the coronal “haves” from the “have nots.”
The “non-coronal” side (redward of spectral type K1 III) later became known as the “coronal graveyard,” after a deep X-ray pointing by [*ROSAT*]{} failed to detect Arcturus, prototype of the class (Ayres, Fleming, & Schmitt 1991). It was well known that an internal magnetic “Dynamo” – relying heavily on stellar rotation – underpins the cycling activity of sunlike stars, so the demise of coronae among the bloated, slowly spinning red giants seemed sensible.
A decade later, in mid-2002, one of the new-generation X-ray facilities, [*Chandra,*]{} turned its sharper gaze on Arcturus (Ayres, Brown, & Harper 2003 \[ABH\]). In a 19 ks exposure with the High Resolution Camera (HRC-I), a mere 3 counts were recorded in a small detect cell centered at the predicted coordinates of the red giant. Nevertheless, thanks to unusually low cosmic background conditions, the few counts represented a moderately significant detection. The estimated X-ray luminosity of Arcturus was more than an order of magnitude lower than that of the average Sun, itself a rather mild coronal source. The $L_{\rm X}$ was especially diminutive given that the surface area of the K giant is more than 600 times that of the G dwarf.
At about that time, the high-sensitivity ultraviolet spectrographs of the [*Hubble Space Telescope*]{} uncovered unexpected clues to the apparent coronal disappearing act. The first surprise was the clear presence of the coronal proxy C IV 1548 Å in non-coronal giants like Arcturus (Ayres et al. 1997), albeit weak enough to have escaped previous notice. C IV forms at $10^5$ K, hot enough that magnetic heating must be involved. The second surprise was that other hot lines, Si IV 1393 Å and N V 1238 Å, showed stationary, sharp absorptions from cool chromospheric species (ABH). This implied that the hot emitting structures must be buried under a large overburden of lower temperature ($\sim 6000$ K) chromospheric material, a “cool absorber” if you will. The large column can suppress soft X-rays, but still pass FUV radiation. If the “buried corona” conjecture is correct, deep-seated magnetic activity on the non-coronal giants might be responsible for stirring up their atmospheres and initiating their powerful winds: $10^4$ times the Sun’s mass loss rate (for the specific case of Arcturus), but much cooler than the solar coronal counterpart, $T\sim 10^4$ K versus $\sim 10^6$ K (e.g., O’Gorman et al. 2013). The red giant outflows are important to galactic ecology, but the motive force behind the winds has remained elusive.
The best test of the buried corona hypothesis would be an X-ray spectrum, to judge the extent of the putative chromospheric soft absorption. A minimal CCD-resolution ($E/\Delta{E}\sim 50$) energy distribution in the 0.25–10 keV band generally would require $\sim 10^3$ net counts; out of the question with contemporary instrumentation, at least given the apparent faintness of the Arcturus source in 2002. However, recently the [*Chandra*]{} Observatory offered a special opportunity to carry out observations that might help inform the design of next-generation X-ray facilities. A proposal for a deeper HRC-I exposure of Arcturus – in essence a feasibility assessment for a future spectrum – was among the projects chosen. Here, the results of the new X-ray observations of Arcturus are described, and their implications discussed. To preview the more detailed conclusions presented later, a source at the predicted location of Arcturus (accounting for proper motion) was clearly present in each of the two new observations; and especially the sum (including also the earlier \[2002\] pointing).
OBSERVATIONS
============
[*Chandra’s*]{} HRC-I was the best choice for the project, because the sensor is immune to “optical loading,” an important consideration for observing visually bright stars that are X-ray faint. HRC-I also has excellent low-energy sensitivity, important for low-activity coronal sources, which tend to be soft (taking the Sun as an example). Further, [*Chandra’s*]{} high spatial resolution minimizes source confusion; and dilutes the diffuse cosmic background, as alluded to earlier, which is essential to boost detectability of a source that might provide only a dozen counts in a long pointing. The downside is that HRC-I has minimal spectral response: it can deliver the broad-band X-ray flux, but no clues to coronal temperature or soft absorption. That limitation was not a serious concern, for what mainly was a detection experiment.
The new HRC-I observation of Arcturus was carried out in two segments, 50 ks and 30 ks on 2018 June 9 and 10, respectively. Details are provided in Table 1, including the previous pointing from 2002, which was incorporated in the present analysis.
[rrrrr]{} ObsID & Date (UT) & Exp. (ks) & $\Delta$X (arcsec) & $\Delta$Y (arcsec)\
2555 & 2002-06-19.02 & 18.39 & $+0.3$ & $+0.3$\
20996 & 2018-06-09.40 & 49.43 & $+0.2$ & $+0.1$\
21102 & 2018-06-10.82 & 29.77 & $+0.4$ & $-0.1$\
ANALYSIS
========
The three independent [*Chandra*]{} pointings were considered by themselves, as well as together. Fig. 1 depicts a time-integrated X-ray event map for the full 98 ks exposure: a $20'{\times}20'$ field centered on coordinates (213.9120, $+$19.1766), approximately the mean position of Arcturus between the two separated epochs. The event lists were concatenated in a fixed reference system, not accounting for the (large) proper motion of the target (which the other, more distant, objects in the vicinity were unlikely to share). Perhaps three dozen X-ray point sources appear in the field, which is at high galactic latitude with a clear view out of the Milky Way. Circled objects are from the [*Chandra*]{} Source Catalog 2[^2]. Those marked in red have [*Gaia*]{} Data Release 2[^3] optical counterparts (most with high X-ray/optical ratios typical of Active Galactic Nuclei, although two of the barely detected objects apparently are distant late-type stars). The three brightest X-ray sources with [*Gaia*]{} counterparts – likely all AGN – were evaluated as checks of the aspect solution. One of these – [*Gaia*]{}1233978433822837888, to the upper right of the central region – was slightly discrepant with respect to the other two (probably because of its vignetted profile owing to its large displacement from the image center) and was discarded. Double circles mark the two remaining sources ([*Gaia*]{}1233964071455847296 and [*Gaia*]{}1233961631914413056) included in the final astrometric vetting. Results are reported in Table 1: corrections were less than $0.5''$, attesting to the excellent aspect reconstruction of [*Chandra*]{}. A $20''{\times}20''$ blow-up at upper left shows the central region around Arcturus in a map now accounting for the (large) proper motion of the nearby red giant. A significant X-ray source is present at the co-moving optical position of the bright star.
For faint X-ray objects like Arcturus, the size of the detect cell – to evaluate the number of source events – must be chosen carefully. Too large a cell accumulates more background, which can dilute the true source events and suppress the detection significance. Too small a cell might throw away legitimate source counts, and also could be sensitive to subtle errors in the astrometric correction or knowledge of the encircled energy function (especially for an object of unknown spectral properties). For the case of Arcturus, a $2''$ diameter detect cell ($\sim$90% encircled energy) was adopted as a balance among these considerations.
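The trade-off can be illustrated with a toy model: for a Gaussian PSF, the enclosed signal saturates with radius while the enclosed background grows as the cell area, so the signal-to-noise peaks at an intermediate radius. All numbers below (PSF width, source and background levels) are illustrative assumptions, not fitted values from the paper.

```python
import math

def snr(radius, src=23.0, bkg_per_arcsec2=3.1, sigma=0.4):
    """Toy signal-to-noise for a circular detect cell of the given
    radius (arcsec), assuming a Gaussian PSF of width sigma (arcsec)
    and a uniform background surface density (counts per sq. arcsec)."""
    ee = 1.0 - math.exp(-radius**2 / (2.0 * sigma**2))  # encircled energy
    signal = src * ee
    background = bkg_per_arcsec2 * math.pi * radius**2
    return signal / math.sqrt(signal + background)

# S/N peaks at an intermediate radius: too small a cell loses source
# counts, too large a cell piles up background.
for r in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:.1f} arcsec  S/N = {snr(r):.2f}")
```

With these illustrative numbers the optimum lies near a $1''$ radius, consistent with the $2''$ diameter cell adopted in the text.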
![Main panel displays the central $20'{\times}20'$ of the [*Chandra*]{} HRC-I field around Arcturus combining all three pointings (one in 2002, two in 2018), binned in $0.5''$ pixels and smoothed with a double-box-car spatial filter of 5 pixels width. “Sky” coordinates are in arcseconds relative to a fixed reference point (213.9120, $+$19.1766). N is up, E to the left. Gray scale was set to highlight significant sources. Pair of larger concentric red circles represents the annulus in which the diffuse cosmic background was assessed. Two smaller blue circles, close to and on either side of center, mark positions of Arcturus in 2002 (upper left) and 2018 (lower right). Additional small circles are entries from [*Chandra*]{} Source Catalog 2. Those in red have optical counterparts in [*Gaia*]{} Data Release 2. Double-circled objects (likely AGN) served as astrometric checks. Inset panel at upper left is a $20''{\times}20''$ field, binned in $0.125''$ pixels and smoothed with a double-box-car spatial filter of 3 pixels width, centered on the proper-motion corrected position of Arcturus. Blue circle is $1''$ in radius: the detect cell for event measurements. A highly significant source appears at the center of the cell. ](f1.pdf "fig:"){width="\linewidth"}
An average cosmic background was determined in a source-free annulus centered on the fixed reference coordinates in each epoch, as noted in Table 2 and illustrated in Fig. 1. The background amounted to 4–6 counts in the detect cells of the more recent observations, but just 0.4 counts in the shorter, much lower background 2002 pointing. Events not only were counted at the predicted location of the target in each epoch, but also in the 2002 observation at the coordinates where the target would be in the later 2018 pointing, and vice versa; to evaluate possible accidental sources. The various measurements are summarized in Table 2. Detection significances were based on the “Frequentist" prescriptions described by Ayres (2004), while intrinsic source intensity confidence intervals were determined from the “Bayesian” prescriptions in the same article; reflecting the different statistical philosophies applied to source detection, on the one hand, and source characterization, on the other.
Note that the future location of Arcturus in the 2002 pointing, and its past location in the 2018 observations, have potential sources of much lower significance – essentially null detections – than the respective target cells in the same observations. This suggests that the cumulative apparent source at Arcturus truly is the star, rather than accidental objects that happened to be at the precise stellar locations in the two well-separated epochs. Notice also that the count rate (CR) of the second, shorter 2018 pointing was about 50% higher than that of the first, which might suggest short term variability. However, the two CR’s agree within their 90% confidence intervals, so variability cannot be claimed at a high level of significance.
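As a rough cross-check of the tabulated significances, the single-sided Poisson tail probability can be computed directly. Note this is the textbook estimate, not the exact Ayres (2004) prescription actually used for Table 2, so the percentages agree only approximately with the tabulated values.

```python
import math

def detection_confidence(n_obs, bkg):
    """Confidence that n_obs counts in the detect cell are not a
    fluctuation of the expected background bkg: 1 - P(>= n_obs | bkg)
    for a Poisson distribution with mean bkg."""
    p_ge = 1.0 - sum(math.exp(-bkg) * bkg**k / math.factorial(k)
                     for k in range(n_obs))
    return 1.0 - p_ge

# On-star measurements: (counts, background) per epoch, from Table 2
for obs, n, b in [("2002", 3, 0.4), ("2018a", 17, 5.8), ("2018b", 13, 3.5)]:
    print(f"{obs}: confidence ~ {100 * detection_confidence(n, b):.2f}%")
```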
[lcccccc]{} Position & Cts & Bkg & S/N & Conf. (%) & Net & CR (cnt ks$^{-1}$)\
2555 (2002): On Star & 3 & 0.4 & 2.3 & 98.97 & 2.6 & $0.14_{-0.07}^{+0.20}$\
2555 (2002): Future Position & 0 & 0.4 & 0 & & 0.0 & $0.00_{-0.00}^{+0.14}$\
20996 (2018): On Star & 17 & 5.8 & 3.6 & 99.98 & 11.2 & $0.23_{-0.08}^{+0.13}$\
20996 (2018): Past Position & 9 & 5.8 & 1.2 & 88 & 3.2 & $0.06_{-0.05}^{+0.10}$\
21102 (2018): On Star & 13 & 3.5 & 3.7 & 99.99 & 9.5 & $0.32_{-0.12}^{+0.20}$\
21102 (2018): Past Position & 5 & 3.5 & 0.7 & 76 & 1.5 & $0.05_{-0.05}^{+0.14}$\
Total: On Star & 33 & 9.7 & 5.9 & 99.99 & 23.3 & $0.24_{-0.06}^{+0.09}$\
Total: Future/Past Position & 14 & 9.7 & 1.3 & 90 & 4.3 & $0.04_{-0.03}^{+0.06}$\
While the count rate of Arcturus, now averaged over multiple epochs, is better established than the more tentative detection in the shorter 2002 pointing, an important – potentially large – uncertainty is the appropriate Energy Conversion Factor (ECF) to apply to the CR, especially lacking a spectrum to provide guidance concerning the source temperature and soft absorption. It is helpful, in this regard, to temporarily ignore the main tenet of the “buried corona” conjecture, namely the possibly large internal soft X-ray absorption within the red giant chromosphere, because the “un-absorbed flux” corrections could be orders of magnitude, and thus essentially unconstrained at present (absent the desired future spectrum). At the same time, it is important to consider the potential absorption effects of the extended red giant wind, outside the chromospheric attenuation zone, but between the star and the X-ray observatory at Earth.
The O’Gorman et al. (2013) study, mentioned earlier, described a wind density model for Arcturus, consistent with centimetric free-free radio emission from the outflow, which implies a hydrogen density at the base of the wind (where $r\sim 1.2\,R_{\star}\sim 2.1{\times}10^{12}$ cm) of $\sim 3.8{\times}10^{7}$ cm$^{-3}$. (By way of reference, for the stellar and wind parameters assumed by those authors, the mass loss rate would be a ${\rm few}{\times}10^{-10} M_{\odot}$ yr$^{-1}$.) For a homogeneous, radially expanding, constant velocity wind, the implied hydrogen column density through the outflow would be $\sim 8{\times}10^{19}$ cm$^{-2}$, much larger than the likely interstellar column in that direction to the nearby star. The effect of the wind absorption is to flatten the ECF for the un-absorbed flux (i.e., the intensity if the outflow were not present) as a function of the source temperature (proxy for the spectral energy distribution).
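The quoted wind column and mass-loss rate follow directly from the wind-base parameters. In the sketch below the wind speed and mean mass per hydrogen atom are assumed values, chosen to be typical of red giant outflows, not numbers taken from the paper.

```python
import math

# Wind-base parameters quoted in the text (O'Gorman et al. 2013);
# v_wind and mu are illustrative assumptions.
n0 = 3.8e7        # hydrogen density at wind base [cm^-3]
r0 = 2.1e12       # wind-base radius, ~1.2 R_star [cm]
v_wind = 35e5     # assumed wind speed [cm/s] (~35 km/s)
mu = 1.4          # assumed mean mass per hydrogen atom, in proton masses
m_H = 1.6726e-24  # proton mass [g]
M_SUN = 1.989e33  # solar mass [g]
YEAR = 3.156e7    # seconds per year

# For n(r) = n0 (r0/r)^2, the radial column integrates to n0 * r0:
N_H = n0 * r0
print(f"wind column N_H ~ {N_H:.1e} cm^-2")   # ~8e19, as in the text

# Mass continuity for a constant-velocity wind: Mdot = 4 pi r0^2 rho0 v
mdot = 4 * math.pi * r0**2 * (n0 * mu * m_H) * v_wind
print(f"Mdot ~ {mdot * YEAR / M_SUN:.1e} Msun/yr")  # a few x 10^-10
```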
Over the broad temperature range $\log{T}\sim 6.4$–7.0 K, the average ECF is $8.1{\pm}0.3{\times}10^{-15}$ erg cm$^{-2}$ s$^{-1}$ (cnt ks$^{-1}$)$^{-1}$, based on simulations with a solar abundance APEC model in WebPIMMS[^4], with the cited wind column, to convert the measured CR (cnt ks$^{-1}$) to apparent (un-absorbed) X-ray flux (0.2–2 keV) at Earth. For a softer spectrum, in the range $\log{T}\sim 6.0$–6.3 K, the ECF is only about 10% higher; and for the “absorbed” flux (i.e., including the wind attenuation), the ECF is only about 10% lower, at least for the warmer temperature interval. The modeled ECF is insensitive to the assumed coronal abundances, over the values (0.2–1 solar) covered by WebPIMMS; and the detector sensitivity declined only slightly over the sixteen years between the two Arcturus observations, leading to a nearly negligible increase in the ECF.
The apparent X-ray flux – as measured at Earth, compensating for the wind absorption – from the epoch-average CR is $f_{\rm X}\sim 2.2{\times}10^{-15}$ erg cm$^{-2}$ s$^{-1}$ (0.2–2 keV) with a formal uncertainty (from the CR alone) of about ${\pm}$30%. The systematic error on the un-absorbed flux, considering that the source might be softer – $\log{T}\sim 6.0$ K compared with $\log{T}\sim 6.4$–7.0 K – is only about 10%. The corresponding wind-free X-ray luminosity, for the 11.26 pc distance of the red giant, is $L_{\rm X}\sim 3.3{\times}10^{25}$ erg s$^{-1}$; while $L_{\rm X}/L_{\rm bol}\sim 5{\times}10^{-11}$. The latter is a remarkable several orders of magnitude below the $\sim 1.5{\times}10^{-7}$ of the long-term average Sun, already close to the lowest activity tier among the G dwarfs of the solar neighborhood.
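The luminosity follows directly from the flux and distance. In this sketch the bolometric luminosity of Arcturus is an assumed round value ($\sim$170 $L_{\odot}$, a commonly quoted approximate figure), not a number taken from the paper.

```python
import math

PC = 3.0857e18     # cm per parsec
L_SUN = 3.828e33   # solar bolometric luminosity [erg/s]

f_x = 2.2e-15      # epoch-average flux at Earth [erg/cm^2/s], 0.2-2 keV
d = 11.26 * PC     # distance of Arcturus [cm]

# L_X = 4 pi d^2 f_X
L_x = 4 * math.pi * d**2 * f_x
print(f"L_X ~ {L_x:.1e} erg/s")            # ~3.3e25

# Assumed L_bol ~ 170 L_sun for the activity ratio
L_bol = 170 * L_SUN
print(f"L_X/L_bol ~ {L_x / L_bol:.0e}")    # ~5e-11
```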
The new flux values for Arcturus are about 2 times higher than those originally reported for the 2002 [*Chandra*]{} pointing by ABH. This partly is because the count rates in the new pair of 2018 pointings are about twice the earlier levels, but partly because a somewhat smaller ECF was assumed in the previous study. The apparent CR up-tick in 2018 was welcome, given the 5 times higher background levels of those pointings (during solar minimum when external cosmic rays are more able to penetrate the inner heliosphere). In any event, the very low measured X-ray luminosity of Arcturus in the total HRC-I observation still represents a stunning degree of coronal futility, although one should keep in mind that the apparent $L_{\rm X}$ could be the outcome of severe degradation by internal absorption in the extended red giant chromosphere.
DISCUSSION
==========
Early speculations concerning the fading of X-ray activity in the coronal graveyard focused on the likely absence of a solar-like Dynamo in the evolved giants, which not only are rotating slowly, but also have a significantly different internal constitution compared with dwarf stars (a hydrogen burning shell around an inert helium core in first ascent giants; which gives way to core helium burning plus the hydrogen shell source in the post-flash objects, especially those in the long-lived red giant “clump”). However, it has become clear that classical Dynamos are only one part of the complex story of late-type stellar magnetism. For example, much of the magnetic flux in the Sun’s photosphere, especially at the minimum of sunspot activity, apparently is created in the near-surface layers and recycled rather quickly (days), populating what has been called the “magnetic carpet” (Title & Schrijver 1998). The generation mechanism likely is a purely convective process, without the necessity of rotation, so can operate in any star with surface convection, a condition the red giants certainly satisfy.
In fact, Sennhauser & Berdyugina (2011) reported a possible weak, $\lesssim 1$ G, longitudinal field on Arcturus; while, later, Auri[è]{}re et al. (2015) described similar detections on dozens of red giants, including Arcturus itself. Although many of the red giants most similar to Arcturus in chromospheric activity displayed rather weak, sub-Gauss fields, it should be noted that the global longitudinal field of the Sun is only $\sim$2 G, which nevertheless is deceptively small because it represents an average over a sparse distribution of more intense, kilo-Gauss flux tubes.
The key difference between a red giant and a yellow dwarf like the Sun would be the relative vertical scale of the surface magnetic structures, compared with, say, the thickness of the high-opacity outer atmosphere. As described by ABH, the scale of the stellar outer convection zone, as a fraction of the star’s radius, is similar for a giant and a dwarf. Convectively spawned magnetic flux ropes likely would imprint at some characteristic fraction (say, a tenth) of that scale. At the same time, the density scale height for a large diameter giant of similar mass to a small diameter dwarf (e.g., Arcturus and the Sun) will be a significantly larger fraction of the stellar radius, because the radius enters squared in the scale-height relation (through the surface gravity, $g = GM/R^2$). Because the density scale height controls the thickness of the stellar chromosphere, that layer will be proportionately much thicker in the red giant.
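The argument can be quantified with the hydrostatic scale height $H = kT/(\mu m_{\rm H} g)$. The stellar parameters below (mass, radius, temperature, mean molecular weight) are rough, commonly quoted values assumed for illustration, not numbers from the paper.

```python
# Pressure scale height H = k T / (mu m_H g), with g = G M / R^2,
# so H/R ~ T R / M: the radius enters squared through the gravity.
G = 6.674e-8       # [cgs]
K_B = 1.381e-16    # Boltzmann constant [erg/K]
M_H = 1.6726e-24   # proton mass [g]
R_SUN, M_SUN = 6.957e10, 1.989e33
MU = 1.3           # assumed mean molecular weight

def scale_height_over_radius(T, M, R):
    g = G * M / R**2
    return (K_B * T / (MU * M_H * g)) / R

sun = scale_height_over_radius(6000.0, M_SUN, R_SUN)
# Arcturus: ~1.1 Msun, ~25 Rsun, Teff ~ 4300 K (approximate values)
arcturus = scale_height_over_radius(4300.0, 1.1 * M_SUN, 25 * R_SUN)
print(f"H/R: Sun ~ {sun:.1e}, Arcturus ~ {arcturus:.1e}, "
      f"ratio ~ {arcturus / sun:.0f}")
```

With these rough inputs the fractional scale height of Arcturus comes out more than an order of magnitude larger than the Sun's, consistent with the proportionately much thicker chromosphere argued above.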
We know from the Sun that the “relentlessly dynamic” chromosphere (de Pontieu et al. 2007) inspires an equally relentless forcing of the corona via hot plasma jets launched by magnetic reconnection in the kinematically stressed, tangled fields of the lower layers; especially in the narrow subduction lanes of the supergranulation pattern, where the magnetic carpet flux tubes accumulate, having been swept there by horizontal flows. Perhaps a similar mechanism operates near the base of the Arcturus chromosphere, but affected by the great thickness of that region. Most of the red giant magnetic loops might be buried well inside the chromosphere. Those intercepting, and trapping, hot gas from reconnection jets at the base of the chromosphere would suffer X-ray attenuation. However, any ballistic plasma jets threading onto open field lines in the chromosphere might burst free from that layer altogether, especially if there was an internal sustaining source of acceleration, for example MHD waves (as de Pontieu and collaborators have proposed for solar spicules). These renegade gas plumes, cooling rapidly by adiabatic expansion in the absence of confinement, might then become the source of the red giant wind.
Of course, all this is bare speculation without further quantitative insight concerning the organization of the outer atmospheres of red giants like Arcturus. Such insight might come from a future advanced X-ray observatory, capable of delivering energy resolved spectra of these intrinsically faint coronal objects. In the interim, alternative approaches could be pursued, such as further exploration of the cool absorptions on top of the red giant FUV hot lines; or possibly even orthogonal forays into other wavebands, such as the mm/sub-mm with ALMA.
One final note: Verhoelst et al. (2005) proposed – from an analysis of infrared interferometric visibilities, and consistent with an earlier suggestion based on [*Hipparcos*]{} – that Arcturus might have a close companion, possibly a subgiant of mid-G spectral type. The mass ratio would have to be very close to unity, with each component $\sim 1\,M_{\odot}$, to have two evolved stars in the same system. Given that the proposed separation of the two nearly equal-mass stars is only $0.2''$, the lack of periodic radial velocity variations is challenging to explain in the binary hypothesis, requiring a delicate tuning of the orbital configuration. In fact, low amplitude radial velocity oscillations have been recorded in Arcturus (e.g., Merline 1996), but these appear to be stochastic, more closely related to solar pressure-modes than to systematic orbital effects. The existence of a companion has not yet been confirmed by direct AO imaging at large telescopes, so remains in doubt. More significantly, solar neighborhood late-type subgiants tend to have moderate X-ray luminosities ($\gtrsim 10^{28}$ erg s$^{-1}$: Schmitt & Liefke 2004), so the apparent very low $L_{\rm X}$ of Arcturus would seem to further discount the binary hypothesis.
CONCLUSIONS
===========
[*Chandra*]{} pointings on the archetype non-coronal red giant Arcturus have secured moderately significant to very significant detections of an X-ray source at the stellar coordinates, in three epochs. Accidental X-ray objects at the two distinct locations in 2002 and 2018 (well-separated thanks to high proper motion of the bright star) are unlikely, given the lack of significant sources at the future and past positions in the respective epochs. Further, the high spatial resolution of [*Chandra*]{} naturally minimizes source confusion.
Although the apparent Arcturus X-ray source is rather faint, it nevertheless suggests that a future high-energy observatory with $\sim$100 times the contemporary [*Chandra*]{} sensitivity, and similar or better spatial resolution, could collect a diagnostically valuable spectrum in a reasonable exposure ($\sim$100 ks). Such observations could help assess the properties of possibly buried coronae in the extended outer atmospheres of red giants, and perhaps also contribute to resolving the puzzle of their enigmatic winds.
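The feasibility claim follows from simple scaling: taking the epoch-average count rate from Table 2 and assuming a steady source and a hypothetical 100-fold sensitivity gain gives comfortably more than the $\sim 10^3$ counts needed for a CCD-resolution spectrum.

```python
# Rough feasibility check for a future observatory (assumed steady source).
rate = 0.24          # measured epoch-average count rate [cnt/ks], Table 2
gain = 100           # assumed sensitivity improvement over HRC-I
exposure = 100       # exposure [ks]
counts = rate * gain * exposure
print(f"expected counts ~ {counts:.0f}")  # comfortably above ~10^3
```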
This work was supported by grant SP8-19001X from the Smithsonian Astrophysical Observatory, based on observations from the [*Chandra*]{} X-ray Observatory, collected and processed at the [*Chandra*]{} X-ray Center, operated by SAO under NASA contract. This study also made use of public databases hosted by [SIMBAD]{}, at [CDS]{}, Strasbourg, France; and of Data Release 2 from ESA’s [*Gaia*]{} mission (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (<https://www.cosmos.esa.int/web/gaia/dpac/consortium>), funded by national institutions participating in the [*Gaia*]{} Multilateral Agreement.
Auri[è]{}re, M., Konstantinova-Antova, R., Charbonnel, C., et al. 2015, A&A, 574, A90
Ayres, T. R. 2004, ApJ, 608, 957
Ayres, T. R., Brown, A., & Harper, G. M. 2003, ApJ, 598, 610 \[ABH\]
Ayres, T. R., Brown, A., Harper, G. M., et al. 1997, ApJ, 491, 876
Ayres, T. R., Fleming, T. A., & Schmitt, J. H. M. M. 1991, ApJ, 376, L45
Ayres, T. R., Linsky, J. L., Vaiana, G. S., Golub, L., & Rosner, R. 1981, ApJ, 250, 293
Bensby, T., Feltzing, S., & Oey, M. S. 2014, A&A, 562, A71
de Pontieu, B., McIntosh, S., Hansteen, V. H., et al. 2007, PASJ, 59, S655
Haisch, B., Schmitt, J. H. M. M., & Rosso, C. 1991, ApJ, 383, L15
Linsky, J. L., & Haisch, B. M. 1979, ApJ, 229, L27
Merline, W. J. 1996, Bulletin of the American Astronomical Society, 28, 28.01
Navarro, J. F., Helmi, A., & Freeman, K. C. 2004, ApJ, 601, L43
O’Gorman, E., Harper, G. M., Brown, A., Drake, S., & Richards, A. M. S. 2013, AJ, 146, 98
Ram[í]{}rez, I., & Allende Prieto, C. 2011, ApJ, 743, 135
Schmitt, J. H. M. M., & Liefke, C. 2004, A&A, 417, 651
Sennhauser, C., & Berdyugina, S. V. 2011, A&A, 529, A100
Title, A. M., & Schrijver, C. J. 1998, in Tenth Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, Eds. R. A. Donahue & J. A. Bookbinder, ASP Conf. Ser. 154, 345
Vaiana, G. S., Cassinelli, J. P., Fabbiano, G., et al. 1981, ApJ, 245, 163
Verhoelst, T., Bord[é]{}, P. J., Perrin, G., et al. 2005, A&A, 435, 289
[^1]: Unless otherwise stated, stellar properties are from SIMBAD.
[^2]: see: http://cxc.harvard.edu/csc/
[^3]: see: https://gea.esac.esa.int/archive/
[^4]: see: http://cxc.harvard.edu/toolkit/pimms.jsp
---
abstract: 'The vision of the Internet of Things is becoming a reality, and novel communication technologies such as the upcoming 5G network architecture are designed to support its full deployment. In this scenario, we discuss the benefits that a publish/subscribe protocol such as MQTT, or its recently proposed enhancement MQTT+, could bring into the picture. However, deploying pub/sub brokers with advanced caching and aggregation functionalities in a distributed fashion poses challenges in protocol design and management of communication resources. In this paper, we identify the main research challenges and possible solutions to scale up a pub/sub architecture for upcoming IoT applications in 5G networks, and we present our perspective on systems design, optimisation, and working implementations.'
author:
-
-
-
bibliography:
- 'bibfile.bib'
title: 'Towards a Scaled IoT Pub/Sub Architecture for 5G Networks: the Case of Multiaccess Edge Computing'
---
IoT, Pub/Sub, 5G, Multiaccess Edge Computing
Introduction {#sec:introduction}
============
The Internet of Things (IoT) is becoming a reality, and in the last few years we have witnessed an enormous growth of technologies designed for its wide and capillary deployment. In particular, many efforts have been made to design communication solutions adapted to the specific requirements[^1] of IoT devices. These efforts have produced a great variety of communication technologies tailored to low-power devices, ranging from short-range solutions (IEEE 802.15.4, Bluetooth Low Energy) to dedicated long-range cellular-like networks (LoRa/LoRaWAN, Sigfox, Ingenu). Similarly, traditional mobile cellular networks have been adapted to the machine-type communication typical of the IoT, and solutions like LTE-M or NB-IoT are already available from cellular operators [@cesana2017iot].
Moreover, the advent of the 5th Generation (5G) of mobile cellular networks is expected to tremendously boost the development and implementation of large-scale, city-wide IoT applications. In this respect, two main 5G innovation pillars have been designed precisely to accommodate IoT requirements: massive Machine Type Communication (mMTC) and Multi-Access Edge Computing (MEC). The former will enable connection densities in the order of 10$^6$ low-power devices per square kilometre, while the latter will enable (serverless) distributed computing at the edge of the network, opening it to applications and services from third parties.
In the MEC scenario, we argue that publish/subscribe protocol standards such as MQTT and MQTT+, together with distributed orchestration of brokers, will facilitate the development of large-scale interconnected IoT systems. Referring in particular to the all-IP 5G network infrastructure, our discussion concentrates on the higher layers of the TCP/IP stack. At the transport layer, although the matter is debatable [@gomez_tcp_2018], solutions based on (lower-overhead) UDP seem to suit the IoT scenario better than TCP. At the application layer, two main communication paradigms are available: Representational State Transfer (REST) and publish/subscribe. HTTP (or its lightweight version COAP [@bormann2012coap]) and MQTT (or its enhanced version MQTT+ [@giambona2018mqtt+]) are excellent examples of the two approaches: although both have reached a certain popularity, it is still unclear which one will become the preferred and widely adopted solution in the IoT world.
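To make the pub/sub paradigm concrete, a minimal in-process broker can be sketched with MQTT-style topic filters (`+` matches exactly one level, `#` all remaining levels). This illustrates the pattern only, not the MQTT wire protocol or any particular broker implementation.

```python
class Broker:
    """Tiny in-memory publish/subscribe broker with MQTT-style
    topic filters: '+' matches one level, '#' all remaining levels."""

    def __init__(self):
        self._subs = []  # list of (topic_filter, callback)

    @staticmethod
    def _matches(filt, topic):
        f, t = filt.split('/'), topic.split('/')
        for i, level in enumerate(f):
            if level == '#':          # multi-level wildcard (last level)
                return True
            if i >= len(t) or (level != '+' and level != t[i]):
                return False
        return len(f) == len(t)

    def subscribe(self, topic_filter, callback):
        self._subs.append((topic_filter, callback))

    def publish(self, topic, payload):
        # Decoupled delivery: publisher knows nothing about subscribers
        for topic_filter, callback in self._subs:
            if self._matches(topic_filter, topic):
                callback(topic, payload)

broker = Broker()
received = []
broker.subscribe('sensors/+/temperature', lambda t, p: received.append((t, p)))
broker.publish('sensors/room1/temperature', 21.5)   # delivered
broker.publish('sensors/room1/humidity', 40)        # filtered out
print(received)  # [('sensors/room1/temperature', 21.5)]
```

The decoupling shown here (publishers and subscribers meet only at the broker) is what makes the paradigm attractive for distributed MEC deployments, where brokers can also cache and aggregate messages.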
In this paper, we draw a picture of the future 5G-enabled IoT particularly focusing on the application layer, while reconciling several architectural and protocol related concepts, and identifying operational meeting points among different research areas. In the particular case of Multi-access Edge Computing, we take a position in favour of publish/subscribe approaches at the application layer and propose MQTT-based approaches as candidates for becoming preferred solutions. We identify the main challenges of this vision and propose possible solutions considering past, present and future approaches.
MEC: enabling IoT in 5G networks
================================
Multi-access Edge Computing (MEC) is identified as one of the key technologies required to support low-latency services in future 5G networks. The main idea is to bring the computational power, storage resources and services typical of today's cloud infrastructures to the edge of the network, in close proximity to the users. This greatly reduces latency, as well as the amount of traffic to be managed by the core network. MEC use cases include computation offloading, distributed content delivery and caching, web performance enhancements and, of course, IoT applications.
Regarding the latter, MEC technologies are envisioned to work as IoT gateways, facilitating the management of data in close proximity to their sources, providing computational and storage resources as well as processing, aggregation and filtering functionalities [@salman2015edge; @porambage2018survey]. MEC platforms will be offered and deployed by the network operator at multiple locations (e.g., at the base stations, at cell aggregation sites or at multi-RAT aggregation points), and will also be made open to authorised third parties such as application developers and content providers [@mach19mobile]. Motivated by traffic off-loading and considerable reductions in latency, the major cloud service providers have very recently started working on edge solutions to move part of their services closer to the final users: Amazon AWS Greengrass/Lambda, Google IoT Edge, IBM Watson Edge Analytics and Microsoft Azure IoT Edge can be seen as efforts of such companies to prepare products for the upcoming MEC-based 5G network architecture.
Figure \[fig:architecture\] briefly illustrates the scenario: IoT devices may be served with different types of connectivity (including WiFi) by the 5G base stations, and communicate through IP and TCP or UDP with the MEC servers and with the Internet, where traditional cloud services are located. As in legacy LTE mobile networks, different base stations may be directly connected to each other through X2 interfaces, facilitating tasks such as device handovers. At the application layer, we observe a growing dichotomy between RESTful and pub/sub approaches. Indeed, the aforementioned four major edge computing services offer either one or both approaches for connecting IoT devices: Amazon, Google and IBM offer HTTP/HTTPS and MQTT interfaces, while Microsoft Azure IoT Edge supports only pub/sub protocols (MQTT or AMQP).
![IoT scenario supported by MEC-enabled 5G architecture[]{data-label="fig:architecture"}](architecture){width="0.8\columnwidth"}
REST or pub/sub in IoT? CoAP vs MQTT {#sec:MQTT+}
====================================
CoAP vs MQTT
------------
HTTP is the most popular application layer protocol in the Internet ecosystem, and does its job efficiently. Therefore, when designing the application layer of the Internet of Things, researchers tried to adapt the REST approach of HTTP to resource-constrained devices. These efforts resulted in the Constrained Application Protocol (CoAP), standardised in 2014 by the IETF. CoAP, based on UDP, provides the same set of primitives as HTTP (GET, PUT, POST, DELETE) with reduced complexity. CoAP-enabled IoT devices hold data and measurements in the form of resources, identified by URIs, and act as servers. Clients interested in such measurements access them in a standard request-response fashion. However, such a pull-based approach does not fit well with the majority of IoT application scenarios, where devices perform measurements autonomously and transmit them to a central collection point. To overcome this issue and avoid having the collection point continuously poll a resource, CoAP provides an observation mode: a client (the central collection point) registers its interest in a resource on an IoT device and gets notified each time its state changes. Although CoAP is a reference protocol for low-power devices and implementations are available for several programming languages, to date it is not taken into consideration by any of the major cloud and edge platform players[^2]. Therefore, we do not expect it to be used in at least the first rollout of 5G MEC solutions.
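The observation mode can be summarised with a minimal sketch, using plain Python to stand in for a real CoAP stack; `ObservableResource` and its methods are our own names, chosen to mirror the RFC 7641 semantics rather than any particular library API:

```python
class ObservableResource:
    """Toy model of a CoAP resource with the observe option (RFC 7641):
    interested clients register once and are pushed state changes,
    instead of polling the resource with repeated GETs."""
    def __init__(self, uri):
        self.uri = uri
        self.value = None
        self._observers = []

    def observe(self, callback):          # a GET carrying the Observe option
        self._observers.append(callback)

    def update(self, value):              # the sensor takes a new measurement
        self.value = value
        for notify in self._observers:    # server pushes notifications
            notify(self.uri, value)

received = []
sensor = ObservableResource("coap://node1/temperature")
sensor.observe(lambda uri, v: received.append((uri, v)))   # collection point
sensor.update(21.5)
sensor.update(22.0)
# received == [("coap://node1/temperature", 21.5), ("coap://node1/temperature", 22.0)]
```

The collection point touches the network only once, at registration time; every subsequent message is initiated by the device, which matches the push-oriented traffic pattern of most IoT deployments.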
Conversely, MQTT is enjoying its greatest period of popularity since its proposal in 1999. Standardised by OASIS in 2014, this lightweight publish/subscribe protocol has practically become the de-facto standard in M2M and IoT applications. As a matter of fact, all major cloud platforms (e.g., Amazon AWS, Microsoft Azure, IBM Watson) expose their IoT services through MQTT. The reasons for such popularity lie in MQTT’s remarkable simplicity at the client side, which nicely fits resource-constrained applications while still supporting reliability and several degrees of quality of service (QoS). MQTT is based on the publish/subscribe communication pattern, and all communication between nodes is mediated by a broker. The broker accepts messages published by devices on specific topics and forwards them to the clients subscribed to those topics, ultimately controlling all aspects of communication between devices.
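The broker's dispatch hinges on topic filters with single-level (`+`) and multi-level (`#`) wildcards, as defined in the MQTT specification. A minimal sketch of the matching rule (the function name is ours) is:

```python
def topic_matches(sub, topic):
    """MQTT topic-filter matching: '+' matches exactly one level,
    '#' matches the remainder of the topic (and must come last)."""
    sub_parts, topic_parts = sub.split("/"), topic.split("/")
    for i, sp in enumerate(sub_parts):
        if sp == "#":                       # multi-level wildcard
            return True
        if i >= len(topic_parts):           # filter longer than topic
            return False
        if sp != "+" and sp != topic_parts[i]:
            return False
    return len(sub_parts) == len(topic_parts)

# A broker forwards a publication to every subscriber whose filter matches:
assert topic_matches("sensors/+/temp", "sensors/room1/temp")
assert topic_matches("sensors/#", "sensors/room1/temp/raw")
assert not topic_matches("sensors/+", "sensors/room1/temp")
```

This level-by-level comparison is all the routing logic a subscriber-facing broker needs for plain MQTT; the QoS machinery sits in the delivery path, not in the matching rule.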
MQTT+
-----
Recently, the MQTT protocol has received a lot of attention from the research community. In particular, we mention here MQTT+ [@giambona2018mqtt+], a version that fits nicely with the 5G/MEC scenario under consideration. An MQTT+ broker provides all the functionalities of legacy MQTT, but can also perform advanced operations such as spatio-temporal data aggregation, filtering and processing. Such operations are triggered by specific topics, as described below:
- [**Data filtering:** MQTT+ allows a client to perform a rule-based subscription using ad-hoc prefix operators. As an example, a client subscribing to `$GT;value/topic/` will receive only messages published on `topic` that contain a value greater than `value`. Other comparison operators are defined, such as lower than (`$LT`), equals or not equals (`$EQ`/`$NEQ`) and contains (`$CONTAINS`)]{}.
- [**Temporal aggregation:** MQTT+ allows a client to subscribe to certain temporal aggregation functions of a topic, using the format `$<TIME><OP>/topic`, with `OP` = `{COUNT,SUM,AVG,MIN,MAX}` and `TIME` = `{DAILY,HOURLY,QUARTERHOURLY}`. This allows a client to obtain e.g., the daily count of messages on a certain topic. The MQTT+ broker handles all operations internally by caching values and computing aggregates for specific time intervals.]{}
- [**Spatial aggregation:** MQTT+ provides a client with the possibility to subscribe to multiple topics at once by using a single-level (+) or multi-level (\#) wildcard. However, a client may be interested in aggregating such topics at once: MQTT+ allows for this possibility. A client may subscribe to `$<OP>/topic/`, where `OP` can assume the same values defined for temporal aggregation and `/topic/` contains one or more wildcards. By doing this a client can obtain, e.g., the average or the sum of the values published by different sensors.]{}
- [**Data processing:** Beside simple temporal and spatial aggregation, MQTT+ allows a client to subscribe to processing operations executed by the broker on multimedia data (audio, images and video) published by sensors. The broker advertises its processing capabilities under a special topic (e.g., `$SYS/capabilities/`). Specific operators (such as the `$CNTPPL` prefix to count people) trigger the broker to run specific algorithms and to return the result to the subscriber. Clients may use such capabilities to obtain processed information from the raw data, avoiding the need to perform processing themselves. As an example, a client may subscribe to the `$CNTPPL/camera_id` to obtain the number of people contained in the images published on the `camera_id` topic. When the MQTT+ broker is implemented on a MEC server run by one of the major cloud operators, such advanced capabilities may be provided by one of the existing cloud processing tools (e.g., Amazon Rekognition, Google Vision).]{}
- [**Composite subscriptions:** One of the strengths of MQTT+ is the capability of allowing composite subscriptions by properly chaining the operators introduced so far, thus enabling even more advanced functions. Indeed, MQTT+ supports spatio-temporal aggregations, spatio-temporal aggregation of processed data and even rule-based spatio-temporal aggregation. To give a concrete example, a subscription to `$DAILYAVG$CNTPPL/camera_id` triggers the broker to count the number of people contained in all images published on the `camera_id` topic, returning to the subscriber its daily average.]{}
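As a concrete illustration of the rule-based filtering syntax above (this is our own toy sketch, not the MQTT+ implementation; dispatch is exact-match only, with wildcards and the aggregation operators omitted for brevity):

```python
import operator

# Comparison prefixes from the MQTT+ rule-based subscription syntax.
OPS = {"$GT": operator.gt, "$LT": operator.lt,
       "$EQ": operator.eq, "$NEQ": operator.ne}

def parse_subscription(sub):
    """Split a subscription like '$GT;25/home/temp' into a value
    predicate and the plain topic it applies to."""
    if sub.startswith("$"):
        head, _, topic = sub.partition("/")
        op_name, _, threshold = head.partition(";")
        if op_name in OPS:
            th = float(threshold)
            return (lambda v: OPS[op_name](v, th)), topic
    return (lambda v: True), sub        # plain MQTT subscription

class MQTTPlusBroker:
    """Toy broker: stores (predicate, topic, callback) per subscriber."""
    def __init__(self):
        self.subs = []

    def subscribe(self, sub, callback):
        pred, topic = parse_subscription(sub)
        self.subs.append((pred, topic, callback))

    def publish(self, topic, value):
        for pred, t, cb in self.subs:
            if t == topic and pred(value):
                cb(topic, value)        # forward only if the rule holds

broker = MQTTPlusBroker()
hot = []
broker.subscribe("$GT;25/home/temp", lambda t, v: hot.append(v))
for v in (21.0, 26.5, 30.2, 24.9):
    broker.publish("home/temp", v)
# hot is now [26.5, 30.2]: sub-threshold values never left the broker
```

The point of pushing the predicate into the broker is that filtered-out publications consume no downlink bandwidth and no subscriber CPU, which is precisely the resource trade-off MEC deployments care about.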
In the next section we discuss the research challenges of operating an MQTT+ broker in a MEC server, in the context of a 5G network.
MQTT+ on MEC: research challenges
=================================
![IoT use case: vehicle sharing[]{data-label="fig:architecture_2"}](architecture_2){width="0.8\columnwidth"}
We take as reference the example use case of a vehicle data sharing system (cars or bicycles) implemented in a 5G-enabled smart city, similar to those already deployed in many cities worldwide. An architectural sketch of the system is depicted in Figure \[fig:architecture\_2\]: shared vehicles are information producers and periodically publish to the system meaningful information such as their location, service status and data retrieved from a plethora of sensors (both vehicle-related and environment-related). Such information is received by an MQTT+ broker installed on a MEC server at the closest base station (or multi-RAT aggregation point) and forwarded to two types of data subscribers: local and global. Local subscribers are data consumers with low-latency requirements, located in close proximity to the producers (e.g., a person searching for the closest vehicle), or which may even coincide with the producers themselves. The latter case covers all those applications of MEC-enabled offloading, such as intensive image/video processing to be performed on multimedia streams coming from in-car cameras (e.g., augmented reality). This type of processing could be enabled by the advanced broker functionalities provided by MQTT+: in the example shown in Figure \[fig:architecture\_2\], a connected car publishes a video from one of its camera sensors on the topic `car_id/video` and subscribes to an advanced service (e.g., augmented reality) on the local MEC broker using the MQTT+ syntax `$PROCESS/car_id/video`. Conversely, global subscribers are consumers located far away from the producers, such as generic users, operational control points, traffic information services and so on. Note that such global subscribers may be more interested in aggregated information than in raw data (e.g., the number of cars flowing through an intersection every 5 minutes), again motivating the adoption of an advanced protocol such as MQTT+.
As an example, the car sharing management server in Figure \[fig:architecture\_2\] subscribes to the advanced topic `$COUNT$EQ;int_id/+/location` to directly obtain the count of all cars passing through a particular intersection `int_id`. This scenario shares many similarities with the work presented in [@manzoni_proposal_2017], where *content islands* of things operated through a pub/sub architecture were organised using local and global topics to differentiate how publications are managed. In this work we observe that the presence of local and global subscribers, possibly in a mobile scenario, imposes several different requirements on the pub/sub architecture, resulting in the research challenges listed in the following.
Automatic broker discovery
--------------------------
MQTT, and by inheritance MQTT+, requires client devices to know the IP address of the broker (or of the load balancer, in the case of clustered brokers) in order to connect to it. In a mobile scenario, where client devices move from the area covered by one MEC server to another, it is important to establish automatic and dynamic procedures for disseminating the broker IP address to the clients. Solutions such as Zeroconf [@steinberg2005zero] may be adapted to facilitate this task.
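A stripped-down version of such a discovery exchange, using plain UDP on the loopback interface in place of Zeroconf/mDNS (the port number, message strings and broker address below are all illustrative), looks as follows:

```python
import socket
import threading

DISCOVERY_PORT = 18830      # illustrative port choice, not a registered one

def discovery_responder(broker_host, broker_port, ready, stop):
    """MEC-side responder: answers DISCOVER probes with the broker address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", DISCOVERY_PORT))
    sock.settimeout(0.2)
    ready.set()
    while not stop.is_set():
        try:
            data, addr = sock.recvfrom(64)
        except socket.timeout:
            continue
        if data == b"DISCOVER_MQTT_BROKER":
            sock.sendto(f"{broker_host}:{broker_port}".encode(), addr)
    sock.close()

def discover_broker(timeout=2.0):
    """Client-side probe; a real deployment would use a broadcast or an
    mDNS/Zeroconf query instead of targeting localhost."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(b"DISCOVER_MQTT_BROKER", ("127.0.0.1", DISCOVERY_PORT))
    reply, _ = sock.recvfrom(64)
    sock.close()
    host, port = reply.decode().rsplit(":", 1)
    return host, int(port)

ready, stop = threading.Event(), threading.Event()
t = threading.Thread(target=discovery_responder,
                     args=("10.0.0.5", 1883, ready, stop))
t.start()
ready.wait(2.0)
broker_addr = discover_broker()
stop.set()
t.join()
```

In the mobile case the client simply re-probes after a handover; the responder on the new MEC server answers with the local broker, so no client-side configuration ever needs to change.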
Broker vertical clustering
--------------------------
A general issue with MQTT is the central role of the broker. Indeed, if not dimensioned properly, the broker can become a bottleneck and a single point of failure, leaving the local client devices unable to communicate with the network. This problem is even more important for MQTT+ brokers, which require additional resources for data aggregation, filtering and processing. Recent MQTT implementations can be vertically clustered (i.e., implemented on several virtual machines on the same physical hardware) to provide a degree of resilience against broker failures or overloads. Often, a load balancer is used as a single point of entry for all communications: this creates a single logical broker from the perspective of clients and provides a degree of reliability and vertical scalability [@jutadhamakorn2017scalable; @sen2018highly]. However, such setups are either static and cumbersome to deploy, or dynamically implement autoscaling in a central cloud service [@gascon2015dynamoth]. Given the limited amount of resources that will be available at the MEC, solutions where additional brokers may be dynamically created or shut down according to the local load conditions become of primary importance, as does the development of accurate prediction models for resource provisioning [@tabatabai2017managing]. For MQTT+, this means developing optimisation and design techniques for accepting or rejecting a subscription to an intensive data processing task, or for moving the corresponding computation elsewhere (e.g., from the MEC to the cloud).
Broker distribution and horizontal clustering
---------------------------------------------
In the scenario depicted by the upcoming 5G architecture, multiple MEC servers, rather than a single central cloud-based server, are deployed in close proximity to the final users. Interconnecting such nodes together is crucial for realising the vision of edge computing, with clear benefits in terms of end-to-end latency and use of network resources compared to a cloud-based approach. The interconnections between brokers installed on the MEC servers can be realised either with virtual links based on the S1 interface through the core network, or by exploiting the X2 interfaces directly connecting different base stations. In both cases, the main challenge is how to efficiently distribute subscriptions and publications from one broker to the others, ultimately interconnecting local publishers with global subscribers. The problem is known as distributed event routing, and has received a lot of attention in the past for generic publish/subscribe architectures [@baldoni2005distributed; @martins2010routing]. However, to the best of our knowledge, no off-the-shelf solutions are ready to be used for interconnecting MQTT and MQTT+ brokers in a distributed fashion: in the next section we propose three different alternatives for solving this problem.
Approaches for distributing brokers
===================================
Static Broker Bridging
----------------------
A naive solution to the problem may be the use of bridging, a functionality already present in some MQTT broker implementations (e.g., Mosquitto[^3] and HiveMQ[^4]), which allows a broker B to connect to a broker A as a standard client and subscribe to some or all messages published on A. Vice versa, A is subscribed to and receives messages published on B. Despite its simplicity, this option has several drawbacks. First, to avoid message looping (A publishes a message on B, which in turn forwards it back to A), this method requires specific prefixes to be added to topic descriptions on each broker. Second, the mechanism is static in nature and does not address mobility or changing resource availability, although some recent work has proposed dynamic bridging tables [@rausch2018emma]. Third, topic bridging is basically equivalent to event flooding in a distributed pub/sub system, a solution which is known not to scale well in large-scale distributed scenarios [@baldoni2005distributed].
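The loop problem and the prefix-based remedy can be illustrated with a toy sketch (ours, not taken from any of the broker implementations above):

```python
class Broker:
    """Toy MQTT broker that delivers publications locally and forwards
    them over static bridges, one hop only."""
    def __init__(self, name):
        self.name = name
        self.log = []                # (topic, payload) delivered locally
        self.bridges = []

    def bridge_to(self, other):      # bidirectional bridge, Mosquitto-style
        self.bridges.append(other)
        other.bridges.append(self)

    def publish(self, topic, payload, via_bridge=False):
        self.log.append((topic, payload))
        if via_bridge:
            return                   # never re-forward a bridged message
        for peer in self.bridges:
            # The prefix records the origin broker; together with the
            # via_bridge flag it prevents the A -> B -> A loop.
            peer.publish(f"bridge/{self.name}/{topic}", payload,
                         via_bridge=True)

a, b = Broker("A"), Broker("B")
a.bridge_to(b)
a.publish("sensors/temp", 21.5)
# a.log == [("sensors/temp", 21.5)]
# b.log == [("bridge/A/sensors/temp", 21.5)], and nothing bounced back to A
```

The sketch also makes the flooding objection visible: with more than two brokers, every publication is copied to every bridge regardless of whether any remote subscriber wants it.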
Selective Event Routing
-----------------------
More efficient solutions may stem from the work on event routing in distributed pub/sub systems. In particular, rendezvous-based event routing has the potential to solve the scalability issues arising from massive IoT applications. In a rendezvous-based system, publishers and subscribers meet each other at specific nodes in the network, known as rendezvous nodes (RNs), which are organised in an overlay network topology. Each RN is responsible for (i) storing the subscriptions to a specific topic or subset of topics and (ii) routing any incoming publication to the RN in charge of its topic (either directly or through some aggregation function). Subscriptions and publications therefore meet at the RN to which both are mapped. The mapping between topics and RNs is generally performed through hashing functions, which can also be used to balance the load of subscription storage and maintenance [@martins2010routing]. While promising, such an approach has two drawbacks: (i) the subscription language is limited by the chosen mapping between subscriptions and RNs, and (ii) mobility of publishers is not well managed by a fixed allocation of subscriptions to RNs [@baldoni2005distributed]. Note that, in principle, any MQTT+ broker can host rendezvous functionalities. An interesting research problem is therefore to select which MQTT+ brokers are most suitable to become RNs, based on specific objective functions such as minimising latency.
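One standard way to realise the topic-to-RN mapping is highest-random-weight (rendezvous) hashing; the sketch below is illustrative (broker names and topic are made up), not a prescription from the cited works:

```python
import hashlib

def rendezvous_node(topic, nodes):
    """Highest-random-weight (rendezvous) hashing: every broker computes
    the same winner for a topic, so subscriptions and publications for
    that topic meet at a single RN without any coordination."""
    def weight(node):
        digest = hashlib.sha256(f"{node}|{topic}".encode()).hexdigest()
        return int(digest, 16)
    return max(nodes, key=weight)

nodes = ["mec-broker-1", "mec-broker-2", "mec-broker-3"]
rn = rendezvous_node("city/intersection42/count", nodes)

# Removing a broker other than the winner leaves the mapping unchanged,
# which limits subscription re-registration when MEC nodes come and go.
loser = next(n for n in nodes if n != rn)
survivors = [n for n in nodes if n != loser]
assert rendezvous_node("city/intersection42/count", survivors) == rn
```

The drawback (i) noted above is visible here: because the hash takes the whole topic string as an opaque key, wildcard or rule-based subscriptions do not map cleanly onto a single RN.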
ICN-based approach
------------------
As explained before, MQTT+ provides caching of aggregated IoT measurements coming from end devices. This functionality is useful to deliver essential information to upper-layer services while reducing the amount of data to be managed, ultimately trading off fine-grained data locations (i.e., the IP address or topic name of each publishing device) for data content (e.g., an aggregation function over the published data). This observation naturally brings into the game the concept of Information Centric Networking (ICN), a clean-slate networking paradigm which considers information as the new waist of the Internet communication model. In ICN the focus of the communication becomes *what* is communicated instead of *where* it is located, i.e., ICN focuses on naming rather than on addressing[^5]. Among the different realisations of the ICN concept, the one which fits best the reference scenario of this work is POINT [@trossen_ip_2015], which offers a convenient ICN implementation framework based on a publish/subscribe architecture. The architecture of POINT relies on three complementary network functions: (1) the Topology Manager (TM), which calculates the delivery tree in a one-to-many communication pattern; (2) the Rendezvous Function (RF), which provides the directory and binding service matching up publishers and subscribers (similar to rendezvous-based routing); and (3) the Forwarding function, which allows the efficient dissemination of information through the use of Forwarding Nodes (FNs). FNs are bespoke devices that live alongside the routers in an overlay topology and provide specific network services such as in-network data aggregation, redundancy elimination and smart caching. As mentioned in Section \[sec:MQTT+\], MQTT+ brokers have the right characteristics to act as an interface to a POINT name-based ICN world.
The translation function between the IP-based world and ICN is implemented in a Network Attachment Point (NAP), which is also in charge of synchronising all the MQTT+ brokers distributed in different parts of the network. As shown in Fig. \[fig:icn-architecture\], we expect islands of things to be interconnected, and consequently information and services to live in different parts of the network. The ICN functions will then be in charge of efficiently disseminating, across the network, the publications and subscriptions to and from the edge. Notice that things are oblivious to the location of the service, since the NAP on the MQTT+ broker takes care of scoping each message appropriately.
![ICN-based routing: the NAP implemented on MQTT+ brokers translates between IP addresses and ICN names[]{data-label="fig:icn-architecture"}](icn-architecture){width="\columnwidth"}
Conclusion
==========
We have discussed the challenges associated with the use of a pub/sub protocol such as MQTT+ as an enabler of IoT applications in future MEC-enabled 5G networks, and identified possible future research directions to focus on. Broadening the scope of IoT information dissemination via broker distribution is a key challenge that deserves close attention: it requires the exploration of novel directions in the implementation of data managers at the edge. The orchestration strategies for distributing these managers over the Internet will also pose further challenges in resource allocation and load balancing. Finally, exploring alternative approaches such as Information Centric Networking for the efficient dissemination of information among brokers promises communication better tailored to many-to-many patterns.
Acknowledgment {#acknowledgment .unnumbered}
==============
Andrés Arcia-Moret is under the support of Grant RG90413 NRAG/527.
[^1]: in primis, the power consumption
[^2]: To the best of our knowledge, the only IoT-related initiative based on CoAP is the Open Connectivity Foundation (OCF), which mostly targets the smart home scenario and short-range communication technologies rather than cellular ones.
[^3]: https://mosquitto.org
[^4]: http://www.hivemq.com
[^5]: See Shoch [@shoch_note_1978] for further elaboration on this difference
---
abstract: 'Ekedahl, Lando, Shapiro, and Vainshtein announced a remarkable formula ([@elsv]) expressing Hurwitz numbers (counting covers of ${\mathbb P}^1$ with specified simple branch points, and specified branching over one other point) in terms of Hodge integrals. We give a proof of this formula using virtual localization on the moduli space of stable maps, and describe how the proof could be simplified by the proper algebro-geometric definition of a “relative space”.'
address:
- 'Dept. of Mathematics, Harvard University, Cambridge MA 02138'
- 'Dept. of Mathematics, MIT, Cambridge MA 02139'
author:
- Tom Graber
- Ravi Vakil
date: 'February 29, 2000.'
title: 'Hodge integrals and Hurwitz numbers via virtual localization [^1]'
---
[^2]
Introduction
============
Hurwitz numbers count certain covers of the projective line (or, equivalently, factorizations of permutations into transpositions). They have been studied extensively since the time of Hurwitz, and have recently been the subject of renewed interest in physics ([@ct]), combinatorics ([@d], [@a], and the series starting with [@gj0]), algebraic geometry (recursions from Gromov-Witten theory, often conjectural), and symplectic geometry (e.g. [@lzz]).
Ekedahl, Lando, Shapiro and Vainshtein have announced a remarkable formula ([@elsv] Theorem 1.1; Theorem \[biggie\] below) linking Hurwitz numbers to Hodge integrals in a particularly elegant way.
We prove Theorem \[biggie\] using virtual localization on the moduli space of stable maps, developed in [@gp]. In the simplest case, no complications arise, and Theorem \[biggie\] comes out immediately; Fantechi and Pandharipande proved this case independently ([@fp] Theorem 2), and their approach inspired ours.
We have chosen to present this proof because the formula of Ekedahl et al is very powerful (see Sections \[mushroom\] and \[celery\] for applications), and the program they propose seems potentially very difficult to complete (e.g. [@elsv] Prop. 2.2, where they require a compactification of the space of branched covers, with specified branching at infinity, which is a bundle over ${{\overline{{{\mathcal{M}}}}}}_{g,n}$, such that the branch map extends to the compactification).
In Section \[relative\], we show that the proof would be much simpler if there were a moduli space for “relative maps” in the algebraic category (with a good two-term obstruction theory, virtual fundamental class, and hence virtual localization formula). A space with some of these qualities already exists in the symplectic category (see [@lr] Section 7 and [@ip] for discussion). In the algebraic case, not much is known, although Gathmann has obtained striking results in genus 0 ([@g]).
We are grateful to Rahul Pandharipande, David M. Jackson, and Michael Shapiro for helpful conversations.
Definitions and statement
=========================
\[avocado\] Throughout, we work over ${\mathbb{C}}$, and we use the following notation. Fix a genus $g$, a degree $d$, and a partition $({\alpha}_1,\ldots,{\alpha}_m)$ of $d$ with $m$ parts. Let $b=2d+2g-2$, the “expected number of branch points of a degree $d$ genus $g$ cover of ${\mathbb P}^1$” by the Riemann-Hurwitz formula. We will identify ${\operatorname{Sym}}^b {\mathbb P}^1$ with ${\mathbb P}^b$ throughout. Let $r = d+m+2(g-1)$, so a branched cover of ${\mathbb P}^1$, with monodromy above $\infty$ given by ${\alpha}$, and $r$ other specified simple branch points (and no other branching) has genus $g$. Let $k = \sum_i ({\alpha}_i-1)$, so $r=b-k$. Let $H^g_{{\alpha}}$ be the number of such branched covers that are connected. (We do not take the points over $\infty$ to be labelled.)
\[biggie\] [ *Suppose $g$, $m$ are integers ($g \geq 0$, $m \geq 1$) such that $2g-2+m>0$ (i.e. the functor ${{\overline{{{\mathcal{M}}}}}}_{g,m}$ is represented by a Deligne-Mumford stack). Then $$H^g_{\alpha}= \frac {r!} { \# {\operatorname{Aut}}({\alpha})}
\prod_{i=1}^m \frac {{{\alpha}_i}^{{\alpha}_i}} {{\alpha}_i!}
\int_{{{\overline{{{\mathcal{M}}}}}}_{g,m}} \frac { 1-{\lambda}_1 + \dots \pm {\lambda}_g} {\prod (1-{\alpha}_i \psi_i)}$$ where ${\lambda}_i=c_i({\mathbb{E}})$ (${\mathbb{E}}$ is the Hodge bundle).*]{}
Fantechi and Pandharipande’s argument applies in the case where there is no ramification above $\infty$, i.e. ${\alpha}= (1^d)$.
The reader may check that a variation of our method also shows that $$H^0_{{\alpha}_1} = r! \frac {d^{d-2}} {d!}, \; \; \; H^0_{{\alpha}_1,{\alpha}_2} = \frac {r!}
{\# {\operatorname{Aut}}({\alpha}_1, {\alpha}_2)} \cdot \frac {{\alpha}_1^{{\alpha}_1}} {{\alpha}_1!} \cdot
\frac {{\alpha}_2^{{\alpha}_2}} {{\alpha}_2!} \cdot d^{-1}.$$ As these formulas are known by other means ([@d] for the first, [@a] for the second, [@gj0] for both), we omit the proof.
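For the reader's convenience, these values can be checked by brute force under the standard monodromy interpretation of Hurwitz numbers: tuples of $r$ transpositions whose product has cycle type ${\alpha}$ and which generate a transitive subgroup of $S_d$ (transitivity selects connected covers), counted with weight $1/d!$. The following sketch, with our own function names, confirms the formulas for small $d$:

```python
from fractions import Fraction
from itertools import combinations, product
from math import factorial

def compose(p, q):                       # (p o q)(i) = p[q[i]]
    return tuple(p[j] for j in q)

def transpositions(d):
    ts = []
    for i, j in combinations(range(d), 2):
        t = list(range(d)); t[i], t[j] = j, i
        ts.append(tuple(t))
    return ts

def cycle_type(p):
    seen, ct = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, n = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; n += 1
            ct.append(n)
    return tuple(sorted(ct, reverse=True))

def is_transitive(perms, d):
    orbit, frontier = {0}, [0]
    while frontier:
        x = frontier.pop()
        for p in perms:
            if p[x] not in orbit:
                orbit.add(p[x]); frontier.append(p[x])
    return len(orbit) == d

def hurwitz(d, alpha, r):
    """Count r-tuples of transpositions whose product has cycle type
    alpha and which generate a transitive subgroup of S_d, with weight 1/d!."""
    target = tuple(sorted(alpha, reverse=True))
    count = 0
    for taus in product(transpositions(d), repeat=r):
        sigma = tuple(range(d))
        for t in taus:
            sigma = compose(t, sigma)
        if cycle_type(sigma) == target and is_transitive(taus, d):
            count += 1
    return Fraction(count, factorial(d))

# Genus-0 checks (r = d + m + 2(g-1)):
assert hurwitz(3, (3,), 2) == 1      # r! d^{d-2}/d! = 2*3/6
assert hurwitz(4, (4,), 3) == 4      # 6*16/24
assert hurwitz(3, (2, 1), 3) == 4    # r! * (2^2/2!) * (1^1/1!) * (1/d) = 6*2/3
```

The exponential growth in $r$ limits this check to small cases, but those already suffice to pin down the coefficients of low-degree symmetric polynomials in the sense of the remark below.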
\[mushroom\] (i) Theorem \[biggie\] provides a way of computing all Hodge integrals as follows. Define $$\langle {\alpha}_1, \dots, {\alpha}_m \rangle := \int_{{{\overline{{{\mathcal{M}}}}}}_{g,m}}
\frac { 1-{\lambda}_1 + \dots \pm {\lambda}_g} {\prod (1-{\alpha}_i \psi_i)},$$ a symmetric polynomial in the ${\alpha}_i$ of degree $3g-3+m$ whose coefficients are of the form $\int_{{{\overline{{{\mathcal{M}}}}}}_{g,m}} \psi_1^{d_1} \dots
\psi_m^{d_m} {\lambda}_k$. It is straightforward to recover the coefficients of a symmetric polynomial in $m$ variables of known degree from a finite number of values, and $\langle {\alpha}_1, \dots, {\alpha}_m \rangle$ can easily be computed (as Hurwitz numbers are combinatorial objects that are easily computable, see Section \[pea\]). Once these integrals are known, all remaining Hodge integrals (i.e. with more ${\lambda}$-classes) can be computed in the usual way ([@m]). The only other methods known to us are Kontsevich’s theorem, formerly Witten’s conjecture, [@ko], which has no known algebraic proof, and methods of Faber and Pandharipande (making clever use of virtual localization, [@p]). These methods of computation are in keeping with an extension of Mumford’s philosophy, which is that much of the cohomology of ${{\overline{{{\mathcal{M}}}}}}_{g,n}$ is essentially combinatorial.
\(ii) Combinatorially straightforward relations among Hurwitz numbers (e.g. “cut-and-join”, see [@gj0] Section 2) yield nontrivial new identities among Hodge integrals.
\[celery\] There has been much work on the structure of the Hurwitz numbers, including various predictions from physics. Theorem \[biggie\] is the key step in a machine to verify these structures and predictions, see [@gjv].
Background: Maps of curves to curves
====================================
\[chickpea\] Following [@g01pn] Section 4.2, define a [*special locus*]{} of a map $f: X \rightarrow {\mathbb P}^1$ (where $X$ is a nodal curve) as a connected component of the locus in $X$ where $f$ is not [étale ]{}. (Remark: No result in this section requires the target to be ${\mathbb P}^1$.) Then a special locus is of one of the following forms: (i) a nonsingular point of $X$ that is an $m$-fold branch point (i.e. analytically locally the map looks like $x \rightarrow x^m$, $m>1$), (ii) a node of $X$, where the two branches of the node are branch points of order $m_1$, $m_2$, or (iii) one-dimensional, of arithmetic genus $g$, attached to $s$ branches of the remainder of the curve that are $c_j$-fold branch points ($1 \leq j \leq s$). The form of the locus, along with the numerical data, will be called the [*type*]{}. (For convenience, we will consider a point [*not*]{} in a special locus to be of type (i) with $m=1$.) We will use the fact that special loci of type (ii) are smoothable ([@p1] Section 2.2).
\[broccoli\] To each special locus, associate a [*ramification number*]{} as follows: (i) $m-1$, (ii) $m_1 +m_2$, (iii) $2g-2+ 2s + \sum_{j=1}^s(c_j-1)$. (Warning: in case (i), this is one less than what is normally called the ramification index; we apologize for any possible confusion.) The [*total ramification*]{} above a point of ${\mathbb P}^1$ is the sum of the ramification numbers of the special loci mapping to that point. We will use the following two immediate facts: if the map is stable, then the ramification number of each “special locus” is a positive integer, and each special locus of type (iii) has ramification number at least 2.
\[pumpkin\] There is an easy generalization of the Riemann-Hurwitz formula: $$2 p_a(X) - 2 = -2d + \sum r_i$$ where $\sum r_i$ is the sum of the ramification numbers. (The proof is straightforward. For example, consider the complex $f^* \omega^1_{{\mathbb P}^1} \rightarrow \omega^1_X$ as in [@fp] Section 2.3, and observe that its degree can be decomposed into contributions from each special locus. Alternatively, it follows from the usual Riemann-Hurwitz formula and induction on the number of nodes.)
\[yam\] Ramification number is preserved under deformations. Specifically, consider a pointed one-parameter family of maps (of nodal curves). Suppose one map in the family has a special locus $S$ with ramification number $r$. Then the sum of the ramification numbers of the special loci in a general map that specialize to $S$ is also $r$. (This can be shown by either considering the complex $f^* \omega^1_{{\mathbb P}^1} \rightarrow
\omega^1_X$ in the family or by deformation theory.)
Next, suppose $$\begin{array} {cc}
C & \rightarrow {\mathbb P}^1 \\
\downarrow & \\
B &
\end{array}$$ is a family of [*stable*]{} maps parametrized by a nonsingular curve $B$.
\[radish\] [*Suppose there is a point $\infty$ of ${\mathbb P}^1$ where the total ramification number of special loci mapping to $\infty$ is a constant $k$ for all closed points of $B$. Then the type of ramification above $\infty$ is constant, i.e. the number of preimages of $\infty$ and their types are constant.*]{}
For example, if the general fiber is nonsingular, i.e. only has special loci of type (i), then that is true for all fibers.
[*Proof.*]{} Let $0$ be any point of $B$, and let $f: X \rightarrow
{\mathbb P}^1$ be the map corresponding to $0$. We will show that the type of ramification above $\infty$ for $f$ is the same as for the general point of $B$.
First reduce to the case where the general map has no contracted components. (If the general map has a contracted component $E$, then consider the complement of the closure of $E$ in the total general family. Prove the result there, and then show that the statement of Lemma \[radish\] behaves well with respect to gluing a contracted component.)
Similarly, next reduce to the case where the general map is nonsingular. (First show the result where the nodes that are in the closure of the nodes in the generic curve are normalized, and then show that the statement behaves well with respect to gluing a 2-section of the family to form a node.)
Pull back to an [étale ]{}neighborhood of 0 to separate the special loci of the general fiber (i.e. so they are preserved under monodromy), and also the fibers over $\infty$ for the general map.
For convenience of notation, restrict attention to one special locus $E$ of $f$. Assume first that $E$ is of type (iii), so $\dim E = 1$. Let $g_E$ be the arithmetic genus of $E$. Suppose that $r$ preimages of $\infty$ of the general fiber (of type (i) by reductions) meet $E$ in the limit, and that these have ramification numbers $b_1$, …, $b_r$. Let $s$ be the number of other branches of $X$ meeting $E$, and $c_1$, …, $c_s$ the ramification numbers of the branches (as in Section \[chickpea\]).
The ramification number of $E$ is $(2 g_E-2) + 2s + \sum_{j=1}^s (c_j-1)$. The total ramification number of the special loci specializing to $E$ is $\sum_{i=1}^r (b_i-1)$. Also, $$\sum_{i=1}^r b_i = \sum_{j=1}^s c_j.$$ Hence by conservation of ramification number, $$(2 g_E-2+s) + r = 0.$$ But $r>0$, and by the stability condition for $f$, $2g_E-2+s>0$, so we have a contradiction.
If $\dim E = 0$ (i.e. $E$ is of type (i) or (ii)), then essentially the same algebra works (with the substitution “$g_E=0$”, yielding $r+s-2=0$, hence $r=s=1$, so the type is constant).
A similar argument shows:
\[cucumber\] [*Suppose $E$ is a special locus in a specific fiber, and only one special locus $E'$ in the general fiber meets it. Then the types of $E$ and $E'$ are the same.*]{}
For any map $f$ from a nodal curve to a nonsingular curve, the ramification number defines a divisor on the target: $\sum_L r_L f(L)$, where $L$ runs through the special loci, and $r_L$ is the ramification number. This induces a set-theoretic map ${\operatorname{Br}}: {{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d)
\rightarrow {\operatorname{Sym}}^b {\mathbb P}^1 \cong {\mathbb P}^b$. In [@fp], this was shown to be a morphism.
Let $p$ be the point of ${\operatorname{Sym}}^b {\mathbb P}^1 \cong {\mathbb P}^b$ corresponding to $k(\infty) + (b-k)(0)$, let $L_\infty \subset
{\mathbb P}^b$ be the linear space corresponding to points of the form $k (\infty)+D$ (where $D$ is a divisor of degree $r=b-k$), and let $\iota: L_\infty \rightarrow {\mathbb P}^b$ be the inclusion.
Define $M$ as the stack-theoretic pullback ${\operatorname{Br}}^{-1}L_\infty$. It carries a virtual fundamental class $[M]^{{\operatorname{vir}}} = \iota^! [ {{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d) ]^{{\operatorname{vir}}}$ of dimension $r=b-k$ (i.e. simply intersect the class $[{{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d)]^{{\operatorname{vir}}}$ with the codimension $k$ operational Chow class ${\operatorname{Br}}^*[L_\infty]$; the result is supported on ${\operatorname{Br}}^{-1}L_\infty$). Denote the [*restricted branch map*]{} by ${\operatorname{br}}: M \rightarrow L_\infty$. By abuse of notation, we denote the top horizontal arrow in the following diagram by $\iota$ as well. $$\begin{array}{ccc}
M & \rightarrow & {{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d) \\
{\operatorname{br}}\downarrow & & \downarrow {\operatorname{Br}}\\
L_\infty & \stackrel \iota {\rightarrow} & {\mathbb P}^b
\end{array}$$ By the projection formula, $$\label{leek}
\iota_* ( {\operatorname{br}}^*[p] \cap [M]^{{\operatorname{vir}}}) = {\operatorname{Br}}^*[p] \cap [ {{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d) ]^{{\operatorname{vir}}}.$$
Define $M^{\alpha}$ as the union of irreducible components of $M$ whose general members correspond to maps from irreducible curves, with ramification above $\infty$ corresponding to ${\alpha}$ with the reduced substack structure. (It is not hard to show that $M^{\alpha}$ is irreducible, by the same group-theoretic methods as the classical proof that the Hurwitz scheme is irreducible. None of our arguments use this fact, so we will not give the details of the proof. Still, for convenience, we will assume irreducibility in our language.)
\[kidneybean\] Note that $M={\operatorname{Br}}^{-1} L_\infty$ contains $M^{\alpha}$ with some multiplicity $m_{\alpha}$, as $M^{\alpha}$ is of the expected dimension $r$. The Hurwitz number $H^g_{\alpha}$ is given by $$\int_{M^{\alpha}} {\operatorname{br}}^* [p].$$ (The proof of [@fp] Proposition 2 carries over without change in this case, as does the argument of [@g1] Section 3.) This is $1/ m_{\alpha}$ times the cap product of ${\operatorname{br}}^* [p]$ with the part of the class of $[M]^{{\operatorname{vir}}}$ supported on $M^{\alpha}$.
\[lettuce\] [*$m_{\alpha}= k! \prod \left( \frac {{\alpha}_i^{{\alpha}_i-1}} {{\alpha}_i!} \right).$*]{}
\[pea\] In the proof, we will use the combinatorial interpretation of Hurwitz numbers: $H^g_{\alpha}$ is $1/d!$ times the number of ordered $r$-tuples $(\tau_1, \dots, \tau_r)$ of transpositions generating $S_d$, whose product has cycle structure ${\alpha}$.
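This count is machine-checkable for small $d$. The following illustrative Python sketch (not part of the argument; all function names are ours) enumerates $r$-tuples of transpositions directly, with $r = 2d+2g-2-\sum_i({\alpha}_i-1)$ the number of simple branch points:

```python
from itertools import product
from fractions import Fraction
from math import factorial

def compose(p, q):
    """Composition (p after q) of permutations given as tuples on {0,...,d-1}."""
    return tuple(p[i] for i in q)

def transpositions(d):
    """All transpositions in S_d."""
    out = []
    for a in range(d):
        for b in range(a + 1, d):
            t = list(range(d))
            t[a], t[b] = t[b], t[a]
            out.append(tuple(t))
    return out

def cycle_type(p):
    """Cycle structure of a permutation, as a decreasing tuple."""
    seen, cycles = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                c += 1
            cycles.append(c)
    return tuple(sorted(cycles, reverse=True))

def subgroup_order(gens, d):
    """Order of the subgroup of S_d generated by gens (closure by BFS)."""
    ident = tuple(range(d))
    group, frontier = {ident}, [ident]
    while frontier:
        new = []
        for g in frontier:
            for t in gens:
                h = compose(t, g)
                if h not in group:
                    group.add(h)
                    new.append(h)
        frontier = new
    return len(group)

def hurwitz(g, alpha):
    """H^g_alpha by direct enumeration: 1/d! times the number of r-tuples of
    transpositions generating S_d whose product has cycle structure alpha."""
    d = sum(alpha)
    r = 2 * d + 2 * g - 2 - sum(a - 1 for a in alpha)  # simple branch points
    target = tuple(sorted(alpha, reverse=True))
    count = 0
    for taus in product(transpositions(d), repeat=r):
        prod = tuple(range(d))
        for t in taus:
            prod = compose(t, prod)
        if cycle_type(prod) == target and subgroup_order(taus, d) == factorial(d):
            count += 1
    return Fraction(count, factorial(d))
```

For example, it reproduces the classical values $H^0_{(3)}=1$, $H^0_{(2,1)}=4$ and $H^1_{(2)}=1/2$; the enumeration is of course only feasible for very small $d$ and $r$.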
[*Proof.*]{} Fix $r$ general points $p_1$, …, $p_r$ of ${\mathbb P}^1$. Let $L \subset {\mathbb P}^b$ be the linear space corresponding to divisors of the form $p_1 + \dots + p_r + D$ (where $\deg D = k$). By the Kleiman-Bertini theorem, $({\operatorname{Br}}|_{M^{\alpha}})^{-1} L$ consists of $H^g_{\alpha}$ reduced points.
Now $L_\infty \subset {\operatorname{Sym}}^b {\mathbb P}^1 \cong {\mathbb P}^b$ can be interpreted as a ([*real*]{} one-parameter) degeneration of the linear space corresponding to divisors of the form $D'+ \sum_{i=1}^k q_i$, where $q_1$, …, $q_k$ are fixed generally chosen points of ${\mathbb P}^1$ and $D'$ is any degree $r$ divisor on ${\mathbb P}^1$.
Choose branch cuts to the points $p_1$, …, $p_r$, $q_1$, …, $q_k$, $\infty$ from some other point of ${\mathbb P}^1$. Choose a real one-parameter path connecting $q_1$, …, $q_k$, $\infty$ (in that order), not meeting the branch cuts (see the dashed line in Figure \[degenfig\]). Degenerate the points $q_i$ to $\infty$ along this path one at a time (so the family parametrizing this degeneration is reducible). If ${\sigma}_1$, …, ${\sigma}_k$, ${\sigma}_\infty$ are the monodromies around the points $q_1$, …, $q_k$, $\infty$ for a certain cover, then the monodromy around $\infty$ after the branch points $q_i$, …, $q_k$ have been degenerated to $\infty$ (along the path) is ${\sigma}_i \dots {\sigma}_k {\sigma}_\infty$.
At a general point of the family parametrizing this real degeneration (before any of the points $q_i$ have specialized, i.e. the $q_i$ are fixed general points), ${\operatorname{Br}}^{-1} (L \cap L_\infty)$ is a finite number of reduced points. This number is the Hurwitz number $H^g_{(1^d)}$ ([@fp] Prop. 2), i.e. $1/d!$ times the number of choices of $b=r+k$ transpositions $\tau_1$, …, $\tau_r$, ${\sigma}_1$, …, ${\sigma}_k$ in $S_d$ such that $\tau_1 \dots \tau_r {\sigma}_1 \dots
{\sigma}_k$ is the identity and $\tau_1$, …, $\tau_r$, ${\sigma}_1$, …, ${\sigma}_k$ generate $S_d$.
[ degenfig.tex ]{}
As we specialize the $k$ branch points $q_1$, …, $q_k$ to $\infty$ one at a time, some of these points tend to points of $M^{\alpha}$; these are the points for which $\tau_1$, …, $\tau_r$ generate $S_d$, and their product has cycle structure ${\alpha}$. The multiplicity $m_{\alpha}$ is the number of these points that go to each point of $M^{\alpha}$. This is the number of choices of $k$ transpositions ${\sigma}_1$, …, ${\sigma}_k$ whose product is a [*given*]{} permutation $\xi$ with cycle structure ${\alpha}$. (Note that this number is independent of the choice of $\xi$; hence the multiplicity is independent of choice of component of $M^{\alpha}$.)
If $k=\sum({\alpha}_i-1)$ transpositions ${\sigma}_1$, …, ${\sigma}_k$ multiply to a permutation $\xi=(a_{1,1} \dots a_{1,{\alpha}_1}) \dots (a_{m,1} \dots
a_{m, {\alpha}_m})$ (where $\{ a_{1,1}, \dots, a_{m,{\alpha}_m} \} = \{ 1,
\dots, d \}$), then for $1 \leq i \leq m$, ${\alpha}_i-1$ of the transpositions must be of the form $(a_{i,j} a_{i,k})$. (Reason: A choice of $k+1$ points $q_1$, …, $q_k$, $\infty$, of ${\mathbb P}^1$ and the data ${\sigma}_1$, …, ${\sigma}_k$, $\xi$ defines a degree $d$ branched cover of ${\mathbb P}^1$, simply branched above $q_j$ and with ramification type ${\alpha}$ above $\infty$. By the Riemann-Hurwitz formula, the arithmetic genus of this cover is $1-m$; as the pre-image of $\infty$ contains $m$ smooth points, the cover has at most $m$ components. Hence the cover has precisely $m$ components, each of genus 0. The $i$th component is simply branched at ${\alpha}_i-1$ of the points $\{ q_1, \dots, q_k \}$ away from $\infty$.)
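Spelling out the Riemann-Hurwitz computation behind this genus count (using $k = \sum_i ({\alpha}_i - 1) = d - m$):

```latex
2p_a - 2 \;=\; d\,(2\cdot 0 - 2) + k + \sum_{i=1}^m ({\alpha}_i - 1)
\;=\; -2d + 2(d-m) \;=\; -2m,
```

so $p_a = 1-m$, as claimed.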
The number of ways of factoring an ${\alpha}_i$-cycle into ${\alpha}_i-1$ transpositions is ${\alpha}_i^{{\alpha}_i-2}$ (straightforward; or see [@d] or [@gj0] Theorem 1.1). Hence $m_{\alpha}$ is the number of ways of partitioning the $k$ points $q_1$, …, $q_k$ into subsets of size ${\alpha}_1-1$, …, ${\alpha}_m-1$, times the number of ways of factoring the ${\alpha}_i$-cycles: $$m_{\alpha}= \binom {k} {{\alpha}_1 -1, \dots, {\alpha}_m-1} \prod {\alpha}_i^{{\alpha}_i-2} =
k! \prod \left( \frac {{\alpha}_i^{{\alpha}_i-1}} {{\alpha}_i!} \right).$$
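The multiplicity in Lemma \[lettuce\] can likewise be checked by brute force for small partitions. In the following illustrative Python sketch (our own naming, not part of the proof), `multiplicity_bruteforce` counts minimal ordered factorizations of a fixed permutation of cycle structure ${\alpha}$ into transpositions, and `multiplicity_formula` evaluates $k! \prod {\alpha}_i^{{\alpha}_i-1}/{\alpha}_i!$:

```python
from itertools import product
from fractions import Fraction
from math import factorial

def compose(p, q):
    """Composition (p after q) of permutations given as tuples on {0,...,d-1}."""
    return tuple(p[i] for i in q)

def transpositions(d):
    """All transpositions in S_d."""
    out = []
    for a in range(d):
        for b in range(a + 1, d):
            t = list(range(d))
            t[a], t[b] = t[b], t[a]
            out.append(tuple(t))
    return out

def perm_from_cycles(cycles, d):
    """Permutation of {0,...,d-1} with the given disjoint cycles."""
    p = list(range(d))
    for cyc in cycles:
        for i in range(len(cyc)):
            p[cyc[i]] = cyc[(i + 1) % len(cyc)]
    return tuple(p)

def multiplicity_bruteforce(alpha):
    """Number of ordered k-tuples of transpositions whose product is a fixed
    permutation xi of cycle structure alpha, with k = sum(alpha_i - 1)."""
    d, k = sum(alpha), sum(a - 1 for a in alpha)
    # one concrete xi of type alpha, on consecutive blocks of letters
    cycles, start = [], 0
    for a in alpha:
        cycles.append(list(range(start, start + a)))
        start += a
    xi = perm_from_cycles(cycles, d)
    count = 0
    for sigmas in product(transpositions(d), repeat=k):
        prod = tuple(range(d))
        for s in sigmas:
            prod = compose(s, prod)
        if prod == xi:
            count += 1
    return count

def multiplicity_formula(alpha):
    """k! * prod(alpha_i^(alpha_i - 1) / alpha_i!) from Lemma [lettuce]."""
    k = sum(a - 1 for a in alpha)
    m = Fraction(factorial(k))
    for a in alpha:
        m *= Fraction(a ** (a - 1), factorial(a))
    return int(m)
```

For instance, both give $3$ for ${\alpha}=(3)$, $2$ for ${\alpha}=(2,2)$, and $9$ for ${\alpha}=(3,2)$, in agreement with the partition-and-factorization count above.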
Virtual localization
====================
We evaluate the integral using virtual localization ([@gp]). The standard action of ${\mathbb{C}}^*$ on ${\mathbb P}^1$ (so that the action on the tangent space at $\infty$ has weight 1) induces a natural ${\mathbb{C}}^*$-action on ${{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d)$, and the branch morphism ${\operatorname{Br}}$ is equivariant with respect to the induced torus action on ${\operatorname{Sym}}^b {\mathbb P}^1
\cong {\mathbb P}^b$. As a result, we can regard ${\operatorname{br}}^*[p]$ as an equivariant Chow cohomology class in $A^r_{{\mathbb{C}}^*} M$. Let $\{ F_l
\}_{l \in L}$ be the set of components of the fixed locus of the torus action on ${{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d)$, where $L$ is some index set. (Note that the connected components of the fixed locus are also irreducible.)
Define $F_0$ to be the component of the fixed locus whose general point parametrizes a stable map with a single genus $g$ component contracted over 0, and $m$ rational tails mapping with degree ${\alpha}_i$ ($1 \leq i \leq m$) to ${\mathbb P}^1$, totally ramified above 0 and $\infty$. $F_{0}$ is naturally isomorphic to a quotient of ${{\overline{{{\mathcal{M}}}}}}_{g,m}$ by a finite group. See [@k] or [@gp] for a discussion of the structure of the fixed locus of the ${\mathbb{C}}^{*}$ action on ${{\overline{{{\mathcal{M}}}}}}_{g}({\mathbb P}^{1},d).$
By the virtual localization formula, we can explicitly write down classes $\mu_l \in A_*^{{\mathbb{C}}^*}(F_l)_{(1/t)}$ such that $$\sum_l i_*(\mu_l) = [M]^{{\operatorname{vir}}}$$ in $A_*^{{\mathbb{C}}^*}(M)$. Here, and elsewhere, $i$ is the natural inclusion. It is important to note that the $\mu_l$ are uniquely determined by this equation. This follows from the Localization Theorem 1 of [@eg] (extended to Deligne-Mumford stacks by [@kresch]), which says that pushforward gives an isomorphism between the localized Chow group of the fixed locus and that of the whole space.
In order to pick out the contribution to this integral from a single component $F_0$, we introduce more refined classes. We denote the irreducible components of $M$ by $M_n$, and arbitrarily choose a representation $$[M]^{{\operatorname{vir}}} = \sum_n i_* {\Gamma}_n$$ where ${\Gamma}_n \in A_*^{{\mathbb{C}}^*}( M_n)$. For a general component, we can say little about these classes, but for our distinguished irreducible component $M^{\alpha}$ the corresponding ${\Gamma}_{\alpha}$ is necessarily $m_{\alpha}[M^{\alpha}]$. (Note that $M^{\alpha}$ has the expected dimension, so the Chow group in that dimension is generated by the fundamental class).
Next, we localize each of the ${\Gamma}_n$. Define $\eta_{l,n}$ in $A_*^{{\mathbb{C}}^*}(F_l)_{(1/t)}$ by $$\label{rutabaga}
\sum_l i_* \eta_{l,n}= {\Gamma}_n$$ Once again (by [@eg], [@kresch]), the $\eta_{l,n}$ are uniquely defined; this will be used in Lemma \[potato\]. Also, $\sum_n \eta_{l,n} = \mu_l$ (as the $\mu_l$ are uniquely determined).
\[squash\] [*The equivariant class ${\operatorname{br}}^*[p]$ restricts to 0 on any component of the fixed locus whose general map has total ramification number greater than $k$ above $\infty$.*]{}
[*Proof.*]{} Restricting the branch morphism to such a component, we see that it gives a constant morphism to a point in ${\mathbb P}^{b}$ other than $p$. Consequently, the pull-back of the class $[p]$ must vanish.
\[melanzana\] [ *$\int_{{\Gamma}_n} {\operatorname{br}}^*[p] = 0$ for any irreducible component $M_n$ whose general point corresponds to a map which has a contracted component away from $\infty$.*]{}
[*Proof.*]{} A general cycle $\gamma \in L_\infty$ representing $[p]$ is the sum of $r$ distinct points plus the point $\infty$ exactly $k$ times. However, a contracted component always gives a multiple component of the branch divisor (Section \[broccoli\]), so the image of $M_{n}$ cannot contain a general point of $L_\infty$.
[*$\eta_{l,n} = 0$ if $F_l \cap M_n = \emptyset$.*]{}
\[potato\]
[*Proof.*]{} Since ${\Gamma}_n$ is an element of $A_*^{{\mathbb{C}}^*}(M_n)$, there exist classes $\tilde{\eta}_{l,n}$ in the localized equivariant Chow groups of the fixed loci of $M_n$ satisfying equation (\[rutabaga\]). Pushing these forward to the fixed loci of $M$ gives classes in the Chow groups of the $F_l$ satisfying the same equation. By uniqueness, these must be the $\eta_{l,n}$. By this construction, it follows that they can only be non-zero if $F_l$ meets $M_n$.
[*No irreducible component of $M$ can meet two distinct components of the fixed locus with total ramification number exactly $k$ above $\infty$.*]{}
\[cabbage\]
[*Proof.*]{} To each map $f: X \rightarrow {\mathbb P}^1$ with total ramification number exactly $k$ above $\infty$, associate a graph as follows. The connected components of the preimage of $\infty$ correspond to red vertices; they are labelled with their type. The connected components of $Y=
\overline{X \setminus f^{-1}( \infty )}$ (where the closure is taken in $X$) correspond to green vertices; they are labelled with their arithmetic genus. Points of $Y \cap f^{-1}( \infty)$ correspond to edges connecting the corresponding red and green points; they are labelled with the ramification number of $Y \rightarrow {\mathbb P}^1$ at that point. Observe that this associated graph is constant in connected families where the total ramification over $\infty$ is constant, essentially by Lemma \[radish\].
If an irreducible component $M'$ of $M$ meets a component of the fixed locus with total ramification number exactly $k$ above $\infty$, then the general map in $M'$ has total ramification $k$ above $\infty$. (Reason: the total ramification is at most $k$ as it specializes to a map with total ramification exactly $k$; and the total ramification is at least $k$ as it is a component of $M$.) There is only one component of the fixed locus that has the same associated graph as the general point in $M'$, proving the result.
\[arugula\] [ *The map parametrized by a general point of any irreducible component of $M$ other than $M^{\alpha}$ which meets $F_0$ must have a contracted component not mapping to $\infty$.*]{}
[*Proof.*]{} Let $M'$ be an irreducible component of $M$ other than $M^{\alpha}$. As in the proof of Lemma \[cabbage\], a general map $f: X \rightarrow {\mathbb P}^1$ of $M'$ has total ramification exactly $k$ above $\infty$. By Lemma \[radish\], we know the type of the special loci above $\infty$: they are nonsingular points of the source curve, and the ramification numbers are given by ${\alpha}_1, \dots, {\alpha}_m$.
As $M' \neq M^{\alpha}$, $X$ is singular. If $f$ has a special locus of type (iii), then we are done. Otherwise, $f$ has only special loci of type (ii), and none of these map to $\infty$. But then these type (ii) special loci can be smoothed while staying in $M$ (Section \[broccoli\]), contradicting the assumption that $f$ is a general map in a component of $M$.
\[onion\] $$m_{\alpha}\int_{M^{{\alpha}}} {\operatorname{br}}^*[p] = \int_{F_0} {\operatorname{br}}^* [p] \cap \mu_0.$$
It is the class $\mu_0$ that the Virtual Localization Theorem of [@gp] allows us to calculate explicitly. Thus this proposition is the main ingredient in giving us an explicit formula for the integral we want to compute.
[*Proof.*]{} Now ${\Gamma}_{\alpha}= m_{\alpha}[M^{\alpha}]$, so by definition of $\eta_{l,{\alpha}}$, $$m_{\alpha}[M^{\alpha}] = \sum_l i_* \eta_{l,{\alpha}}.$$
By Lemma \[melanzana\], $M^{\alpha}$ meets only one component of the fixed locus which has total ramification number $k$, $F_0$. Along with Lemmas \[squash\] and \[potato\], this implies that $$m_{\alpha}\int_{M^{\alpha}} {\operatorname{br}}^*[p] = \int_{F_0} {\operatorname{br}}^*[p] \cap \eta_{0,{\alpha}}.$$ In other words, the only component of the fixed locus which contributes to this integral is $F_0$. Since $\mu_0 = \sum_n
\eta_{0,n}$, the proposition will follow if we can show that $$\int_{F_0} {\operatorname{br}}^* [p] \cap \eta_{0,n} = 0$$ for $n \neq {\alpha}$, i.e. that no other irreducible component of $M$ contributes to the localization term coming from $F_0$.
If $F_0 \cap M_n = \emptyset$, this is true by Lemma \[potato\]. Otherwise, by Lemma \[arugula\], the general map in $M_n$ has a contracted component, so by Lemma \[melanzana\] $\int_{\Gamma_n} {\operatorname{br}}^*[p] = 0$. By equation (\[rutabaga\]), $$\sum_l \int_{F_l} {\operatorname{br}}^*[p] \cap \eta_{l,n} = 0.$$ If $F_l$ generically corresponds to maps that have total ramification number greater than $k$ above $\infty$, then ${\operatorname{br}}^*[p] \cap \eta_{l,n} = 0$ by Lemma \[squash\]. If $l \neq 0$ and $F_l$ generically corresponds to maps that have total ramification number $k$ above $\infty$, then $F_l \cap M_n = \emptyset$ by Lemma \[cabbage\] (as $M_n$ meets $F_0$), so ${\operatorname{br}}^*[p] \cap \eta_{l,n} = 0$ by Lemma \[potato\]. Hence $\int_{F_0} {\operatorname{br}}^*[p] \cap \eta_{0,n} = 0$ as desired.
\[proof\] All that is left is to explicitly write down the right hand side of Proposition \[onion\]. By equation (\[leek\]), this integral can be interpreted as the contribution of $F_0$ to the integral of ${\operatorname{Br}}^*[p]$ against the virtual fundamental class of ${{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d)$, divided by $m_{\alpha}$. Since this means we are trying to compute an equivariant integral over the entire space of maps to ${\mathbb P}^1$, we are in exactly the situation discussed in [@gp]. Let $\gamma$ be the natural morphism from ${{\overline{{{\mathcal{M}}}}}}_{g,m}$ to $F_0$. The degree of $\gamma$ is $\#
{\operatorname{Aut}}(\alpha)\prod \alpha_i$. The pullback under $\gamma$ of the inverse Euler class of the virtual normal bundle is computed to be $$c({\mathbb{E}}^\vee) \left( \prod \frac{1}{1-\alpha_i\psi_i} \cdot
\frac{(-1)^{\alpha_i}\alpha_i^{2\alpha_i}}{(\alpha_i!)^2} \right).$$ The class ${\operatorname{br}}^*[p]$ is easy to evaluate. Since ${\operatorname{br}}$ is constant when restricted to $F_0$, this class is pure weight, and is given by the product of the weights of the ${\mathbb{C}}^*$ action on $T_p {\mathbb P}^b$. These weights are the non-zero integers from $-(b-k)$ to $k$ inclusive. The integral over $F_0$ is just the integral over ${{\overline{{{\mathcal{M}}}}}}_{g,m}$ divided by the degree of $\gamma$. We conclude that $$m_{\alpha}\int_{[M^\alpha]} {\operatorname{br}}^*[p] =
\frac{k!(b-k)!}{\# {\operatorname{Aut}}(\alpha) \prod \alpha_i} \cdot \prod
\frac{\alpha_i^{2\alpha_i}}{(\alpha_i!)^2} \cdot \int_{{{\overline{{{\mathcal{M}}}}}}_{g,m}}
\frac{c({\mathbb{E}}^\vee)}{\prod (1-\alpha_i \psi_i)}.$$ Dividing by $m_\alpha$ (calculated in Lemma \[lettuce\]) yields the desired formula.
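For the reader's convenience, here is the final division spelled out: with $m_{\alpha}$ as in Lemma \[lettuce\] and $H^g_{\alpha} = \int_{M^{\alpha}} {\operatorname{br}}^*[p]$,

```latex
H^g_{\alpha} \;=\; \frac{1}{m_{\alpha}}\cdot
\frac{k!\,(b-k)!}{\# \operatorname{Aut}(\alpha) \prod \alpha_i}
\prod \frac{\alpha_i^{2\alpha_i}}{(\alpha_i!)^2}
\int_{\overline{\mathcal{M}}_{g,m}} \frac{c(\mathbb{E}^\vee)}{\prod (1-\alpha_i \psi_i)}
\;=\; \frac{(b-k)!}{\# \operatorname{Aut}(\alpha)}
\prod \frac{\alpha_i^{\alpha_i}}{\alpha_i!}
\int_{\overline{\mathcal{M}}_{g,m}} \frac{c(\mathbb{E}^\vee)}{\prod (1-\alpha_i \psi_i)},
```

since dividing $\prod \alpha_i^{2\alpha_i}/(\alpha_i!)^2$ by $\prod \alpha_i^{\alpha_i-1}/\alpha_i!$ and by $\prod \alpha_i$ leaves $\prod \alpha_i^{\alpha_i}/\alpha_i!$, and the factor $k!$ cancels.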
A case for an algebraic definition of a space of “relative stable maps” {#relative}
=======================================================================
A space of “relative stable maps” has been defined in the symplectic category (see [@lr] and [@ip]), but has not yet been properly defined in the algebraic category (with the exception of Gathmann’s work in genus 0, [@g]).
The proof of Theorem \[biggie\] would become quite short were such a space ${{\mathcal{M}}}$ to exist with expected properties, namely the following. Fix $d$, $g$, ${\alpha}$, $m$, $k$, $r$ as before (see Section \[avocado\]).
1. ${{\mathcal{M}}}$ is a proper Deligne-Mumford stack, which contains as an open substack $U$ the locally closed substack of ${{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d)$ corresponding to maps to ${\mathbb P}^1$ where the pre-image of $\infty$ consists of $m$ smooth points appearing with multiplicity ${\alpha}_1$, …, ${\alpha}_m$.
2. There is a Fantechi-Pandharipande branch map ${\operatorname{Br}}: {{\mathcal{M}}}\rightarrow
{\operatorname{Sym}}^b {\mathbb P}^1$. The image will be contained in $L_\infty$, so we may consider the induced map ${\operatorname{br}}$ to $L_\infty \cong {\operatorname{Sym}}^r {\mathbb P}^1$. Under this map, the set-theoretic fiber of $k(\infty) +r(0)$ is precisely $F_{0}$.
3. There is a ${\mathbb{C}}^{*}$-equivariant perfect obstruction theory on ${{\mathcal{M}}}$ which when restricted to $U$ is given (relatively over ${\mathfrak{M}}_g$) by $R\pi _{*}(f^{*}(T{\mathbb P}^{1} \otimes {{\mathcal{O}}}(- \infty)))$, where $\pi$ is the structure morphism from the universal curve to ${{\mathcal{M}}}$.
With these axioms, the proof would require only Section \[proof\].
All of these requirements are reasonable. However, as a warning, note that the proof of Proposition \[onion\] used special properties of the class ${\operatorname{br}}^* [p]$ (Lemmas \[squash\]–\[arugula\]).
One might expect this space to be a combination of Kontsevich’s space ${{\overline{{{\mathcal{M}}}}}}_g({\mathbb P}^1,d)$ and the space of twisted maps introduced by Abramovich and Vistoli (see [@av] Section 3).
[\[ELSV\]]{} , [*Complete moduli for families over semistable curves*]{}, preprint 1998, math.AG/9811059. , [*Topological classification of trigonometric polynomials and combinatorics of graphs with an equal number of vertices and edges*]{}, Functional Analysis and its Applications [**30**]{} no. 1 (1996), 1–17. , [*Large N phases of chiral $QCD_2$*]{}, Nuclear Phys. [**B 437**]{} (1995), 3–24. , [*The representation of a permutation as the product of a minimal number of transpositions and its connection with the theory of graphs*]{}, Publ. Math. Ins. Hungar. Acad. Sci. [**4**]{} (1959), 63–70. , [*Localization in equivariant intersection theory and the Bott residue formula*]{}, Amer. J. Math. [**120**]{} (1998), no. 3, 619–636. , [*On Hurwitz numbers and Hodge integrals*]{}, C. R. Acad. Sci. Paris, t. 328, Série I, p. 1171–1180, 1999. , [*Stable maps and branch divisors*]{}, preprint 1999, math.AG/9905104. , [*Absolute and relative Gromov-Witten invariants of very ample hypersurfaces*]{}, preprint 1999, math.AG/9908054. , [*Transitive factorisations into transpositions and holomorphic mappings on the sphere*]{}, Proc. A.M.S., [**125**]{} (1997), 51–60. , [ *The Gromov-Witten potential of a point, Hurwitz numbers, and Hodge integrals*]{}, preprint 1999, math.AG/9910004. , [*Localization of virtual classes*]{}, Invent. Math. [**135**]{} (1999), no. 2, 487–518. , [*Relative Gromov-Witten invariants*]{}, preprint 1999, math.SG/9907155. , [*Intersection theory on the moduli space of curves and the matrix Airy function*]{}, Comm. Math. Phys. [**147**]{} (1992), no. 1, 1–23. , [*Enumeration of rational curves via torus actions*]{}, in [*The moduli space of curves*]{}, R. Dijkgraaf, C. Faber, and G. van der Geer, eds., Birkhauser, 1995, 335-368. , [*Cycle groups for Artin stacks*]{}, Invent. Math. [**138**]{} (1999), no. 3, 495–536. , [*Symplectic surgery and Gromov-Witten invariants of Calabi-Yau 3-folds I*]{}, preprint 1998, math.AG/9803036. 
, [*The number of ramified covering of a Riemann surface by Riemann surface*]{}, preprint 1999, math.AG/9906053 v2. , personal communication. , [*Towards an enumerative geometry of the moduli space of curves*]{}, in [*Arithmetic and Geometry*]{} (M. Artin and J. Tate, eds.), Part II, Birkhäuser, 1983, 271–328. , [*The enumerative geometry of rational and elliptic curves in projective space*]{}, submitted for publication, available at http://www-math.mit.edu/\~vakil/preprints.html, rewritten version of much of math.AG/9709007. , [*Recursions for characteristic numbers of genus one plane curves*]{}, available at http://www-math.mit.edu/\~vakil/preprints.html, to appear in Arkiv för Matematik. , [*Recursions, formulas, and graph-theoretic interpretations of ramified coverings of the sphere by surfaces of genus 0 and 1*]{}, submitted for publication, math.CO/9812105.
[^1]: 1991 Mathematics Subject Classification: Primary 14H10, Secondary 14H30, 58D29
[^2]: The second author is partially supported by NSF Grant DMS–9970101
---
abstract: 'We demonstrate the experimental feasibility of incompressible fractional quantum Hall-like states in ultra-cold two-dimensional rapidly rotating dipolar Fermi gases. In particular, we argue that the state of the system at filling fraction $\nu =1/3$ is well-described by the Laughlin wave function and find a substantial energy gap in the quasiparticle excitation spectrum. Dipolar gases, therefore, appear as natural candidates for realizing these very interesting highly correlated states in future experiments.'
author:
- 'M.A. Baranov$^{1,2}$, Klaus Osterloh$^{1}$, and M. Lewenstein$^{1}$'
title: 'Fractional Quantum Hall States in Ultracold Rapidly Rotating Dipolar Fermi Gases.'
---
In recent years, cold atom systems with strongly pronounced interparticle correlations have become a subject of intensive study, both theoretically and experimentally. There are several ways to increase the role of interparticle interactions in gaseous trapped systems and to reach the strongly correlated regime. One possibility is to employ an optical lattice where the tunneling strength between sites is smaller than the Hubbard-like on-site interaction [@Jaksch:1998aa]. This approach has led to a spectacular experimental observation of the Mott-Hubbard transition in atomic lattice Bose gases [@Greiner:2002aa] and is nowadays a main tool to create strongly correlated systems. Another way to enhance the effects of interparticle interactions is to use a quasi-2D rotating harmonic trap [@Dalibard:2000aa; @Cornell:2001aa; @Dalibard:2004aa; @Cornell:2004aa]. When the rotational frequency approaches the trap frequency, i.e. in the limit of critical rotation, the single-particle energy spectrum becomes highly degenerate, and hence, the role of interparticle interactions becomes dominant. The Hamiltonian of the system in the rotating frame of reference is formally equivalent to that of charged particles moving in a constant perpendicular magnetic field. This opens a remarkable possibility to establish a link with the physics of the quantum Hall effect and to realize a large variety of strongly correlated states proposed in the context of the fractional quantum Hall effect (FQHE) [@Prange:1987aa], in a completely different experimental setup. Recently, the idea of composite bosons – bound states of vortices and bosonic atoms – has been successfully used to describe the ground state of a rotating Bose-Einstein condensate in a parabolic trap in the regime of large coherence length [@Wilkin:2000aa; @Wilkin:2001aa]. In Ref. 
[@Cirac:2001aa], a method of creating, manipulating and detecting anyonic quasi-particle excitations for fractional quantum Hall bosons at filling fraction $\nu =1/2$ in rotating Bose-Einstein condensates has been proposed. However, it was found that because of the short-range character of interparticle interactions, fractional quantum Hall states are only feasible for a small number of particles. This is due to the fact that Laughlin-like states do not play any specific role when the interaction is short-ranged. Indeed, the Jastrow prefactor in the corresponding wave functions, $\prod_{i<j}(z_{i}-z_{j})^{p}$, where $z_{j}=x_{j}+\mathrm{i}y_{j}$ is the coordinate of the $j$-th particle, and $p$ is an integer (even for bosons and odd for fermions), makes the effects of a short-range interaction irrelevant. As a consequence, excitations are gapless and the states themselves are compressible. This contrasts with the case of electrons, where the Coulomb interaction favors fractional quantum Hall phases by lifting the degeneracy of the ground state and providing a gap for single-particle excitations [@Prange:1987aa]. It should be noted that in some cases (when more than one Landau level is occupied in the composite particle description of the fractional quantum Hall effect [@Jain]) the situation can be improved [@Jolicoeur:2004aa] by using the recently observed Feshbach resonance in the $p$-wave channel [@Bohn]. This resonance, however, is accompanied by dramatic losses that make its experimental application questionable.
In this letter, we demonstrate that rotating quasi-2D gaseous systems with dipole-dipole interactions could provide all necessary ingredients for the observation of fractional quantum Hall-like states. In particular, the dipole-dipole interaction favors fractional quantum Hall phases by creating a substantial gap in the single-particle excitation spectrum and makes them incompressible. We demonstrate this for the case of a quasihole excitation in the most famous Laughlin state at filling $\nu =1/3$ in a homogeneous quasi-2D dipolar rotating Fermi gas with dipolar moments polarized perpendicular to the plane of motion. Furthermore, we discuss the possibility of providing the rotating reference frame with a quenched disorder that ensures the robust creation and observation of fractional quantum Hall states in experiments with trapped gases.
We consider a system of $N$ dipolar fermions rotating in an axially symmetric harmonic trapping potential with a strong confinement along the axis of rotation, the $z$-axis. With respect to the latter, the dipoles are assumed to be aligned. Various ways of experimental realizations of ultracold dipolar gases are discussed in the review [@nobel]. Assuming that the temperature $T$ and the chemical potential $\mu $ are much smaller than the axial confinement, $T,\,\mu \ll \omega _{z}$, the gas is effectively two-dimensional, and the Hamiltonian of the system in the rotating reference frame reads $${\mathcal{H}}\!\!=\!\!\sum_{j=1}^{N}\Big(\!\frac{p_{j}^{2}}{2m}+\frac{m}{2}\omega _{0}^{2}r_{j}^{2}-\omega L_{jz}\!\Big)+\sum_{j<k}^{N}\frac{d^{2}}{\left\vert \mathbf{r}_{j}-\mathbf{r}_{k}\right\vert ^{3}}\,.
\label{Hamiltonian}$$Here $\omega _{0}\ll \omega _{z}$ is the radial trap frequency, $\omega $ is the rotation frequency, $m$ is the mass of the particles, $d$ their dipolar moment, and $L_{jz}$ is the projection of the angular momentum with respect to the $z$-axis of the $j$-th particle located at $\mathbf{r}_{j}=x_{j}\mathbf{e}_{x}+y_{j}\mathbf{e}_{y}$. The above Hamiltonian can be conveniently rewritten in the form $${\mathcal{H}}\!\!=\!\!\!\sum_{j=1}^{N}\!\bigg[\,\underset{{\mathcal{H}}_{\mathrm{Landau}}}{\underbrace{\frac{1}{2m}\left( \mathbf{p}_{j}-m\omega _{0}\mathbf{e}_{z}\times \mathbf{r}_{j}\right) ^{2}\,}}\!\!+\underset{{\mathcal{H}}_{\Delta }}{\underbrace{(\omega _{0}-\omega )L_{jz}}}\bigg]\!+V_{\mathrm{d}}\,, \label{modifHamilton}$$where ${\mathcal{H}}_{\mathrm{Landau}}$ is formally equivalent to the [Landau]{} Hamiltonian of particles with mass $m$ and charge $e$ moving in a constant perpendicular magnetic field with the vector potential $\mathbf{A}=(cm\omega
_{0}/e)\mathbf{e}_{z}\times \mathbf{r}$, $V_{\mathrm{d}}$ is the dipole-dipole interaction (the last term in Eq. (\[Hamiltonian\])), and ${\mathcal{H}}_{\Delta }$ describes the shift of single-particle energy levels as a function of their angular momentum and the difference of the frequencies $\Delta \omega =\omega _{0}-\omega $.
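The equivalence of the single-particle parts of Eqs. (\[Hamiltonian\]) and (\[modifHamilton\]) is a two-line expansion of the square; as an illustration (not part of the text, function names are ours), the identity can also be checked numerically with classical phase-space variables:

```python
def h_lab(px, py, x, y, m, w0, w):
    """Single-particle part of Eq. (1): p^2/2m + (m/2) w0^2 r^2 - w L_z."""
    lz = x * py - y * px
    return (px**2 + py**2) / (2 * m) + 0.5 * m * w0**2 * (x**2 + y**2) - w * lz

def h_landau(px, py, x, y, m, w0, w):
    """Single-particle part of Eq. (2): Landau form plus (w0 - w) L_z.
    The 'vector potential' term is m*w0*(e_z x r) = m*w0*(-y, x)."""
    ax, ay = -m * w0 * y, m * w0 * x
    lz = x * py - y * px
    return ((px - ax)**2 + (py - ay)**2) / (2 * m) + (w0 - w) * lz
```

Expanding the square in `h_landau` produces the cross term $-\omega_0 L_z$, which combines with $(\omega_0-\omega)L_z$ to give $-\omega L_z$.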
In the limit of critical rotation $\omega \rightarrow \omega _{0}$, one has ${\mathcal{H}}_{\Delta }\!\!\ll \!\!\{{\mathcal{H}}_{\mathrm{Landau}},\,V_{\mathrm{d}}\}$, and the [Hamiltonian]{} describes the motion of dipolar particles in a constant perpendicular magnetic field with cyclotron frequency $\omega _{\mathrm{c}}=2\omega _{0}$ [@Wilkin:2000aa]. The spectrum of ${\mathcal{H}}_{\mathrm{Landau}}$ is well-known and consists of equidistantly spaced [Landau]{} levels with energies $\varepsilon
_{n}=\hbar \omega _{c}(n+1/2)$. Each of these levels is highly degenerate and contains $N_{\mathrm{LL}}=1/2\pi l_{0}^{2}$ states per unit area, where $l_{0}=\sqrt{\hbar /m\omega _{c}}$ is the magnetic length. For a given fermionic surface density $n$ one can introduce the filling factor $\nu
=2\pi l_{0}^{2}n$ that denotes the fraction of occupied Landau levels. Note that under the condition of critical rotation, the density of the trapped gas is uniform except at the boundary provided by an external confining potential.
For filling fractions $\nu \leq 1$, particles solely occupy the lowest Landau level and the corresponding many-body eigenfunction of ${\mathcal{H}}_{\mathrm{Landau}}$ takes the form $$\Psi (z_{j})=\mathcal{N}P[z_{1},\,\ldots ,\,z_{N}]\exp
(-\!\sum_{j=1}^{N}\left\vert z_{j}\right\vert ^{2}/4l_{0}^{2})\,,$$where $\mathcal{N}$ is the normalization factor and $P[\left\{ z_{j}\right\}
]$ is a totally antisymmetric polynomial in the coordinates $z_{j}=x_{j}+\mathrm{i}y_{j}$ of the particles. The corresponding eigenenergy is independent of the specific choice of $P[\left\{ z_{j}\right\} ]$ and equals $N\hbar \omega _{c}/2$, where $N$ is the total number of particles. This degeneracy is lifted once the dipole-dipole interaction $V_{\mathrm{d}}$ is taken into account. In the following, we limit ourselves to a system at filling $\nu
=1/3$, where interparticle interaction effects are most pronounced.
In this case, the trial wave functions for the ground and quasi-hole excited states can be taken in the form proposed by Laughlin [@Laughlin:1983aa] $$\begin{aligned}
\Psi _{\mathrm{L}}(\left\{ z_{j}\right\} ) &\!\!\!=\!\!\!&{\mathcal{N}}\prod_{k<l}^{N}(z_{k}-z_{l})^{3}\!\exp \!\bigg(-\sum_{i}^{N}\left\vert
z_{i}\right\vert ^{2}/4l_{0}^{2}\bigg)\,, \label{psilaugh} \\
\Psi _{\mathrm{qh}}(\left\{ z_{j}\right\} \!,\,\zeta _{0}) &\!\!\!=\!\!\!&{\mathcal{N}}_{0}\prod_{j=1}^{N}(z_{j}-\zeta _{0})\Psi _{\mathrm{L}}\,,
\label{psiqh}\end{aligned}$$where $\zeta _{0}$ is the position of the quasi-hole. The choice of these wave functions for the considered system with dipole-dipole interactions can be justified as follows. They are exact eigenfunctions for short-range $\delta $-like potentials and have proven to be very good trial wave functions for the Coulomb interaction problem. Indeed, as shown by Haldane (see the corresponding contribution in [@Prange:1987aa]), the Laughlin states in the fractional quantum Hall effect are essentially unique and rigid at the corresponding filling factors ($\nu =1/3$ in our case). They are favored by strong short-range pseudopotential components, which are particularly pronounced in the case of a dipolar potential.
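The trial states (\[psilaugh\]) and (\[psiqh\]) are easy to evaluate numerically for a handful of particles. The sketch below (unnormalized, with $l_{0}=1$, an illustrative choice) checks two defining properties: total antisymmetry under particle exchange, since the Jastrow factor carries the odd power $3$, and the vanishing of the quasi-hole state when a particle sits at $\zeta _{0}$:

```python
import numpy as np

def laughlin(zs, l0=1.0):
    """Unnormalized Laughlin state at nu = 1/3; z_j = x_j + i y_j."""
    zs = np.asarray(zs, dtype=complex)
    psi = np.exp(-np.sum(np.abs(zs)**2) / (4 * l0**2))
    for k in range(len(zs)):
        for l in range(k + 1, len(zs)):
            psi *= (zs[k] - zs[l])**3        # third-order zero at coincidences
    return psi

def quasihole(zs, zeta0, l0=1.0):
    """Unnormalized quasi-hole state with the hole located at zeta0."""
    zs = np.asarray(zs, dtype=complex)
    return np.prod(zs - zeta0) * laughlin(zs, l0)

zs = [0.3 + 0.1j, -0.5 + 0.2j, 0.7 - 0.4j]
# odd power 3 => fermionic antisymmetry under exchange of two particles
print(laughlin(zs), laughlin([zs[1], zs[0], zs[2]]))
```

Exchanging the first two coordinates flips the sign of the amplitude, and two coincident particles give an exact zero, as required for fermions at $\nu =1/3$.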
Another possible candidate for the ground state of our system could be a crystalline state similar to the 2D Wigner electron crystal in a magnetic field [@Wigner]. For a non-rotating dipolar Fermi gas in a 2D trap, this state has lower energy than the gaseous state for sufficiently high densities. An estimate of the stability region can be obtained from the Lindemann criterion: the ratio $\gamma $ of the mean square difference of displacements in neighbouring lattice sites to the square of the interparticle distance, $\gamma =\left\langle \left( \mathbf{u}_{i}-\mathbf{u}_{i-1}\right) ^{2}\right\rangle /a^{2}$, should be less than some critical value $\gamma _{c}$ (see, e.g., Ref. [@lindemann]). The results of Ref. [@Lozovik] indicate that $\gamma _{c}\approx 0.07$. For zero temperature, $\gamma $ can be estimated as $\gamma \sim \hbar
/ma^{2}\omega _{D}$, where $\omega _{D}$ is the characteristic (Debye) frequency of the lattice phonons, $\omega _{D}^{2}\sim 36d^{2}/ma^{5}$. As a result, the dipolar crystal in a non-rotating gas is stable if the interparticle distance $a=(\pi n)^{-1/2}$ satisfies the condition $a<a_{d}(6\gamma _{c})^{2}\ll a_{d}$, i.e., the gas is in the strongly correlated regime, $V_{d}\sim d^{2}/a^{3}\sim (a_{d}/a)(\hbar
^{2}/ma^{2})\gtrsim \varepsilon _{F}/(6\gamma _{c})^{2}\gg \varepsilon _{F}$.
A strong magnetic field with the cyclotron frequency larger than the Debye frequency, $\omega _{c}>\omega _{D}$, favors the crystalline state by modifying the vibrational spectrum of the crystal. In this case, $\gamma
\sim \hbar /ma^{2}\omega _{c}$ [@Wigner], and the corresponding critical value is $\gamma _{c}\approx 0.08$ [@Lozovik]. Therefore, the crystalline state is stable if $\gamma \sim \nu /2<\gamma _{c}$. This limits the filling factor $\nu $ to small values $\nu <1/6$. As a result, the ground state of the system at filling factor $\nu =1/3$ is indeed well-described by the Laughlin wave function (\[psilaugh\]).
In order to prove the incompressibility of the state $\Psi _{\mathrm{L}}$, we calculate the energy gap $\Delta \varepsilon _{\mathrm{qh}}$ for the quasihole excitation. This gap can be expressed in terms of the pair correlation functions of the ground state, $g_{0}(z_{1},\,z_{2})$, and of the quasihole state, $g_{\mathrm{qh}}(z_{1},\,z_{2})$, as
$$\Delta \varepsilon _{\mathrm{qh}}\!\!\!=\!\!\!\!\!\int \!\!\!\mathrm{d}^{2}z_{1}\mathrm{d}^{2}z_{2}V_{\mathrm{d}}(z_{1},\,z_{2})\left( g_{\mathrm{qh}}(z_{1},\,z_{2})\!-\!g_{0}(z_{1},\,z_{2})\right) \,. \label{gapexpression}$$
For the states (\[psilaugh\]) and (\[psiqh\]), the functions $g_{0}$ and $g_{\mathrm{qh}}$ have been calculated using the Monte Carlo method [@MonteCarlo]; they were approximated by Girvin [@Girvin:1984aa] in the thermodynamic limit as
[$$\begin{aligned}
g_{0}(z_1,\,z_2){&\!\!\!=\!\!\!&}\frac{\nu^2}{(2\pi)^2}\!\!\Big(1-{\rm
e}^{-\frac{{\left| z_1-z_2 \right|}^2}{2}}
-2\sum_{j}^{\rm \scriptscriptstyle odd}\frac{C_j}{4^jj!}{\left| z_1-z_2 \right|}^{2j}{\rm
e}^{-\frac{{\left| z_1-z_2 \right|}^2}{4}}\!\!\Big)\\
\label{gqh}
g_{\rm qh}(z_1,\,z_2){&\!\!\!=\!\!\!&}\frac{\nu^2}{(2\pi)^2}\bigg[\prod_{j=1}^2
\Big(1-{\rm
e}^{-\frac{{\left| z_j \right|}^2}{2}}\Big)
-{\rm e}^{-\frac{{\left| z_1 \right|}^2+{\left| z_2 \right|}^2}{2}}
\bigg(\Big|{\rm e}^{\frac{z_1z^\star_2}{2}}-1\Big|^2+
2\sum_{j}^{\rm \scriptscriptstyle odd}\frac{C_j}{4^jj!}\sum_{k=0}^\infty
\frac{{\left| F_{j,\,k}(z_1,\,z_2) \right|}^2}{4^kk!}
\bigg)\bigg]\,,\\
&&F_{j,\,k}(z_1,\,z_2)=\frac{z_1z_2}{2}
\sum_{r=0}^{j}\sum_{s=0}^{k}
{j \choose r}{k \choose s}
\frac{(-1)^rz_1^{r+s}z_2^{j+k-(r+s)}}{\:\:\sqrt{(r+s+1)(j+k+1-(r+s))}\:\:}\:. \end{aligned}$$]{}
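As a sanity check on the expression for $g_{0}$, it can be evaluated numerically with the truncation to the first two odd coefficients $C_{1}=1$, $C_{3}=-1/2$ (the truncation discussed just below). Distances are taken in units of $l_{0}$ and $\nu =1/3$ is assumed:

```python
import math
import numpy as np

NU = 1.0 / 3.0
C_ODD = {1: 1.0, 3: -0.5}   # truncated odd coefficients C_j

def g0(r, nu=NU):
    """Ground-state pair correlation g0 vs r = |z1 - z2| (in units of l0)."""
    s = sum(Cj / (4.0**j * math.factorial(j)) * r**(2 * j) * np.exp(-r**2 / 4.0)
            for j, Cj in C_ODD.items())
    return nu**2 / (2 * np.pi)**2 * (1.0 - np.exp(-r**2 / 2.0) - 2.0 * s)

# perfect correlation hole at r = 0; uncorrelated value nu^2/(2 pi)^2 far away
print(g0(0.0), g0(12.0))
```

The truncated $g_{0}$ vanishes at coincidence, as it must for fermions, and saturates at the uncorrelated value $\nu ^{2}/(2\pi )^{2}$ at large separation.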
For $\nu =1/3$, it was argued that an accuracy better than $2\%$ is already achieved when only the first two coefficients $C_{1}\!=\!1$ and $C_{3}\!=\!-1/2$ are taken into account. In Fig. 1 we plot the difference $g_{\mathrm{qh}}\!-\!g_{0}$ for the particular choice $z_{1}=3
$ and $\zeta _{0}=0$. After substituting these expressions into Eq. (\[gapexpression\]) and integrating numerically, we obtain $$\Delta \varepsilon _{\mathrm{qh}}\!\!\!=\!\!\!(0.9271\pm
0.019)\,d^{2}/l_{0}^{3} \label{gapnumber}$$for the energy gap in the spectrum of quasiholes. Naturally, a gap of the same order of magnitude is to be expected in the spectrum of quasiparticles (quasielectrons, in the language of the fractional quantum Hall effect), although calculations in this case are much more difficult because, to the best of our knowledge, no closed or even approximate expression for the corresponding pair correlation function exists. The gap (\[gapnumber\]) can also be written in the form $$\Delta \varepsilon _{\mathrm{qh}}\!\!\!=\!\!\!(0.9271\pm 0.019)\hbar \omega
_{c}(a_{d}/l_{0}),$$where $a_{d}=md^{2}/\hbar ^{2}$ can be considered as a characteristic size of the dipole interaction. For a dipole moment of the order of $0.5\,\mathrm{Debye}$ and a mass $m\sim 30$ atomic mass units, the value of $a_{d}$ is of the order of $10^{3}\,\mathring{A}$, and for the trap frequency $\omega
_{0}\sim 2\pi \times 10^{3}\,\mathrm{Hz}$, one obtains the gap $\Delta \varepsilon _{\mathrm{qh}}\sim 30\,\mathrm{nK}$ and the ratio $\Delta \varepsilon _{\mathrm{qh}}/\hbar \omega _{c}<1$. This result shows (see Fig. 2) that, on the one hand, the interparticle interaction does not mix different Landau levels, and thus the lowest-Landau-level approximation used in the construction of the Laughlin wave function (\[psilaugh\]) is reliable. On the other hand, it guarantees that the neglected term ${\mathcal{H}}_{\Delta
}$ in the Hamiltonian (\[modifHamilton\]), which is inevitably present in some experimental realizations (see below), is indeed small and does not influence the trial wave functions.
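The unit conversions behind this estimate can be checked directly. The sketch below works in SI units, so the Gaussian-units dipolar length $a_{d}=md^{2}/\hbar ^{2}$ becomes $a_{d}=md^{2}/(4\pi \epsilon _{0}\hbar ^{2})$, and uses the quoted values $d=0.5$ Debye, $m=30$ amu, $\omega _{0}=2\pi \times 10^{3}$ Hz:

```python
import numpy as np

hbar = 1.054571817e-34; kB = 1.380649e-23
eps0 = 8.8541878128e-12; amu = 1.66053906660e-27
debye = 3.33564e-30                              # C m

m, d = 30 * amu, 0.5 * debye
omega0 = 2 * np.pi * 1.0e3
omega_c = 2 * omega0

a_d = m * d**2 / (4 * np.pi * eps0 * hbar**2)    # dipolar length, ~10^3 Angstrom
l0 = np.sqrt(hbar / (m * omega_c))               # magnetic length
gap = 0.9271 * hbar * omega_c * (a_d / l0)       # quasihole gap from Eq. above
gap_nK = gap / kB * 1e9                          # gap in temperature units (nK)
ratio = gap / (hbar * omega_c)                   # < 1: no Landau-level mixing
print(a_d * 1e10, gap_nK, ratio)
```

This crude evaluation gives $a_{d}\approx 1.1\times 10^{3}\,\mathring{A}$ and a gap of a few tens of nK with $\Delta \varepsilon _{\mathrm{qh}}/\hbar \omega _{c}\approx 0.25$, consistent with the estimates quoted in the text.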
![The difference $g_{\mathrm{qh}}-g_{0}$ as a function of $z\equiv
z_{2}-z_{1}$ for $\protect\zeta _{0}=0$ and $z_{1}=3$. Both particles strongly avoid each other, and rotational invariance is broken by the quasi-hole.](fig1.eps){width="45.00000%"}
![Single-particle energy levels of the Hamiltonian ${\mathcal{H}}={\mathcal{H}}_{\mathrm{Landau}}+{\mathcal{H}}_{\Delta }$ versus their angular momentum $L$.](fig2.eps){width="40.00000%"}
Let us now discuss possible ways of experimental realization and detection of the above states. At present, there exist two experimental methods to create rapidly rotating gas clouds. In the JILA experiments [@Cornell:2004aa; @Cornell:2001aa], a rotating condensate in the harmonic trap was created by evaporation of one of the spin components, and rotation rates $\omega $ above $99\%$ of the centrifugal limit were achieved. In this case, the term ${\mathcal{H}}_{\Delta }$ is inevitably present in the Hamiltonian and limits the total number of particles $N$. Namely, the condition ${\mathcal{H}}_{\Delta }<\Delta \varepsilon _{\mathrm{qh}}\lesssim \hbar
\omega _{c}$ and the fact that single-particle states with angular momenta up to $L_{z}=3N(N-1)/2$ contribute to the states (\[psilaugh\]) and (\[psiqh\]) impose the constraint $3N(N-1)/2<\Delta \varepsilon _{\mathrm{qh}}/\hbar \Delta \omega $. For $\Delta \omega /\omega _{0}=10^{-3}$, this gives $N<30$. Fortunately, this bound is large enough to expect the validity of our calculations, which were performed for a homogeneous gas in the thermodynamic limit.
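The particle-number bound can be reproduced numerically. The sketch below uses the same illustrative numbers as in the text ($\omega _{0}=2\pi \times 10^{3}$ Hz, $\Delta \omega /\omega _{0}=10^{-3}$); since the text only requires $\Delta \varepsilon _{\mathrm{qh}}\lesssim \hbar \omega _{c}$, we insert the upper estimate $\Delta \varepsilon _{\mathrm{qh}}=\hbar \omega _{c}$, which indeed yields a bound of a few tens of particles (a smaller gap estimate would shrink it somewhat, without changing the order of magnitude):

```python
import numpy as np

hbar = 1.054571817e-34
omega0 = 2 * np.pi * 1.0e3
omega_c = 2 * omega0
dw = 1e-3 * omega0                        # Delta omega = omega0 - omega

def n_max(gap):
    """Largest N with (3/2) N (N-1) * hbar * dw < gap."""
    budget = gap / (hbar * dw)
    N = 1
    while 3 * (N + 1) * N / 2 < budget:
        N += 1
    return N

print(n_max(hbar * omega_c))              # upper estimate of the gap
```

With $\Delta \varepsilon _{\mathrm{qh}}=\hbar \omega _{c}$ the bound comes out near $N\approx 30$, matching the value quoted above.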
In the experiments of the ENS group [@Dalibard:2004aa; @Dalibard:2000aa], the bosonic gas sample was brought into rotation by stirring it with an additional laser. In addition to the harmonic potential of the optical trap, there is an extra (quartic) confining potential that allows to reach and even exceed the critical value $\omega _{c}$. In the case of critical rotation, the term ${\mathcal{H}}_{\Delta }$ can be neglected and the number of particles is only limited by the radial size of the gas cloud. We would like to point out that in experiments of this type, it is possible to impose a quenched disorder potential in the rotating frame, generated by speckle radiation from a rotating diffractive mask [@speckle]. The rotation of the mask should be synchronized with the stirring laser. This quenched disorder potential localizes single-particle excitations that appear in the system when the filling factor $\nu $ deviates from the value $1/3$. Therefore, it provides fractional quantum Hall states with the robustness necessary for experimental observation.
Let us finally discuss possible ways of experimental detection of the fractional quantum Hall states. One of them could be the measurement of the statistics of quasiholes using the Ramsey-type interferometric method proposed in Ref. [@Cirac:2001aa]. Another possibility would be to study the properties of the surface (edge) modes, which are analogous to the chiral edge states of electrons in the quantum Hall effect. The corresponding analysis for a rotating bosonic cloud was recently performed in Ref. [@Cazalilla]. Finally, one could consider the detection of collective modes that are similar to the magnetoroton and magnetoplasmon collective modes in electronic quantum Hall systems (see, e.g., Girvin’s contribution to Ref. [@Prange:1987aa]).
We are indebted to W. Apel for valuable advice and help, and we thank M. Leduc, C. Salomon, L. Sanchez-Palencia, and G.V. Shlyapnikov for helpful discussions. We acknowledge support from the Deutsche Forschungsgemeinschaft SPP1116 and SFB 407, the RTN Cold Quantum Gases, ESF PESC BEC2000+, the Russian Foundation for Basic Research, and the Alexander von Humboldt Foundation.
[99]{} D. Jaksch, C. Bruder, J.I. Cirac, C.W. Gardiner, and P. Zoller, Phys. Rev. Lett. **81**, 3108 (1998).
M. Greiner, O. Mandel, T.W. Hänsch, and I. Bloch, Nature **415**, 39 (2002).
V. Bretin, S. Stock, Y. Seurin, and J. Dalibard, Phys. Rev. Lett. **92**, 050403 (2004).
K.W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. **84**, 806 (2000).
P.C. Haljan, I. Coddington, P. Engels, and E.A. Cornell, Phys. Rev. Lett. **87**, 210403 (2001).
V. Schweikhard, I. Coddington, P. Engels, V.P. Mogendorff, and E.A. Cornell, Phys. Rev. Lett. **92**, 040404 (2004).
R.E. Prange and S.M. Girvin (editors), *The Quantum Hall Effect*, New York: Springer Verlag (1987).
N.K. Wilkin and J.M.F. Gunn, Phys. Rev. Lett. **84**, 6 (2000).
N.R. Cooper, N.K. Wilkin, and J.M.F. Gunn, Phys. Rev. Lett. **87**, 120405 (2001).
B. Paredes, P. Fedichev, J.I. Cirac, and P. Zoller, Phys. Rev. Lett. **87**, 010402 (2001).
J.K. Jain, Phys. Rev. Lett. **63**, 199 (1989).
T. Jolicoeur and N. Regnault, e-print cond-mat/0404093.
C.A. Regal, C. Ticknor, J.L. Bohn, and D.S. Jin, Phys. Rev. Lett. **90**, 053201 (2003).
M. Baranov, L. Dobrek, K. Goral, L. Santos, and M. Lewenstein, Physica Scripta T **102**, 74 (2002).
R.B. Laughlin, Phys. Rev. Lett. **50**, 1395 (1983).
H. Fukuyama, Solid State Commun. **19**, 551 (1976); M. Jonson and G. Srinivasan, Solid State Commun. **24**, 61 (1977).
J.M. Ziman, *Principles of the Theory of Solids*, Cambridge University Press, 1972.
Yu.E. Lozovik and V.M. Farztdinov, Solid State Commun. **54**, 725 (1985).
S.M. Girvin and T. Jach, Phys. Rev. B **29**, 5617 (1984).
S.M. Girvin, Phys. Rev. B **30**, 558 (1984).
P. Horak, J.-Y. Courtois, and G. Grynberg, Phys. Rev. A **58**, 3953 (1998); G. Grynberg, P. Horak, C. Mennerat-Robilliard, Europhys. Lett. **49**, 424 (2000).
M.A. Cazalilla, e-print cond-mat/0207715.
---
address:
- '$^{\sharp}$Maastricht University, Department of Quantitative Economics, P.O. Box 616, NL6200 MD Maastricht, The Netherlands.'
- '$^{\star}$Univ. Grenoble Alpes, CNRS, Inria, LIG, 38000, Grenoble, France.'
- '$^{\ddag}$Criteo AI Lab.'
- '$^{\diamond}$Maastricht University, Department of Data Science and Knowledge Engineering, P.O. Box 616, NL6200 MD Maastricht, The Netherlands'
author:
- 'Benoit Duvocelle$^{\sharp}$'
- 'Panayotis Mertikopoulos$^{\star,\ddag}$'
- 'Mathias Staudigl$^{\diamond}$'
- 'Dries Vermeulen$^{\sharp}$'
bibliography:
- 'IEEEabrv.bib'
- 'Bibliography.bib'
title: 'Learning in time-varying games'
---
\[l.s.c.\][lower semi-continuous]{} \[NE\][Nash equilibria]{} \[i.i.d.\][independent and identically distributed]{}
Introduction {#sec:introduction}
============
Preliminaries {#sec:prelims}
=============
Problem setup {#sec:setup}
=============
No-regret learning {#sec:learning}
==================
Equilibrium tracking and convergence analysis {#sec:equilibrium}
=============================================
Learning with payoff-based information {#sec:bandit}
======================================
Concluding remarks {#sec:conclusion}
==================
Basic properties of prox-mappings {#app:Bregman}
=================================
Regret minimization {#app:regret}
===================
---
abstract: 'In this paper we prove a conjecture of B. Shoikhet which claims that two quantization procedures arising from Fourier dual constructions actually coincide.'
address:
- 'D.C.: Institut Camille Jordan, Université Claude Bernard Lyon 1, 43 boulevard du 11 novembre 1918, F-69622 Villeurbanne Cedex France'
- 'G. F: Department of mathematics, ETH Zurich, 8092 Zurich, Switzerland'
- 'C. A. R.: Centro de Análise Matemática, Geometria e Sistemas Dinâmicos, Departamento de Matemática, Instituto Superior Técnico, Av. Rovisco Pais, 1049-001 Lisboa, Portugal'
author:
- Damien Calaque
- Giovanni Felder
- 'Carlo A. Rossi'
title: Deformation quantization with generators and relations
---
Introduction
============
There are two ways to quantize a polynomial Poisson structure $\pi$ on the dual $V^*$ of a finite dimensional complex vector space $V$, using Kontsevich’s formality as a starting point.
The first (obvious) way is to consider the image $\mathcal U(\pi_\hbar)$ of $\pi_\hbar=\hbar\pi$ through Kontsevich’s $L_\infty$-quasi-isomorphism $$\mathcal U:\mathrm T_{\rm poly} (V^*)\longrightarrow \mathrm D_{\rm poly}(V^*)\,,$$ and to take ${\rm m}_\star:={\rm m}+\mathcal U(\pi_\hbar)$ as a $\star$-product quantizing $\pi$, $\mathrm m$ being the standard product on $\mathrm S(V)=\mathcal O_{V^*}$.
The main idea, due to B. Shoikhet [@Sh], behind the second (less obvious) way is to deform the relations of $\mathrm S(V)$ instead of the product ${\rm m}$ itself.
Consider for example a constant Poisson structure $\pi$ on $V^*$: the deformation quantization of $\mathrm S(V)$ w.r.t. $\hbar\pi$ is the Moyal–Weyl algebra $\mathrm S(V)[\![\hbar]\!]$ with Moyal product $\star$ given by $$f_1\star f_2=\mathrm m\!\left(\exp{\frac{\hbar\pi}2}(f_1\otimes f_2)\right),$$ where $\pi$ is understood here as a bidifferential endomorphism of $\mathrm S(V)\otimes\mathrm S(V)$. On the other hand, it is well-known that the Moyal–Weyl algebra associated to $\pi$ is isomorphic to the quotient of the free associative algebra over $\mathbb C[\![\hbar]\!]$ with generators $x_i$ (for $\{x_i\}$ a basis of $V$) by the relations $$x_i\star x_j-x_j\star x_i=\hbar \pi_{ij}.$$ The construction we are interested in generalizes to any polynomial Poisson structure $\pi$ on $V^*$ the two ways of characterizing the Moyal–Weyl algebra associated to $\pi$.
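In the constant case, both the Moyal product and the defining relations can be verified symbolically. The sketch below (an illustration, not part of the construction in this paper) works with a single pair of coordinates and $\pi_{12}=1$, using the standard expansion of $\exp(\hbar\pi/2)$ as iterated bidifferential operators; on polynomials the series terminates, so a finite truncation order suffices:

```python
import sympy as sp
from math import comb, factorial

x, y, hb = sp.symbols('x y hbar')

def star(f, g, order=8):
    """Moyal product for the constant Poisson structure pi = d_x ^ d_y (pi_12 = 1)."""
    total = sp.Integer(0)
    for n in range(order + 1):
        # n-th power of the Poisson bidifferential operator applied to f (x) g
        pn = sum((-1)**k * comb(n, k)
                 * sp.diff(f, x, n - k, y, k) * sp.diff(g, x, k, y, n - k)
                 for k in range(n + 1))
        total += (hb / 2)**n / factorial(n) * pn
    return sp.expand(total)

print(star(x, y) - star(y, x))   # the defining relation: hbar * pi_12 = hbar
```

One can also check on samples that $\star$ is associative and that the constant function $1$ is a unit, in line with the two equivalent descriptions of the Moyal–Weyl algebra above.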
More conceptually, $\mathrm S(V)$ is a quadratic Koszul algebra of the form $\mathrm T(V)/\langle R\rangle$, where $R$ is the subspace of $V^{\otimes 2}$ spanned by vectors of the form $x_i\otimes x_j-x_j\otimes x_i$, with $\{x_i\}$ as in the previous paragraph. The right-hand side of the identity $\mathrm S(V)=\mathrm T(V)/\langle R\rangle$ can be viewed as the $0$-th cohomology of the free associative dg (short for differential graded from now on) algebra $\mathrm T(\wedge^-(V))$ over $\mathbb C$, where $\wedge^-(V)$ is the graded vector space $\wedge^-(V)=\bigoplus_{p=-d+1}^0 \wedge^-(V)_p=\bigoplus_{p=-d+1}^0 \wedge^{-p+1}(V)$, with the differential $\delta$ specified on generators $\{x_{i_1},x_{i_1,i_2},\dots\}$ of $\wedge^-(V)$ by $$\delta (x_{i_1})=0,\ \delta (x_{i_1,i_2})=x_{i_1}\otimes x_{i_2}-x_{i_2}\otimes x_{i_1},\ \text{etc.}$$ Observe that the differential $\delta$ dualizes the product of the graded commutative algebra $\wedge(V^*)$: in fact, $\wedge(V^*)$ is the Koszul dual of $\mathrm S(V)$, and the above complex comes from the identification $\mathrm S(V)=\mathrm{Ext}_{\wedge(V^*)}(\mathbb C,\mathbb C)$ by explicitly computing the cohomology on the right-hand side w.r.t. the bar resolution of $\mathbb C$ as a (left) $\wedge(V^*)$-module (the above dg algebra is the cobar construction of $\mathrm S(V)$, and $\delta$ is the cobar differential). The above dg algebra is acyclic except in degree $0$; the $0$-th cohomology is readily computed from the above formulæ and equals precisely $\mathrm T(V)/\langle R\rangle$.
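In low tensor degree, the computation of the $0$-th cohomology just described is plain finite-dimensional linear algebra: in degree $2$ the image of $\delta$ spans the relation subspace $R$, so the quotient must have the dimension of $\mathrm S^2(V)$. A small numerical check (with $\dim V=3$, an illustrative choice; `(x)` stands for the tensor product in the comments):

```python
import numpy as np
from itertools import product

d = 3                                        # dim V (illustrative choice)
words = list(product(range(d), repeat=2))    # basis x_i (x) x_j of V^{(x)2}
index = {w: i for i, w in enumerate(words)}

# delta(x_{i,j}) = x_i (x) x_j - x_j (x) x_i spans the relation subspace R
rows = []
for i in range(d):
    for j in range(i + 1, d):
        v = np.zeros(len(words))
        v[index[(i, j)]], v[index[(j, i)]] = 1.0, -1.0
        rows.append(v)
R = np.array(rows)

rank = np.linalg.matrix_rank(R)
dim_H0_deg2 = len(words) - rank              # degree-2 part of T(V)/<R>
print(rank, dim_H0_deg2, d * (d + 1) // 2)   # quotient matches dim S^2(V)
```

For $d=3$ one finds $9-3=6=\binom{d+1}{2}$, as expected for the degree-$2$ component of $\mathrm S(V)$.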
Therefore, the idea is to prove that the property of being Koszul and Koszul duality between $\mathrm S(V)$ and $\wedge(V^*)$ is preserved (in a suitable sense, which will be clarified later on) by deformation quantization.
Namely, one makes use of the graded version [@CF] of Kontsevich’s formality theorem, applied to the Fourier dual space $V[1]$. We then have an $L_\infty$-quasi-isomorphism $$\mathcal V:T_{\rm poly}(V^*)\cong T_{\rm poly}(V[1])\longrightarrow D_{\rm poly}(V[1])\,,$$ and the image $\mathcal V(\widehat{\pi_\hbar})$ of $\widehat{\pi_\hbar}$, where $\widehat{\bullet}$ is the isomorphism $\mathrm T_{\rm poly}(V^*)\cong \mathrm T_{\rm poly}(V[1])$ of dg Lie algebras (graded Fourier transform), defines a deformation quantization of the graded commutative algebra $\wedge(V^*)$ as a (possibly curved) $A_\infty$-algebra.
In the context of the Formality Theorem with $2$ branes [@CFFR], the deformation quantization of $\wedge(V^*)$ is Koszul dual (in a suitable sense) w.r.t. the first deformation quantization of $\mathrm S(V)$, and the (possibly curved) $A_\infty$-structure on the deformation quantization of $\wedge(V^*)$ induces a deformation $\delta_\hbar$ of the cobar differential $\delta$, which in turn produces a deformation $\mathcal I_\star$ of the two-sided ideal $\mathcal I=\langle R\rangle$ in $\mathrm T(V)$ of defining relations of $\mathrm
S(V)$.
We are then able to prove the following result, first conjectured by Shoikhet in [@Sh2 Conjecture 2.6]:
\[main\] Given a polynomial Poisson structure $\pi$ on $V^*$ as above, the algebra $A_\hbar:=\big(\mathrm S(V)[\![\hbar]\!],{\rm m}_\star\big)$ is isomorphic to the quotient of $\mathrm T(V)[\![\hbar]\!]$ by the two-sided ideal $\mathcal I_\star$; the isomorphism is an $\hbar$-deformation of the standard symmetrization map from $\mathrm
S(V)$ to $\mathrm T(V)$.
\[r-formal\] We mainly consider here a formal polynomial Poisson structure of the form $\hbar\pi$, but all the arguments presented here apply as well to any formal polynomial Poisson structure $\pi_\hbar=\hbar\pi_1+\hbar^2\pi_2+\cdots$, where $\pi_i$ is a polynomial bivector field.
The paper is organized as follows. In Section 2 we start with a recollection on $A_\infty$-algebras and bimodules. We then formulate the formality theorem with two branes of [@CFFR] in a form suitable for the application at hand. After this we describe the deformation of the cobar complex obtained from $\mathcal
V(\widehat{\pi_\hbar})$ and prove Theorem \[main\]. We conclude the paper with three examples, see Section \[s-3\]: the cases of constant, linear, and quadratic Poisson structures.
We express our gratitude to the anonymous referee for the careful reading of the manuscript and for many useful comments and suggestions, which have helped us improve the paper.
A deformation of the cobar construction of the exterior coalgebra {#s-2}
=================================================================
$A_\infty$-algebras and (bi)modules of finite type {#ss-2-1}
--------------------------------------------------
We first recall the basic notions of the theory of $A_\infty$-algebras and modules, see [@Keller; @CFFR] to fix the conventions and settle some finiteness issues. Note that we allow non-flat $A_\infty$-algebras in our definition. Let $\mathrm{T}(V)=\mathbb
C\oplus V\oplus V^{\otimes 2}\oplus\cdots$ be the tensor coalgebra of a $\mathbb Z$-graded complex vector space $V$ with coproduct $\Delta(v_1,\dots,v_n)=\sum_{i=0}^n(v_1,\dots,v_i)\otimes(v_{i+1},\dots,v_n)$ and counit $\eta(1)=1$, $\eta(v_1,\dots,v_n)=0$ for $n\geq 1$. Here we write $(v_1,\dots,v_n)$ as a more transparent notation for $v_1\otimes\cdots\otimes v_n\in \mathrm{T}(V)$ and set $()=1\in\mathbb C$. Let $V[1]$ be the graded vector space with $V[1]^i=V^{i+1}$ and let the suspension $s\colon V\to V[1]$ be the map $a\mapsto a$ of degree $-1$. Then an $A_\infty$-algebra over $\mathbb C$ is a $\mathbb Z$-graded vector space $B$ together with a codifferential $\mathrm{d}_B\colon \mathrm{T}(B[1])\to \mathrm{T}(B[1])$, namely a linear map of degree 1 which is a coderivation of the coalgebra and such that $\mathrm{d}_B\circ \mathrm{d}_B=0$. A coderivation is uniquely given by its components $\mathrm{d}_B^k\colon B[1]^{\otimes
k}\to B[1]$, $k\geq0$, and any set of maps $B[1]^{\otimes
k}\to B[1]$ of degree $1$ uniquely extends to a coderivation. This coderivation is a codifferential if and only if $\sum_{j+k+l=n}
\mathrm{d}_B^{j+1+l}\circ(\mathrm{id}^{\otimes j}\otimes
\mathrm{d}_B^k\otimes \mathrm{id}^{\otimes l})=0$ for all $n\geq0$. The maps $\mathrm{d}_B^k$ are called [*Taylor components*]{} of the codifferential $\mathrm{d}_B$. If $\mathrm{d}_B^0=0$, the $A_\infty$-algebra is called [*flat*]{}. Instead of $\mathrm{d}_B^k$ it is convenient to describe $A_\infty$-algebras through the product maps $\mathrm{m}^k_B=s^{-1}\circ \mathrm{d}_B^k\circ
s^{\otimes k}$ of degree $2-k$. If $\mathrm{m}_B^k=0$ for all $k\neq
1,2$ then $B$ with differential $\mathrm{m}^1_B$ and product $\mathrm{m}^2_B$ is a differential graded algebra. A [*unital*]{} $A_\infty$-algebra is an $A_\infty$-algebra $B$ with an element $1\in B^0$ such that $$\begin{array}{ll}
\mathrm{m}_B^2(1,b)=\mathrm{m}_B^2(b,1)=b,& \forall b\in B,\\
\mathrm{m}_B^j(b_1,\dots,b_j)=0,& \text{if $b_i=1$ for some $1\leq
i\leq j$ and $j\neq 2$.}
\end{array}$$ The first condition translates to $\mathrm{d}_B^2(s1,b)=b=(-1)^{|b|-1}\mathrm{d}_B^2(b,s1)$, if $b\in
B[1]$ has degree $|b|$. A [*right $A_\infty$-module $M$*]{} over an $A_\infty$-algebra $B$ is a graded vector space $M$ together with a degree one codifferential $\mathrm{d}_M$ on the cofree right $\mathrm{T}(B[1])$-comodule $M[1]\otimes \mathrm{T} (B[1])$ cogenerated by $M$. The Taylor components are $\mathrm{d}_M^j\colon
M[1]\otimes B[1]^{\otimes j}\to M[1]$, and in the unital case we require that $\mathrm{d}_M^1(m,s1)=(-1)^{|m|-1}m$ and that $\mathrm{d}_M^j(m,b_1,\dots,b_j)=0$ if $j\geq 2$ and some $b_i$ equals $s1$. Left modules are defined similarly. An $A_\infty$-$A$-$B$-bimodule $M$ over $A_\infty$-algebras $A$, $B$ is the datum of a codifferential on the $\mathrm{T}(A[1])$-$\mathrm{T}(B[1])$-bicomodule $\mathrm{T}(A[1])\otimes M[1]\otimes \mathrm{T}(B[1])$, given by its Taylor components $\mathrm{d}_M^{j,k}\colon A[1]^{\otimes j}\otimes M[1]\otimes B[1]^{\otimes k}\to M[1]$. The following is a simple but important observation.
If $M$ is an $A_\infty$-$A$-$B$-bimodule and $A$ is a flat $A_\infty$-algebra then $M$ with Taylor components $\mathrm{d}_M^{0,k}$ is a right $A_\infty$-module over $B$.
Morphisms of $A_\infty$-algebras ($A_\infty$-(bi)modules) are (degree 0) morphisms of graded counital coalgebras (respectively, (bi)comodules) commuting with the codifferentials. Morphisms of tensor coalgebras and of free comodules are again uniquely determined by their Taylor components. For instance a morphism of right $A_\infty$-modules $M\to N$ over $B$ is uniquely determined by the components $f_j\colon M[1]\otimes B[1]^{\otimes j}\to N[1]$.
A morphism between cofree (left-, right-, bi-) comodules over the cofree tensor coalgebra is said to be of [*finite type*]{} if all but finitely many of its Taylor components vanish. Therefore, by abuse of terminology, we may speak of a morphism of finite type between (left-, right-, bi-) $A_\infty$-modules over an $A_\infty$-algebra.
The identity morphism is of finite type and the composition of morphisms of finite type is again of finite type.
The unital algebra of endomorphisms of finite type of a right $A_\infty$-module $M$ over an $A_\infty$-algebra $B$ is the $0$-th cohomology of a differential graded algebra $\underline{\mathrm{End}}_{-B}(M)=\oplus_{j\in\mathbb
Z}\underline{\mathrm{End}}^j_{-B}(M)$. The component of degree $j$ is the space of endomorphisms of degree $j$ of finite type of the comodule $M[1]\otimes \mathrm{T}(B[1])$. The differential is the graded commutator $\delta f=[\mathrm{d}_M,f]=\mathrm{d}_M\circ
f-(-1)^jf\circ \mathrm{d}_M$ for $f\in
\underline{\mathrm{End}}^j_{-B}(M)$. If $M$ is an $A_\infty$-$A$-$B$-bimodule and $A$ is flat, then $\underline{\mathrm{End}}_{-B}(M)$ is defined and the left $A$-module structure induces a [*left action*]{} $\mathrm{L}_A$, which is a morphism of $A_\infty$-algebras $A\to
\underline{\mathrm{End}}_{-B}(M)$: its Taylor components are $\mathrm{L}^j_A(a)^k(m\otimes b)=\mathrm{d}_M^{j,k}(a\otimes
m\otimes b)$, $a\in A[1]^{\otimes j}$, $m\in M[1]$, $b\in
B[1]^{\otimes k}$.
Let $M$ be a right $A_\infty$-module over a unital $A_\infty$-algebra $B$. Then the subspace $\underline{\mathrm{End}}_{-B^+}(M)$ of endomorphisms $f$ such that $f^j(m,b_1,\dots,b_j)=0$ whenever $b_i=s1$ for some $i$ is a differential graded subalgebra.
We call this differential graded subalgebra the subalgebra of [*normalized*]{} endomorphisms.
It is clear from the formula for Taylor components of the composition that normalized endomorphisms form a graded subalgebra: $(f\circ g)^k=\sum_{i+j=k} f^j\circ (g^i\otimes
\mathrm{id}_{B[1]}^{\otimes j})$. The formula for the Taylor components of the differential of an endomorphism $f$ is $$\begin{aligned}
(\delta f)^k&=&
\sum_{i+j=k}
\bigl(
\mathrm{d}_M^j\circ
(f^i\otimes\mathrm{id}_{B[1]}^{\otimes j})
-(-1)^{|f|}
f^i\circ
(\mathrm{d}_M^j\otimes\mathrm{id}_{B[1]}^{\otimes i})
\\
&&-(-1)^{|f|} f^{k-j+1}\circ
(\mathrm{id}_{M[1]}\otimes
\mathrm{id}_{B[1]}^{\otimes i}
\otimes \mathrm{d}_B^j\otimes
\mathrm{id}_{B[1]}^{\otimes(k-i-j)})\bigr).\end{aligned}$$ If $f$ is normalized and $b_i=s1$ for some $i$, then only two terms contribute non-trivially to $(\delta f)^k(m,b_1,\dots,b_k)$, namely $f^{k-1}(m,b_1,\dots,\mathrm{d}_B^2(s1,b_{i+1}),\dots)$ (or $\mathrm{d}_M^1(f^{k-1}(m,b_1,\dots,b_{k-1}),s1)$ if $i=k$) and $f^{k-1}(m,b_1,\dots, \mathrm{d}_B^2(b_{i-1},s1),\dots)$ (or $f^{k-1}(\mathrm{d}_M^1(m,s1),b_2,\dots)$ if $i=1$). Due to the unital condition these two terms are equal up to sign, hence cancel together.
The same definitions apply to $A_\infty$-algebras and $A_\infty$-bimodules over $\mathbb C[\![\hbar]\!]$ with completed tensor products and continuous homomorphisms for the $\hbar$-adic topology, so that for vector spaces $V,W$ we have $V[\![\hbar]\!]\otimes_{\mathbb{C}[\![\hbar]\!]}W[\![\hbar]\!]=(V\otimes_{\mathbb
C} W)[\![\hbar]\!]$ and $\mathrm{Hom}_{\mathbb
C[\![\hbar]\!]}(V[\![\hbar]\!],W[\![\hbar]\!])=\mathrm{Hom}_{\mathbb
C}(V,W)[\![\hbar]\!]$. A flat deformation of an $A_\infty$-algebra $B$ is an $A_\infty$-algebra $B_\hbar$ over $\mathbb C[\![\hbar]\!]$ which, as a $\mathbb C[\![\hbar]\!]$-module, is isomorphic to $B[\![\hbar]\!]$ and such that $B_\hbar/\hbar B_\hbar\simeq B$. Similarly we have flat deformations of (bi)modules. A right $A_\infty$-module $M_\hbar$ over $B_\hbar$ which is a flat deformation of $M$ over $B$ is given by Taylor coefficients $\mathrm{d}_{M_\hbar}^j\in\mathrm{Hom}_{\mathbb
C}(M[1]\otimes B[1]^{\otimes j},M[1])[\![\hbar]\!]$. The differential graded algebra $\underline{\mathrm{End}}_{-B_\hbar}(M_\hbar)$ of endomorphisms of finite type is then defined as the direct sum of the homogeneous components of $\mathrm{End}^{\mathrm{
finite}}_{\mathrm{comod}-\mathrm{T}(B[1])}(M[1]\otimes
\mathrm{T}(B[1]))[\![\hbar]\!]$ with differential $\delta_\hbar=[\mathrm{d}_{M_\hbar},\ ]$. Thus its degree $j$ part is the $\mathbb C[\![\hbar]\!]$-module $$\underline{\mathrm{End}}^j_{-B_\hbar}(M_\hbar)=
\left(\oplus_{k\geq0}\mathrm{Hom}^j(M[1]\otimes B[1]^{\otimes
k},M[1])\right)\![\![\hbar]\!],$$ where $\mathrm{Hom}^j$ is the space of homomorphisms of degree $j$ between graded vector spaces over $\mathbb C$.
Finally, the following notation will be used: if $\phi\colon
V_1[1]\otimes\cdots\otimes V_n[1]\to W[1]$ is a linear map and $V_i,W$ are graded vector spaces or free $\mathbb C[\![\hbar]\!]$-modules, we set $$\phi(v_1|\cdots|v_n)=s^{-1}\phi(sv_1\otimes\cdots\otimes
sv_n),\qquad v_i\in V_i.$$
Formality theorem for two branes and deformation of bimodules {#ss-2-2}
-------------------------------------------------------------
Let $A=\mathrm{S}(V)$ be the symmetric algebra of a finite dimensional vector space $V$, viewed as a graded algebra concentrated in degree 0. Let $B=\wedge (V^*)=\mathrm{S}(V^*[-1])$ be the exterior algebra of the dual space with $\wedge^i(V^*)$ of degree $i$ [^1]. For any graded vector space $W$, the augmentation module over $\mathrm S(W)$ is the unique one-dimensional module on which $W$ acts by $0$. Let $A_\hbar=(A[\![\hbar]\!],\star)$ be the Kontsevich deformation quantization of $A$ associated with a polynomial Poisson bivector field $\hbar\pi$. It is an associative algebra over $\mathbb
C[\![\hbar]\!]$ with unit $1$. The graded version of the formality theorem, applied to the same Poisson bracket (more precisely, to the image of $\hbar\pi$ w.r.t. the isomorphism of dg Lie algebras $T_\mathrm{poly}(A)\cong T_\mathrm{poly}(B)$), also defines a deformation quantization $B_\hbar$ of the graded commutative algebra $B$. However $B_\hbar$ is in general a unital $A_\infty$-algebra with non-trivial Taylor components $\mathrm{d}_{B_\hbar}^k$ for all $k$ including $k=0$. Still, the differential graded algebra $\underline{\mathrm{End}}_{-B_\hbar}(M_\hbar)$ is defined since $A_\hbar$ is an associative algebra and thus a flat $A_\infty$-algebra. The following result is a consequence of the formality theorem for two branes (=submanifolds) in an affine space, in the special case where one brane is the whole space and the other a point, and is proved in [@CFFR]. It is a version of the Koszul duality between $A_\hbar$ and $B_\hbar$.
\[p-kosz\] Let $A=\mathrm{S}(V)$, $B=\wedge (V^*)$ for some finite dimensional vector space $V$ and let $A_\hbar$, $B_\hbar$ be their deformation quantizations corresponding to a polynomial Poisson bracket.
1. There exists a one-dimensional $A_\infty$-$A$-$B$-bimodule $K$, which, as a left $A$-module and as a right $B$-module, is the augmentation module, and such that $\mathrm{L}_A\colon A\to
\underline{\mathrm{End}}_{-B}(K)$ is an $A_\infty$-quasi-isomorphism.
2. The bimodule $K$ admits a flat deformation $K_\hbar$ as an $A_\infty$-$A_\hbar$-$B_\hbar$-bimodule such that $\mathrm{L}_{A_\hbar}\colon A_\hbar\to
\underline{\mathrm{End}}_{-B_\hbar}(K_\hbar)$ is an $A_\infty$-quasi-isomorphism.
3. The $A_\infty$-$A_\hbar$-$B_\hbar$-bimodule $K_\hbar$ is in particular a right $A_\infty$-module over the unital $A_\infty$-algebra $B_\hbar$. The first Taylor component $\mathrm{L}_{A_\hbar}^1$ sends $A_\hbar$ to the differential graded subalgebra $\underline{\mathrm{End}}_{-B^+_\hbar}(K_\hbar)$ of normalized endomorphisms.
The proof of (i) and (ii) is contained in [@CFFR]. The claim (iii) follows from the explicit form of the Taylor components $\mathrm{d}_{K_\hbar}^{1,j}$, given in [@CFFR], appearing in the definition of $\mathrm{L}^1_A$: $$\mathrm{L}^1_{A_\hbar}(a)^j(1|b_1|\cdots|b_j)=\mathrm{d}_{K_\hbar}^{1,j}(a|1|b_1|\dots|b_j).$$ Namely $\mathrm{d}^{1,j}_{K_\hbar}$ is a power series in $\hbar$ whose term of degree $m$ is a sum over certain directed graphs with $m$ vertices in the complex upper half-plane (vertices of the first type) and $j+2$ ordered vertices on the real axis (vertices of the second type). To each vertex of the first type is associated a copy of $\hbar\pi$; to the first vertex of the second type is associated $a$, to the second $1$, and to the remaining $j$ vertices the elements $b_i$. An example of such a graph is depicted in Figure 4, Subsection \[ss-3-2\].
Each graph contributes a multidifferential operator acting on $a,b_1,\dots,b_j$ times a weight, which is an integral of a differential form on a compactified configuration space of $m$ points in the complex upper half-plane and $j+2$ ordered points on the real axis modulo dilations and real translations. The convention is that to each directed edge of such a graph is associated a derivative acting on the element associated to the final point of the said edge and a $1$-form on the corresponding compactified configuration space.
Since each $b_i$ may be regarded as a constant polyvector field on $V^*$, no edge has its final point at a vertex of the second type where a $b_i$ sits (and obviously none at the vertex where the constant function $1$ sits). If $j\geq 1$ and $b_i\in\mathbb C$ for some $1\leq i\leq j$, the vertex of the second type where $b_i$ sits is neither the starting nor the final point of any directed edge. Since $j\geq 1$, the dimension of the corresponding compactified configuration space is strictly positive, and we may use dilations and real translations to fix vertices (of the first and/or second type) distinct from the one where $b_i$ sits: there then remains a $1$-dimensional factor (the interval over which the vertex of $b_i$ ranges) over which there is nothing to integrate, hence the corresponding weight vanishes.
We turn to the description of the differential graded algebra $\underline{\mathrm{End}}_{-B^+_\hbar}(K_\hbar)$. Let $B^+=\oplus_{j\geq1}\wedge^j(V^*)=\wedge(V^*)/\mathbb C$. We have $$\underline{\mathrm{End}}^j_{-B^+_\hbar}(K_\hbar)= (\oplus_{k\geq
0}\mathrm{Hom}^j(K[1]\otimes B^+[1]^{\otimes k},K[1]))[\![\hbar]\!],$$ with product $$(\phi\cdot\psi)\!(1|b_1|\cdots|b_n)=\sum_k
\psi(1|b_1|\cdots|b_k)\phi(1|b_{k+1}|\cdots|b_n).$$ It follows that the algebra $\underline{\mathrm{End}}_{-B^+_\hbar}(K_\hbar)$ is isomorphic to the tensor algebra $\mathrm{T} (B^+[1]^*)[\![\hbar]\!]$ generated by $\mathrm{Hom}(K[1]\otimes B^+[1],K[1])\simeq B^+[1]^*$. In particular it is concentrated in non-positive degrees.
The restriction $\delta_\hbar\colon B^+[1]^*\to
\mathrm{T}(B^+[1]^*)[\![\hbar]\!]$ of the differential of $\underline{\mathrm{End}}_{-B^+_\hbar}(K_\hbar) \simeq \mathrm{T}
(B^+[1]^*)[\![\hbar]\!]$ to the generators is dual to the $A_\infty$-structure $\mathrm d_{B_\hbar}$ in the sense that $$(\delta_\hbar f)^k(z\otimes b)=(-1)^{|f|}f(z\otimes
\mathrm{d}^k_{B_\hbar}(b)),\ z\in K[1], \ b\in
B[1]^{\otimes k},$$ for any $f\in\mathrm{Hom}(K[1]\otimes B^+[1],K[1])\simeq B^+[1]^*$.
The $A_\infty$-structure of $B_\hbar$ is given by Taylor components $\mathrm{d}^k_{B_\hbar}\colon B[1]^{\otimes k}\to
B[1]$. By definition the differential on $\underline{\mathrm{End}}^j_{-B^+_\hbar}(K_\hbar)$ is the graded commutator $\delta_\hbar f=[\mathrm{d}_{K_\hbar},f]$. In terms of Taylor components, $$\begin{aligned}
(\delta_\hbar
f)^k(z\otimes b_1\otimes\cdots\otimes b_k)&=\mathrm{d}^{k-1}_{K_\hbar}(f(z\otimes b_1)\otimes b_{2}\otimes \cdots\otimes b_k)-(-1)^{|f|}f(\mathrm{d}^{k-1}_{K_\hbar}(z\otimes b_1\otimes \cdots\otimes b_{k-1})\otimes b_k)+\\
&\phantom{=}+(-1)^{|f|}f(z\otimes \mathrm{d}^k_{B_\hbar}(b_{1}\otimes
\cdots\otimes b_{k})).\end{aligned}$$ The first two terms vanish if $b_i\in B^+[1]$ for degree reasons.
Thus $\mathrm{L}_{A_\hbar}$ induces an isomorphism from $A_\hbar$ to the cohomology in degree $0$ of $\underline{\mathrm{End}}_{-B^+_\hbar}(K_\hbar)\simeq \mathrm{T}
(B^+[1]^*)[\![\hbar]\!]$.
For $\hbar=0$ this complex is Adams' cobar construction of the graded coalgebra $B^*$, which is a free resolution of $\mathrm{S}(V)$.
\[t-sh\] The composition $$\mathrm{L}^1_{A_\hbar}\colon A_\hbar\to
\underline{\mathrm{End}}_{-B^+_\hbar}(K_\hbar)\stackrel\simeq\to
\mathrm{T}(B^+[1]^*)[\![\hbar]\!],$$ induces on cohomology an algebra isomorphism $$\mathrm{L}^1_{A_\hbar}\colon A_\hbar\to \mathrm{T}(V)/\left(\mathrm{T}(
V)\otimes \delta_{\hbar}((\wedge^2V^*)^*) \otimes \mathrm{T}(V)\right),$$ where $\delta_{\hbar}\colon (\wedge^2V^*)^*\to
\mathrm{T}(V)[\![\hbar]\!]$ is dual to $\oplus_{k\geq 0}
\mathrm{d}^k_{B_\hbar}\colon (B^+[1]^0)^{\otimes k}=V^{\otimes k}\to
B^+[1]^1=\wedge^2 V^*$.
That the map is an isomorphism follows from the classical Koszul duality, which gives the statement for $\hbar=0$. As the cohomology is concentrated in degree $0$, it remains so for the deformed differential $\delta_\hbar$ over $\mathbb C[\![\hbar]\!]$.
As a graded vector space, $B^+[1]^*=V\oplus
(\wedge^2V^*)^*\oplus\cdots$, with $(\wedge^iV^*)^*$ in degree $1-i$. Therefore the complex $\mathrm{T}(B^+[1]^*)[\![\hbar]\!]$ is concentrated in non-positive degrees and begins with $$\cdots\to \left(\mathrm{T}(V)\otimes (\wedge^2 V^*)^*\otimes
\mathrm{T}(V)\right)[\![\hbar]\!]\to \mathrm{T}(V)[\![\hbar]\!]\to 0.$$ Thus to compute the degree $0$ cohomology we only need the restriction of the Taylor components $\mathrm{d}^k_{B_\hbar}$ on $\mathrm{T}(V^*)=\mathrm{T} (B^+[1])^0$, whose image is in $B[1]^1=\wedge^2V^*$.
This theorem gives a presentation of the algebra $A_\hbar$ by generators and relations. Let $x_1,\dots,x_d\in V$ be a system of linear coordinates on $V^*$, dual to a basis $e_1,\dots,e_d$. For $I=\{i_1<\cdots<i_k\}\subset\{1,\dots,d\}$, let $x_I\in(\wedge^kV^*)^*$ be dual to the basis element $e_{i_1}\wedge\cdots\wedge e_{i_k}$. Then $A_\hbar$ is isomorphic to the algebra generated by $x_1,\dots,x_d$ subject to the relations $\delta_\hbar(x_{ij})=0$. Up to order $1$ in $\hbar$ the relations are obtained from the cobar differential and the graph of Figure 1: $$\delta_\hbar(x_{ij})=x_i\otimes x_j-x_j\otimes
x_i-\hbar\mathrm{Sym}(\pi_{ij})+O(\hbar^2).$$ Here $\mathrm{Sym}$ is the symmetrization map $\mathrm{S}(V)\to
\mathrm{T}(V)$.
The lowest order of the isomorphism induced by $\mathrm{L}^1_A$ on generators $x_i\in V$ of $A_\hbar=\mathrm{S}(V)[\![\hbar]\!]$ was computed in [@CFFR]: $$\mathrm{L}^1_A(x_i)=x_i+O(\hbar).$$ The higher order terms $O(\hbar)$ are in general non-trivial (for example in the case of the dual of a Lie algebra, see below).
By comparing our construction with the arguments in [@Sh2], we see that the differential $\mathrm{d}_\hbar$ corresponds to the image of $\mathcal V(\widehat{\pi_\hbar})$, where the notations are as in the introduction, by the quasi-isomorphism $\Phi_1$ in [@Sh2 Subsection 1.4]. Hence, Theorem \[t-sh\] provides a proof of [@Sh2 Conjecture 2.6] with the amendment that the isomorphism $A_\hbar\to \mathrm{T}(V)/ \mathcal I_\star$ is not just given by the symmetrization map but has non-trivial corrections.
Examples {#s-3}
========
We now want to examine more closely certain special cases of interest. We assume here that the reader has some familiarity with the graphical techniques of [@K; @CF; @CFFR]. To obtain the relations $\delta_\hbar(x_{ij})$ we need $\mathrm
d^m_{B_\hbar}(b_1|\cdots|b_m)\in\wedge^2V^*[\![\hbar]\!]$, for $b_i\in
V^*\subset B^+$. The contribution at order $n$ in $\hbar$ to this is given by a sum over the set $\mathcal G_{n,m}$ of admissible graphs with $n$ vertices of the first type and $m$ of the second type.
The Moyal–Weyl product on $V$ {#ss-3-1}
-----------------------------
Let $\pi_\hbar=\hbar \pi$ be a constant Poisson bivector on $V^*$, uniquely characterized by a complex skew-symmetric $d\times d$-matrix $\pi_{ij}$.
In this case, Kontsevich’s deformed algebra $A_\hbar$ has an explicit description: the associative product on $A_\hbar$ is the Moyal–Weyl product $$f_1\star f_2=\mathrm m \circ \exp\Big(\frac{1}2 \pi_\hbar\Big)(f_1\otimes f_2),$$ where $\pi_\hbar$ is viewed here as a bidifferential operator, the exponential has to be understood as a power series of bidifferential operators, and $\mathrm m$ denotes the ($\mathbb C[\![\hbar]\!]$-linear) product on polynomial functions on $V^*$. On the other hand, it is possible to compute explicitly the complete $A_\infty$-structure on $B_\hbar$.
\[l-weyl\] For a constant Poisson bivector $\pi_\hbar$ on $V^*$, the $A_\infty$-structure on $B_\hbar$ has only two non-trivial Taylor components, namely $$\label{eq-weyl}
\mathrm d_{B_\hbar}^0(1)=\hbar \pi,\quad \mathrm d_{B_\hbar}^2(b_1|b_2)=(-1)^{|b_1|}b_1\wedge b_2,\quad b_i\in B_\hbar,\quad i=1,2.$$
We consider $\mathrm d^m_{B_\hbar}$ first in the case $m=0$. Admissible graphs contributing to $\mathrm d_{B_\hbar}^0$ belong to $\mathcal
G_{n,0}$, for $n\geq 1$. For $n\geq 2$, all graphs give contributions involving a derivative of $\pi_{ij}$ and thus vanish. There remains only the graph in $\mathcal G_{1,0}$, whence the first identity in (\[eq-weyl\]).
For the same reasons, $\mathrm d_{B_\hbar}^m$ is trivial if $m\geq
1$ and $m\neq 2$: in the case $m=1$, the contributions come from admissible graphs in $\mathcal G_{n,1}$, with $n\geq 1$, and vanish as in the case $m=0$.
For $m\geq 3$, contributions coming from admissible graphs in $\mathcal G_{n,m}$, $n\geq 1$, are trivial by a dimensional argument.
Finally, once again, the only possibly non-trivial contribution comes from the unique admissible graph in $\mathcal G_{0,2}$ which gives the product.
As a consequence, the differential $\delta_\hbar$ can be explicitly computed, namely $$\delta_{\hbar}(x_{ij})=x_i\otimes x_j-x_j\otimes
x_i-\hbar\pi_{ij}.$$ This provides the description of the Moyal–Weyl algebra as the algebra generated by $x_i$ with relations $[x_i,x_j]=\hbar\pi_{ij}$.
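As an independent sanity check (ours, not part of the paper), the Moyal–Weyl product of polynomials can be implemented directly, since on polynomials the exponential series terminates; the sketch below, for $\dim V=2$ and $\pi_{12}=1$, verifies the relation $x_1\star x_2-x_2\star x_1=\hbar\pi_{12}$ and associativity on a sample triple:

```python
import sympy as sp
from itertools import product

hbar = sp.symbols('hbar')
x = sp.symbols('x1 x2')
# constant skew-symmetric Poisson matrix pi_{ij} on V^*, with dim V = 2
pi = [[0, 1], [-1, 0]]

def moyal(f, g, order=6):
    """Moyal-Weyl star product of two polynomials, truncated at hbar^order.
    On polynomials the series terminates, so a large enough 'order'
    gives the exact product."""
    total = 0
    for k in range(order + 1):
        term = 0
        # sum over all k-tuples of index pairs (i_l, j_l)
        for idx in product(range(2), repeat=2 * k):
            iis, jjs = idx[:k], idx[k:]
            coeff = 1
            for i, j in zip(iis, jjs):
                coeff *= pi[i][j]
            if coeff == 0:
                continue
            df, dg = f, g
            for i in iis:
                df = sp.diff(df, x[i])
            for j in jjs:
                dg = sp.diff(dg, x[j])
            term += coeff * df * dg
        total += (hbar / 2) ** k / sp.factorial(k) * term
    return sp.expand(total)

x1, x2 = x
comm = moyal(x1, x2) - moyal(x2, x1)
print(comm)                      # hbar, i.e. [x1, x2]_* = hbar * pi_12
lhs = moyal(moyal(x1**2, x2), x1 * x2)
rhs = moyal(x1**2, moyal(x2, x1 * x2))
print(sp.expand(lhs - rhs))      # 0 (associativity on this triple)
```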
We finally observe that the quasi-isomorphism $\mathrm L_{A_\hbar}^1$ coincides, by a direct computation, with the usual symmetrization morphism.
The universal enveloping algebra of a finite-dimensional Lie algebra $\mathfrak g$ {#ss-3-2}
----------------------------------------------------------------------------------
We now consider a finite-dimensional complex Lie algebra $V=\mathfrak g$ and its dual space $\mathfrak g^*$ with the Kirillov–Kostant–Souriau Poisson structure. With respect to a basis $\{x_i\}$ of $\mathfrak g$, we have $$\pi=f_{ij}^k x_k\partial_i\wedge \partial_j,$$ where the $f_{ij}^k$ denote the structure constants of $\mathfrak g$ in the chosen basis.
It has been proved in [@K Subsubsection 8.3.1] that Kontsevich’s deformed algebra $A_\hbar$ is isomorphic to the universal enveloping algebra $\mathrm U_\hbar(\mathfrak g)$ of $\mathfrak g[\![\hbar]\!]$ for the $\hbar$-shifted Lie bracket $\hbar[\
,\ ]$.
On the other hand, we may, once again, compute explicitly the $A_\infty$-structure on $B_\hbar$.
\[l-CE\] The $A_\infty$-algebra $B_\hbar$ determined by $\pi_\hbar$, where $\pi$ is the Kirillov–Kostant–Souriau Poisson structure on $\mathfrak g^*$, has only two non-trivial Taylor components, namely $$\label{eq-CE}
\mathrm d_{B_\hbar}^1(b_1)=\mathrm d_{\mathrm{CE}}(b_1),\quad \mathrm d_{B_\hbar}^2(b_1|b_2)=(-1)^{|b_1|}b_1\wedge b_2,\quad b_i\in B_\hbar,\quad i=1,2,$$ where $\mathrm d_{\mathrm{CE}}$ denotes the Chevalley–Eilenberg differential of $\mathfrak g$, endowed with the rescaled Poisson bracket $\hbar[\bullet,\bullet]$.
By dimensional arguments and because of the linearity of $\pi_\hbar$, there are only two admissible graphs in $\mathcal
G_{1,0}$ and $\mathcal G_{2,0}$, which may contribute non-trivially to the curvature of $B_\hbar$, namely,
The operator $\mathcal O_\Gamma^B$ for the graph in $\mathcal
G_{1,0}$ vanishes upon setting $x=0$; the operator $\mathcal
O_\Gamma^B$ for the graph in $\mathcal G_{2,0}$ vanishes in virtue of [@K Lemma 7.3.1.1].
We now consider the case $m\geq 1$. We consider an admissible graph $\Gamma$ in $\mathcal G_{n,m}$ and the corresponding operator $\mathcal O_\Gamma^B$: the degree of the operator-valued form $\omega_\Gamma^B$ equals the number of derivations acting on the different entries associated to vertices either of the first or second type. Thus, the operator $\mathcal O_\Gamma^B$ has a polynomial part (since all structures involved are polynomial on $\mathfrak g^*$): since the polynomial part of any of its arguments in $B_\hbar$ has degree $0$, the polynomial degree of $\mathcal
O_\Gamma^B$ must also be $0$. A direct computation shows that this condition is satisfied if and only if $n+m=2$, because $\pi_\hbar$ is linear.
Obviously, the previous identity is never satisfied if $m\geq 3$, which implies immediately that the only non-trivial Taylor components appear when $m=1$ and $m=2$. When $m=1$, the previous equality forces $n=1$: there is only one admissible graph $\Gamma$ in $\mathcal G_{1,1}$, whose corresponding operator is non-trivial, namely,
The weight is readily computed, and the identification with the Chevalley–Eilenberg differential is then obvious.
Finally, when $m=2$, the result is clear by previous computations.
Thus $\delta_\hbar$ is given by $$\delta_\hbar(x_{ij})= x_{i}\otimes x_{j}-x_{j}\otimes x_{i}-\hbar
\sum_{k}f_{ij}^k x_k.$$ Hence we reproduce the result that $A_\hbar$ is isomorphic to $\mathrm U_\hbar(\mathfrak g)$. We now want to give an explicit expression for the isomorphism $\mathrm L^1_{A_\hbar}$.
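For concreteness (our example, not taken from the paper): with $\mathfrak g=\mathfrak{sl}_2$ and $\hbar=1$, the relations $\delta_\hbar(x_{ij})=0$ become the defining relations $x_ix_j-x_jx_i=\sum_kf_{ij}^kx_k$ of $\mathrm U(\mathfrak g)$; the structure constants $f_{ij}^k$ can be read off in the defining representation:

```python
import sympy as sp
import itertools

# sl(2, C) in the defining representation; ordered basis (e, f, h)
e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
h = sp.Matrix([[1, 0], [0, -1]])
basis = [e, f, h]

a, b, c = sp.symbols('a b c')
structure = {}
for i, j in itertools.combinations(range(3), 2):
    comm = basis[i] * basis[j] - basis[j] * basis[i]
    # expand [x_i, x_j] in the basis to read off the f_ij^k
    eqs = list(a * e + b * f + c * h - comm)
    sol = sp.solve(eqs, [a, b, c])
    structure[(i, j)] = (sol[a], sol[b], sol[c])

print(structure)
# {(0, 1): (0, 0, 1), (0, 2): (-2, 0, 0), (1, 2): (0, 2, 0)}
```

i.e. $[e,f]=h$, $[e,h]=-2e$, $[f,h]=2f$, as expected.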
We consider the expression $\mathrm L^1_{A_\hbar}(a)^m(1|b_1|\cdots|b_m)=\mathrm d_{K_\hbar}^{1,m}(a|1|b_1|\cdots|b_m)$. Degree reasons imply that the sum of the degrees of the elements $b_i$ equals $m$; furthermore, if the degree of some $b_i$ is strictly bigger than $1$, the previous equality forces a different $b_j$ to have degree $0$, whence the corresponding expression vanishes by Proposition \[p-kosz\], $(iii)$. Hence, the degree of each $b_i$ is precisely $1$. We now consider a general graph $\Gamma$ with $n$ vertices of the first type and $m+2$ ordered vertices of the second type; to each vertex of the first type is associated a copy of $\hbar\pi$, while to the ordered vertices of the second type are associated $a$, $1$ and the $b_i$s in lexicographical order. We denote by $p$ the number of edges departing from the $n$ vertices of the first type and hitting the first vertex of the second type (observe that in this situation edges departing from vertices of the first type can only hit vertices of the first type or the first vertex of the second type): in the present framework, edges have only one color (we refer to [@CFFR Section 7] and [@CRT Subsection 3.2] for more details on the $4$-colored propagators and corresponding superpropagators entering the $2$ brane Formality Theorem), thus there can be [**at most**]{} one edge hitting the first vertex of the second type, whence $p\leq n$. We now compute the polynomial degree of the multidifferential operator associated to the graph $\Gamma$: it equals $n-j-(2n-p)=p-j-n$, where $0\leq j\leq m$ is the number of edges from the last $m$ vertices of the second type hitting vertices of the first type. The first $n$ comes from the fact that $\pi$ is a linear bivector field. 
As $p-j-n\geq 0$ and $p\leq n$, it follows immediately that $p=n$ and $j=0$, [*i.e.*]{} the edges departing from the last $m$ vertices of the second type all hit the first vertex of the second type, and from each vertex of the first type departs exactly one edge hitting the first vertex of the second type; the remaining $n$ edges must hit vertices of the first type.
In summary, a general graph $\Gamma$ appearing in $\mathrm L_{A_\hbar}^1(a)(1|b_1|\cdots|b_m)$ is the disjoint union of wheel-like graphs $\mathcal W_n$, $n\geq 1$, and of the graph $\beta_m$, $m\geq 0$; such graphs are depicted in Figure 4.
Observe that the $1$-wheel $\mathcal W_1$ appears here explicitly because of the presence of short loops in the $2$ brane Formality Theorem [@CFFR]: the integral weight of the $1$-wheel has been computed in [@CRT] and equals $-1/4$, while the corresponding translation invariant differential operator is the trace of the adjoint representation of $\mathfrak g$. Any multiple of $c_1=\mathrm{tr}_\mathfrak g\circ\mathrm{ad}$ defines a constant vector field on $\mathfrak g^*$: either as an easy consequence of the Formality Theorem of Kontsevich [@K] or by an explicit computation using Stokes’ Theorem, $c_1$ is a derivation of $(A_\hbar,\star)$, where $\star$ is the deformed product on $A_\hbar$ [*via*]{} Kontsevich’s deformation quantization.
The integral weight of the graph $\beta_m$ is $1/m!$ and the corresponding multidifferential operator is simply the symmetrization morphism; the integral weight of the wheel-like graph $\mathcal W_n$, $n\geq 2$, has been computed in [@W; @VdB] (observe that, except the case $n=1$, the integral weights of $\mathcal W_n$ for $n$ odd vanish) and equal the modified Bernoulli numbers, and the corresponding translation-invariant differential operators are $c_n=\mathrm{tr}_\mathfrak g(\mathrm{ad}^n(\bullet))$.
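The modified Bernoulli numbers mentioned here are commonly packaged in the generating series $\sum_{n\ge1}\hat b_{2n}x^{2n}=\tfrac12\log\frac{\sinh(x/2)}{x/2}$ (this normalization is our assumption); the first few coefficients can be extracted symbolically:

```python
import sympy as sp

x = sp.symbols('x')
# assumed generating series of the modified Bernoulli numbers b_{2n}:
#   sum_{n >= 1} b_{2n} x^(2n) = (1/2) * log( sinh(x/2) / (x/2) )
gen = sp.log(sp.sinh(x / 2) / (x / 2)) / 2
series = sp.series(gen, x, 0, 8).removeO()
coeffs = {2 * n: series.coeff(x, 2 * n) for n in (1, 2, 3)}
print(coeffs)   # {2: 1/48, 4: -1/5760, 6: 1/362880}
```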
Therefore, the isomorphism $\mathrm L_{A_\hbar}^1$ (for $\hbar=1$) equals the composition of the PBW isomorphism from $\mathrm S(\mathfrak g)$ to $\mathrm U(\mathfrak g)$ with Duflo’s strange automorphism; the derivation $-1/4\ c_1$ of the deformed algebra $(A,\star)$ exponentiates to an automorphism of the same algebra. (The fact that $\pi$ is linear permits setting $\hbar=1$; see also [@K Subsubsection 8.3.1] for an explanation.)
Quadratic algebras {#ss-3-3}
------------------
Here we briefly discuss the case where $V^*$ is endowed with a quadratic Poisson bivector field $\pi$: this case has already been considered in detail in [@CFFR Section 8] (see also [@Sh]), where it is proved that the deformation associated with $\pi_\hbar$ preserves the Koszul property.
The main feature of the quadratic case is the degree $0$ homogeneity of the Poisson bivector field, which is reflected in the homogeneity of all structure maps. In particular the Kontsevich star-product on a basis of linear functions has the form $$x_i\star x_j=x_ix_j +\sum_{k,l} S^{kl}_{ij}(\hbar)x_kx_l,$$ for some $S^{kl}_{ij}\in\hbar\mathbb C[\![\hbar]\!]$. Our result implies that this algebra is isomorphic to the quotient of the tensor algebra in the generators $x_i$ by the relations $$x_i\otimes x_j-x_j\otimes x_i=\sum_{k,l}R^{kl}_{ij}(\hbar)x_k\otimes
x_l,$$ for some $R^{kl}_{ij}(\hbar)\in\hbar\mathbb C[\![\hbar]\!]$. The isomorphism sends $x_i$ to $$\mathrm L_{A_\hbar}(x_i)=x_i+\sum_j L^j_i(\hbar)x_j,$$ for some $L^j_i(\hbar)\in\hbar\mathbb C[\![\hbar]\!]$.
A final remark {#ss-3-4}
--------------
We point out that, in [@BG], the authors construct a flat $\hbar$-deformation between a so-called non-homogeneous quadratic algebra and the associated quadratic algebra: the non-homogeneous quadratic algebra is characterized by two linear maps $\alpha$, $\beta$, from $R$ to $V$ and $\mathbb C$ respectively, which satisfy certain cohomological conditions. In our situation, it is not difficult to prove that the conditions on $\alpha$ and $\beta$ imply that their sum defines an affine Poisson bivector on $V^*$: hence, instead of considering $\alpha$ and $\beta$ separately, as in [@BG], we treat them together. The two deformations are equivalent, in view of the uniqueness of flat deformations yielding the PBW property, see [@BG].
[^1]: In the case at hand, $V$ is a graded vector space concentrated in degree $0$ and the identification $\wedge(V^*)=\mathrm S(V^*[-1])$ as [**graded algebras**]{} is canonical. For a more general graded vector space $V$, $\mathrm S(V^*[-1])$ and $\wedge(V^*)$ are different objects; still, $\mathrm S^n(V^*[-1])$ is canonically isomorphic to $\wedge^n(V^*)[-n]$ for every $n$ by the [*décalage*]{} isomorphism, which is simply the identity when $V$ is concentrated in degree $0$.
**INTEGRABLE MULTIDIMENSIONAL CLASSICAL AND QUANTUM COSMOLOGY FOR INTERSECTING P-BRANES**

M.A. Grebeniuk (mag@gravi.phys.msu.su)\
*Moscow State University, Physical Faculty, Department of Theoretical Physics, Moscow 117234, Russia*

V.D. Ivashchuk and V.N. Melnikov (melnikov@fund.phys.msu.su)\
*Center for Gravitation and Fundamental Metrology, VNIIMS, Moscow 117313, Russia*
A multidimensional cosmological model describing the evolution of one Einstein space of non-zero curvature and $n$ Ricci-flat internal spaces is considered. The action contains several dilatonic scalar fields $\varphi^I$ and antisymmetric forms $A^I$. When the forms are chosen proportional to the volume forms of $p$-brane submanifolds of the internal space manifold, a Toda-like Lagrange representation is obtained. The Wheeler–De Witt equation for the model is presented. Exact solutions in the classical and quantum cases are obtained when the dimensions of the $p$-branes and the dilatonic couplings obey certain orthogonality conditions.
Introduction
============
In this paper we continue our investigations of the multidimensional gravitational model governed by the action containing several dilatonic scalar fields and antisymmetric forms [@IM]. The action is presented below (see (\[2.1\])). Such an action is typical of special sectors of supergravitational models [@CJS; @SS] and may be of interest when dealing with superstring and M-theories [@GSW; @HTW; @D; @S].
Here we consider a cosmological sector of the model from [@IM]. We recall that this model treats generalized intersecting $p$-brane solutions. Using the $\sigma$-model representation of [@IM] we reduce the equations of motion to a pseudo-Euclidean Toda-like Lagrange system [@IM3] with zero-energy constraint. After separating the one-dimensional Liouville subsystem corresponding to the negative mode (logarithm of quasivolume [@IM3]) we are led to a Euclidean Toda-like system. We consider the simplest case of orthogonal vectors in the exponents of the Toda potential and obtain exact solutions. In this case we deal with an $A_1+\dots+A_1$ Euclidean Toda lattice (a sum of $n$ Liouville actions). Recently an analogous reduction for forms of equal rank was carried out in [@LPX].
In this paper we also consider quantum aspects of the model. Using the $\sigma$-model representation and the standard prescriptions of quantization we are led to the multidimensional Wheeler–De Witt equation. This equation is solved in the “orthogonal” case.
The model
=========
Here, as in [@IM], we consider the model governed by the action \[2.1\] $$S=\frac1{2\kappa^2}\int_{M}d^{D}z\sqrt{|g|}\Big\{{R}[g]-2\Lambda-\delta_{IJ}g^{MN}\partial_M\varphi^I\partial_N\varphi^J-\sum_{I\in\Omega}\frac{\exp(2\lambda_{JI}\varphi^J)}{n_I!}(F^I)^2_g\Big\}+S_{\rm GH},$$ where $g=g_{MN}dz^{M}\otimes dz^{N}$ is the metric, $\varphi^I$ is a dilatonic scalar field, \[2.2\] $$F^I=dA^I=\frac{1}{n_I!}F^I_{M_1\ldots M_{n_I}}dz^{M_1}\wedge\ldots\wedge dz^{M_{n_I}}$$ is an $n_I$-form ($n_I\ge2$) on the $D$-dimensional manifold $M$, $\Lambda$ is a cosmological constant and $\lambda_{JI}\in{\bf R}$, $I,J\in\Omega$. In (\[2.1\]) we denote $|g|=|\det(g_{MN})|$, \[2.3\] $$(F^I)^2_g=F^I_{M_1\ldots M_{n_I}}F^I_{N_1\ldots N_{n_I}}g^{M_1N_1}\ldots g^{M_{n_I}N_{n_I}},$$ and $S_{\rm GH}$ is the standard Gibbons–Hawking boundary term [@GH]. This term is essential for a quantum treatment of the problem. Here $\Omega$ is a non-empty finite set. (The action (\[2.1\]) with $\Lambda=0$ and equal $n_I$ was considered recently in [@KKLP]. For supergravity models with different $n_I$ see also [@LPS1].)
The equations of motion corresponding to (\[2.1\]) have the following form \[2.4\] $$R_{MN}-\frac12 g_{MN}R=T_{MN}-\Lambda g_{MN},\qquad T_{MN}=\sum_{I\in\Omega}\Big[T^{\varphi^I}_{MN}+\exp(2\lambda_{JI}\varphi^J)\,T^{F^I}_{MN}\Big],$$ \[2.5\] $${{\bigtriangleup}}[g]\varphi^J-\sum_{I\in\Omega}\frac{\lambda_{JI}}{n_I!}\exp(2\lambda_{KI}\varphi^K)(F^I)^2_g=0,\qquad {{\bigtriangledown}}_{M_1}[g]\Big(\exp(2\lambda_{KI}\varphi^K)F^{I,M_1\ldots M_{n_I}}\Big)=0,$$ where \[2.6\] $$T^{\varphi^I}_{MN}=\partial_M\varphi^I\partial_N\varphi^I-\frac12 g_{MN}\partial_P\varphi^I\partial^P\varphi^I,$$ \[2.7\] $$T^{F^I}_{MN}=\frac{1}{n_I!}\Big[-\frac12 g_{MN}(F^I)^2_g+n_IF^I_{MM_2\ldots M_{n_I}}F_N^{I,M_2\ldots M_{n_I}}\Big],$$ $I,J \in \Omega$. In (\[2.5\]) ${{\bigtriangleup}}[g]$ and ${{\bigtriangledown}}[g]$ are the Laplace–Beltrami and covariant derivative operators respectively corresponding to $g$.
Here we consider the manifold $M={\bf R}\times M_{0}\times\ldots\times M_{n}$, with the metric \[2.8\] $$g=w\,{\rm e}^{2\gamma(u)}du\otimes du+\sum_{i=0}^{n}{\rm e}^{2\phi^i(u)}g^i,$$ where $w=\pm1$, $u$ is a time variable and $g^i=g_{m_{i}n_{i}}(y_i)dy_i^{m_{i}} \otimes dy_i^{n_{i}}$ is a metric on $M_{i}$ satisfying the equation $R_{m_{i}n_{i}}[g^i]=\lambda_{i}g^i_{m_{i}n_{i}}$, $m_{i},n_{i}=1,\ldots,d_{i}$; $\lambda_{i}={\rm const}$, $i=0,\ldots,n$. The functions $\gamma,\phi^{i}:{\bf R_\bullet}\rightarrow{\bf R}$ (${\bf R_\bullet}$ is an open subset of ${\bf R}$) are smooth.
We require each manifold $M_i$ to be oriented and connected, $i=0,\ldots,n$. Then the volume $d_i$-form \[2.9\] $$\tau_i=\sqrt{|g^i(y_i)|}\;dy_i^{1}\wedge\ldots\wedge dy_i^{d_i}$$ and the signature parameter ${\varepsilon}(i)={\rm sign}(\det(g^i_{m_{i}n_{i}}))=\pm1$ are correctly defined for all $i=0,\ldots,n$.
Let $\Omega$ from (\[2.1\]) be the set of all non-empty subsets of $\{0,\ldots,n\}$. The number of elements in $\Omega$ is $|\Omega|=2^{n+1}-1$. For any $I=\{i_1,\ldots,i_k\}\in\Omega$, $i_1<\ldots<i_k$, we put in (\[2.2\]) \[2.10\] $$A^I=\Phi^I\,\tau_{i_1}\wedge\ldots\wedge\tau_{i_k},$$ where the functions $\Phi^I:{\bf R_\bullet}\rightarrow{\bf R}$ are smooth, and the $\tau_{i}$ are defined in (\[2.9\]). In components relation (\[2.10\]) reads \[2.11\] $$A^{I}_{P_1\ldots P_{d(I)}}(u,y)=\Phi^{I}(u)\sqrt{|g^{i_1}(y_{i_1})|}\cdots\sqrt{|g^{i_k}(y_{i_k})|}\,{\varepsilon}_{P_1\ldots P_{d(I)}},$$ where $d(I)\equiv d_{i_1}+\ldots+d_{i_k}=\sum_{i\in I}d_i$ is the dimension of the oriented manifold $M_{I}=M_{i_1}\times\ldots\times M_{i_k}$, and the indices $P_1,\ldots,P_{d(I)}$ correspond to $M_I$. It follows from (\[2.10\]) that \[2.12\] $$F^I=dA^I=d\Phi^I\wedge\tau_{i_1}\wedge\ldots\wedge\tau_{i_k},$$ or, in components, \[2.13\] $$F^{I}_{uP_1\ldots P_{d(I)}}=-F^{I}_{P_1u\ldots P_{d(I)}}=\ldots=\dot\Phi^{I}\sqrt{|g^{i_1}(y_{i_1})|}\cdots\sqrt{|g^{i_k}(y_{i_k})|}\,{\varepsilon}_{P_1\ldots P_{d(I)}},$$ and $n_I=d(I)+1$, $I\in\Omega$.
Thus dimensions of forms $F^I$ in the considered model are fixed by the subsequent decomposition of the manifold.
$\sigma$-model representation
=============================
For the dilatonic scalar fields we put $\varphi^I=\varphi^I(u)$. Let \[3.1\] $$f={\gamma_0}-\gamma,\qquad {\gamma_0}\equiv\sum_{i=0}^{n}d_i\phi^i.$$
It is not difficult to verify that the field equations (\[2.4\])–(\[2.5\]) for the field configurations from (\[2.8\]), (\[2.10\]) may be obtained as the equations of motion corresponding to the action \[3.2\] $$S_\sigma=\mu\int du\,{\rm e}^{f}\Big\{-wG_{ij}\dot\phi^i\dot\phi^j-w\delta_{IJ}\dot\varphi^I\dot\varphi^J -w\sum_{I\in\Omega}{\varepsilon}(I)\exp\Big(2\vec\lambda_I\vec\varphi- 2\sum_{i\in I}d_i\phi^i\Big)(\dot\Phi^I)^2-2V{\rm e}^{-2f}\Big\},$$ where $\vec\varphi=(\varphi^I)$, $\vec\lambda_I=(\lambda_{JI})$, $\dot\varphi\equiv d\varphi(u)/du$; $G_{ij}=d_i\delta_{ij}-d_id_j$ are the components of the “pure cosmological” minisuperspace metric and \[3.3\] $$V={V}(\phi)=\Lambda\,{\rm e}^{2{\gamma_0}(\phi)}-\frac12\sum_{i=0}^{n}\lambda_id_i\,{\rm e}^{-2\phi^i+2{\gamma_0}(\phi)}$$ is the potential. In (\[3.2\]) ${\varepsilon}(I)\equiv{\varepsilon}(i_1)\times
\ldots\times{\varepsilon}(i_k)=\pm1$ for $I= \{i_1,\ldots,i_k \} \in\Omega$.
For finite internal space volumes $V_i$ (e.g. compact $M_i$) the action (\[3.2\]) coincides with the action (\[2.1\]) if $\kappa^{2}=\kappa^{2}_0\prod_{i=0}^{n}V_i$.
The representation (\[3.2\]) follows from the more general $\sigma$-model action of [@IM4], which may be written in the following form \[3.4\] $$S_\sigma=\frac{\mu}2\int du\Big\{(-w){\cal N}\,{\cal G}_{\hat A\hat B}(\sigma)\dot\sigma^{\hat A}\dot\sigma^{\hat B}-2{\cal N}^{-1}V(\sigma)\Big\},$$ where $(\sigma^{\hat A})=(\phi^i,\varphi^I,\Phi^{I'})\in{\bf R}^{n+1+2m}$, $m=|\Omega|$, $\mu=1/(2\kappa_0^2)$; ${\cal N}=\exp(\gamma_0-\gamma)>0$ is the lapse function and \[3.5\] $$({\cal G}_{\hat A\hat B})=\left(\begin{array}{ccc}
G_{ij}&0&0\\
0&\delta_{IJ}&0\\
0&0&{\varepsilon}(I')\exp\big(2\vec\lambda_{I'}\vec\varphi-2\sum_{i\in I'}d_i\phi^i\big)\delta_{I'J'}
\end{array}\right)$$ is the matrix of the minisupermetric of the model (target space metric), $i,j=0,\dots,n$; $I,J,I',J'\in\Omega$.
Let us fix the gauge in (\[3.1\]): $f = f(\sigma)$, where $f(\sigma)$ is smooth. We call this the $f$-gauge. From (\[3.4\]) we get the Lagrange system with the Lagrangian and the energy constraint \[3.6\] $$L=\frac{\mu}2\,{\rm e}^{f}{\cal G}_{\hat A\hat B}(\sigma)\dot\sigma^{\hat A}\dot\sigma^{\hat B}+w\,{\rm e}^{-f}V(\sigma),$$ \[3.7\] $$E=\frac{\mu}2\,{\rm e}^{f}{\cal G}_{\hat A\hat B}(\sigma)\dot\sigma^{\hat A}\dot\sigma^{\hat B}-w\,{\rm e}^{-f}V(\sigma)=0.$$ We note that the minisupermetric ${\cal G}=
{\cal G}_{\hat A\hat B}d\sigma^{\hat A}\otimes d\sigma^{\hat B}$ is not flat. Here the problem of integrability of the Lagrange equations for the Lagrangian (\[3.6\]) arises.
The minisuperspace metric (\[3.5\]) may also be written in the form \[3.8\] $${\cal G}=\bar G+\sum_{I\in\Omega}{\varepsilon}(I)\,{\rm e}^{-2U^I(x)}d\Phi^I\otimes d\Phi^I,\qquad U^I(x)=\sum_{i\in I}d_i\phi^i-\vec\lambda_I\vec\varphi,$$ where $x=(x^A)=(\phi^i,\varphi^I)$, $\bar G=\bar G_{AB}dx^A\otimes dx^B=
G_{ij}d\phi^i\otimes d\phi^j+\delta_{IJ}d\varphi^I\otimes d\varphi^J$, \[3.9\] $$(\bar G_{AB})=\left(\begin{array}{cc}
G_{ij}&0\\
0&\delta_{IJ}
\end{array}\right),$$ $i,j\in\{0,\dots,n\}$, $I,J\in\Omega$, and the potential $(-w)V$ may be presented in the following form \[3.10\] $$(-w)V=\sum_{j=0}^n\frac{w}2\lambda_jd_j\,{\rm e}^{2U^j(x)}+(-w)\Lambda\,{\rm e}^{2U^\Lambda(x)},$$ where \[3.11\] $$U^\Lambda(x)=U_A^\Lambda x^A=\sum_{j=0}^nd_j\phi^j,\qquad U^i(x)=-\phi^i+\sum_{j=0}^nd_j\phi^j,$$ or in components \[3.12\] $$(U_A^i)=(-\delta_j^i+d_j,0),\qquad (U_A^\Lambda)=(d_j,0),\qquad (U_A^I)=\Big(\sum_{i\in I}\delta_j^id_i,\,-\lambda_{JI}\Big),$$ $i,j\in\{0,\dots,n\}$, $I, J\in\Omega$.
Let $(x,y)\equiv\bar G_{AB}x^Ay^B$ define a quadratic form on ${\cal V}= {\bf R}^{n+1+m}$, $m=2^{n+1}-1$. The dual form defined on the dual space ${\cal V}^*$ is \[3.13\] $$(U,U')_*=\bar G^{AB}U_AU'_B,$$ where $U,U'\in{\cal V}^*$, $U(x)=U_Ax^A$, $U'(x)=U'_Ax^A$ and $(\bar G^{AB})$ is the matrix inverse to the matrix $(\bar G_{AB})$. Here, as in [@IMZ], \[3.14\] $$G^{ij}=\frac{\delta^{ij}}{d_i}+\frac1{2-D},$$ $i,j=0,\dots,n$.
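The formula $G^{ij}=\delta^{ij}/d_i+1/(2-D)$ is easy to check numerically; the sketch below (with hypothetical sample dimensions $d_i$) verifies that it inverts $G_{ij}=d_i\delta_{ij}-d_id_j$:

```python
import numpy as np

# sample factor-space dimensions d_0, ..., d_n (values are ours)
d = np.array([2.0, 3.0, 6.0])
D = 1 + d.sum()                  # total dimension D = 1 + sum_i d_i

# minisuperspace metric G_ij = d_i delta_ij - d_i d_j
G = np.diag(d) - np.outer(d, d)
# claimed dual metric G^ij = delta^ij / d_i + 1/(2 - D)
G_inv = np.diag(1.0 / d) + 1.0 / (2.0 - D)

print(np.allclose(G @ G_inv, np.eye(len(d))))   # True
```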
The integrability of the Lagrange system crucially depends upon the scalar products (\[3.13\]) of the vectors $U^i$, $U^\Lambda$, $U^I$ from (\[3.8\]), (\[3.11\]). Here we present these scalar products \[3.15\] $$(U^i,U^j)_*=\frac{\delta^{ij}}{d_i}-1,\qquad (U^i,U^\Lambda)_*=-1,\qquad (U^\Lambda,U^\Lambda)_*=-\frac{D-1}{D-2},$$ \[3.16\] $$(U^I,U^i)_*=-\delta_{iI},\qquad (U^I,U^\Lambda)_*=\frac{d(I)}{2-D},$$ \[3.17\] $$(U^I,U^J)_*=q(I,J)+\vec\lambda_I\vec\lambda_J,\qquad q(I,J)\equiv d(I\cap J)+\frac{d(I)d(J)}{2-D},$$ $I,J\in\Omega$, $i,j=0,\dots,n$, where $\delta_{iI}=1$ for $i\in I$ and $\delta_{iI}=0$ otherwise.

The relations (\[3.15\]) were calculated in [@GIM]; the relations (\[3.17\]) were obtained in [@IM] ($U_A^I=-L_{AI}$ in the notations of [@IM]).
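The "overlapping index" $q(I,J)$ can likewise be checked numerically; in the sketch below (sample dimensions and index sets are ours) the dilatonic couplings are set to zero, so that $(U^I,U^J)_*$ reduces to $q(I,J)$:

```python
import numpy as np

# sample dimensions d_0, ..., d_3 (hypothetical values)
d = np.array([2.0, 3.0, 4.0, 5.0])
n1 = len(d)                                   # n + 1 factor spaces
D = 1 + d.sum()

G_inv = np.diag(1.0 / d) + 1.0 / (2.0 - D)    # dual metric G^{ij}

def U_brane(I):
    """Covector U^I_A restricted to the phi-sector (couplings lambda set to 0)."""
    u = np.zeros(n1)
    u[list(I)] = d[list(I)]
    return u

I, J = {0, 1}, {1, 2, 3}
lhs = U_brane(I) @ G_inv @ U_brane(J)
dI, dJ = d[list(I)].sum(), d[list(J)].sum()
dIJ = d[list(I & J)].sum()
rhs = dIJ + dI * dJ / (2.0 - D)               # q(I, J)
print(np.isclose(lhs, rhs))                   # True
```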
Classical exact solutions
=========================
Here we will integrate the Lagrange equations corresponding to the Lagrangian (\[3.6\]) with the energy-constraint (\[3.7\]). We put $f=0$, i.e. the harmonic time gauge is considered.
The problem of integrability may be simplified if we integrate the Maxwell equations \[5.1\] $$\frac{d}{du}\left(\exp\Big(2\vec\lambda_I\vec\varphi- 2\sum_{i\in I}d_i\phi^i\Big)\dot\Phi^I\right)=0,\qquad \dot\Phi^I=Q^I\exp\Big(-2\vec\lambda_I\vec\varphi+ 2\sum_{i\in I}d_i\phi^i\Big),$$ where the $Q^I$ are constants, $I\in\Omega$.
Let $Q^I\ne0\Leftrightarrow I\in\Omega_*$, where $\Omega_*\subset\Omega$ is some non-empty subset of $\Omega$. For fixed $Q=(Q^I,I\in\Omega_*)$ the Lagrange equations corresponding to $\phi^i$ and $\varphi^I$, when equations (\[5.1\]) are substituted, are equivalent to the Lagrange equations for the Lagrangian \[5.4\] $$L_Q=\frac12\,\bar G_{AB}\dot x^A\dot x^B-V_Q,$$ where $x=(x^A)=(\phi^i,\varphi^I)$, $i=0, \ldots, n $, $I \in \Omega$, and \[5.6\] $$V_Q=(-w)V+\sum_{I\in\Omega_*}\frac12{\varepsilon}(I)(Q^I)^2\,{\rm e}^{2U^I(x)}$$ (for $(-w)V$ see (\[3.10\])). Thus, we are led to the pseudo-Euclidean Toda-like system (see [@IMZ; @IM3]) with the zero-energy constraint \[5.4a\] $$E_Q=\frac12\,\bar G_{AB}\dot x^A\dot x^B+V_Q=0.$$
The case $\Lambda=\lambda_i=0$, $i=1,\dots,n$.
----------------------------------------------
Here we put $\Lambda=0$, $\lambda_i=0$ for $i=1,\dots,n$, and $\lambda_0\ne0$. In this case the potential (\[5.6\]) $$\label{5.7}
V_Q=\frac{w}{2}\lambda_0d_0\exp\bigl(2U^0(x)\bigr)+
\sum_{I\in\Omega_*}\frac12{\varepsilon}(I)(Q^I)^2\exp\bigl(2U^I(x)\bigr)$$ is governed by the time-like vector $U^0$ $(d_0>1)$ and the $m_*=|\Omega_*|$ space-like vectors $U^I$: $$\label{5.8}
(U^0,U^0)_*=\frac{1}{d_0}-1<0,\qquad
(U^I,U^I)_*=d(I)\left(1+\frac{d(I)}{2-D}\right)+\lambda_I^2>0.$$
We also put $0\notin I$ for all $I\in\Omega_*$. This condition means that no $p$-brane contains the manifold $M_0$. It follows from (\[3.16\]) that $$\label{5.9}
(U^I,U^0)_*=0$$ for all $I\in\Omega_*$. In this case the Lagrangian (\[5.4\]) with the potential (\[5.7\]) may be diagonalized by a linear coordinate transformation $z^a=S_i^ax^i$, $a=0,\dots,n$, satisfying $\eta_{ab}S_i^aS_j^b=G_{ij}$, where $\eta_{ab}={\mathop{\rm diag}\nolimits}(-1,+1,\dots,+1)$. There exists a diagonalization such that $U_i^0x^i=q_0z^0$, where $$\label{5.10}
q_0=\left(\frac{d_0-1}{d_0}\right)^{1/2}$$ is a parameter, $q_0<1$.
In $z$-variables the Lagrangian (\[5.4\]) reads $L_Q=L_0+L_Q^E$, where $$\label{5.11}
L_0=-\frac12(\dot z^0)^2-A_0\exp(2q_0z^0),$$ $$\label{5.12}
L_Q^E=\frac12\bigl(\dot{\vec z}\,^2+\delta_{IJ}\dot\varphi^I\dot\varphi^J\bigr)-\sum_{I\in\Omega_*}A_I\exp\bigl(2\vec q_I\vec z-
2\lambda_{JI}\varphi^J\bigr).$$ In (\[5.11\]), (\[5.12\]) $\vec z=(z^1,\dots,z^{n})$, $\vec q_I=(q_{I,1},\dots,q_{I,n})$, $A_0\equiv(w/2)\lambda_0d_0$, $A_I=(1/2){\varepsilon}(I)(Q^I)^2$ and $\vec q_I\cdot\vec q_J=q(I,J)$, $I,J\in\Omega_*$.
Thus the Lagrangian (\[5.4\]) is split into the sum of two independent parts (\[5.11\]) and (\[5.12\]). The latter may be written as $$\label{5.13}
L_Q^E=\frac12\bigl(\dot{\vec Z}\bigr)^2-\sum_{I\in\Omega_*}A_I\exp\bigl(2\vec B_I\vec Z\bigr),$$ where $\vec Z=(z^1,\dots,z^{n},\varphi^I)$, $\vec B_I=(\vec q_I,-\lambda_{JI})$. Thus the equations of motion for the considered cosmological model are reduced to the equations of motion for the Lagrange systems with the Lagrangians (\[5.11\]) and (\[5.12\]) and the energy constraint $E=E_0+E_Q^E$ = 0, where $$\label{5.14}
E_0=-\frac12(\dot z^0)^2+A_0\exp(2q_0z^0),\qquad
E_Q^E=\frac12\bigl(\dot{\vec Z}\bigr)^2+\sum_{I\in\Omega_*}A_I\exp\bigl(2\vec B_I\vec Z\bigr).$$
The vectors $\vec B_I$ in (\[5.13\]) satisfy the relations $$\label{5.15}
\vec B_I\vec B_J=q(I,J)+\lambda_I\lambda_J,$$ $I,J\in\Omega_*$, where the $p$-brane “overlapping index” $q(I,J)$ is defined in (\[3.17\]).
The case of orthogonal $\vec B_I$
---------------------------------
The simplest situation arises when the vectors $\vec B_I$ are orthogonal, i.e. $$\label{5.16}
\vec B_I\vec B_J=(U^I,U^J)_*=d(I\cap J)+\frac{d(I)d(J)}{2-D}+\lambda_I\lambda_J=0,$$ for all $I\ne J$, $I,J\in\Omega_*$. In this case the Lagrangian (\[5.13\]) may be split into the sum of $|\Omega_*|$ Lagrangians of the Liouville type and $n$ “free” Lagrangians.
Using relations from [@GIM] we readily obtain exact solutions for the Euler-Lagrange equations corresponding to the Lagrangian (\[5.4\]) with the potential (\[5.7\]) when the orthogonality conditions (\[5.9\]) and (\[5.16\]) are satisfied.
The solutions for $(x^A)=(\phi^i,\varphi^I)$ read $$\label{5.17}
x^A(u)=-\frac{U^{0A}}{(U^0,U^0)_*}\ln|f_0(u-u_0)|-
\sum_{I\in\Omega_*}\frac{U^{IA}}{(U^I,U^I)_*}\ln|f_I(u-u_I)|+
\alpha^A u+\beta^A,$$ where $u_0,u_I$ are constants, $U^{sA}\equiv\bar G^{AB}U_B^s$ are the contravariant components of $U^s$, $s\in\{0\}\sqcup\Omega_*$, and $\bar G^{AB}$ is the matrix inverse to the matrix (\[3.9\]). The functions $f_0$, $f_I$ in (\[5.17\]) are the following $$\label{5.18}
f_0(\tau)=\left\lbrace
\begin{array}{ll}
\left|\frac{\lambda_0(d_0-1)}{C_0}\right|^{1/2}\sinh(\sqrt{C_0}\,\tau),& C_0>0,\ \lambda_0w>0;\\[1mm]
\left|\frac{\lambda_0(d_0-1)}{C_0}\right|^{1/2}\sin(\sqrt{|C_0|}\,\tau),& C_0<0,\ \lambda_0w>0;\\[1mm]
\left|\frac{\lambda_0(d_0-1)}{C_0}\right|^{1/2}\cosh(\sqrt{C_0}\,\tau),& C_0>0,\ \lambda_0w<0;\\[1mm]
|\lambda_0(d_0-1)|^{1/2}\,\tau,& C_0=0,\ \lambda_0w>0;
\end{array}\right.$$ and $$\label{5.19}
f_I(\tau)=\left\lbrace
\begin{array}{ll}
\frac{|Q^I|\nu_I}{\sqrt{C_I}}\sinh(\sqrt{C_I}\,\tau),& C_I>0,\ {\varepsilon}(I)<0;\\[1mm]
\frac{|Q^I|\nu_I}{\sqrt{|C_I|}}\sin(\sqrt{|C_I|}\,\tau),& C_I<0,\ {\varepsilon}(I)<0;\\[1mm]
\frac{|Q^I|\nu_I}{\sqrt{C_I}}\cosh(\sqrt{C_I}\,\tau),& C_I>0,\ {\varepsilon}(I)>0;\\[1mm]
|Q^I|\nu_I\,\tau,& C_I=0,\ {\varepsilon}(I)<0,
\end{array}\right.$$ where $C_0$, $C_I$ are constants, and $$\label{5.20}
\nu_I^{-1}=\left[d(I)\left(1+\frac{d(I)}{2-D}\right)+\lambda_I^2\right]^{1/2}>0,$$ $I\in\Omega_*$.
The vectors $\alpha=(\alpha^A)$ and $\beta=(\beta^A)$ in (\[5.17\]) satisfy the linear constraint relations $$\label{5.21}
U^0(\alpha)=-\alpha^0+\sum_{j=0}^nd_j\alpha^j=0,\qquad
U^0(\beta)=-\beta^0+\sum_{j=0}^nd_j\beta^j=0,$$ $$\label{5.22}
U^I(\alpha)=\sum_{i\in I}d_i\alpha^i-\lambda_{JI}\alpha^J=0,\qquad
U^I(\beta)=\sum_{i\in I}d_i\beta^i-\lambda_{JI}\beta^J=0.$$
Calculation of the contravariant components $U^{sA}$ gives the following relations $$\label{5.23}
U^{0i}=-\frac{\delta^{i0}}{d_0},\qquad U^{0I}=0,\qquad
U^{Ii}=\sum_{k\in I}\delta^{ik}+\frac{d(I)}{2-D},\qquad U^{IJ}=-\lambda_{JI},$$ $i=0,\dots,n$; $J,I\in\Omega_*$. Substitution of (\[5.23\]) and (\[5.8\]) into the solution (\[5.17\]) leads us to the following relations for the logarithms of the scale factors $$\label{5.24}
\phi^i=\frac{\delta^{i0}}{1-d_0}\ln|f_0|+\sum_{I\in\Omega_*}
\alpha_I^i\ln|f_I|+\alpha^iu+\beta^i,\qquad
\varphi^J=\sum_{I\in\Omega_*}\lambda_{JI}\nu_I^2\ln|f_I|+
\alpha^Ju+\beta^J,$$ where $\alpha_I^i=-\left(\sum_{k\in I}\delta^{ik}+d(I)/(2-D)\right)\nu_I^2$, $i=0,\dots,n$; $I\in\Omega_*$.
For the harmonic gauge $\gamma=\gamma_0(\phi)$ we get from (\[5.24\]) $$\label{5.25}
\gamma_0=\sum_{i=0}^nd_i\phi^i=\frac{d_0}{1-d_0}\ln|f_0|+
\sum_{I\in\Omega_*}\frac{d(I)}{D-2}\nu_I^2\ln|f_I|+
\alpha^0u+\beta^0,$$ where $\alpha^0$ and $\beta^0$ are given by (\[5.21\]).
The zero-energy constraint $$\label{5.26}
E=E_0+\sum_{I\in\Omega_*}E_I+\frac12(\alpha,\alpha)=0,$$ where $\alpha=(\alpha^i,\alpha^I)$, $(\alpha,\alpha)=\bar G_{AB}\alpha^A\alpha^B=G_{ij}\alpha^i\alpha^j+\delta_{IJ}\alpha^I\alpha^J$ and $C_s=2E_s (U^s,U^s)_*$, $s=0,I$ (see [@GIM]), may be written in the following form $$\label{5.27}
C_0\frac{d_0}{d_0-1}=\sum_{I\in\Omega_*}\bigl[C_I\nu_I^2+
(\alpha^I)^2\bigr]+\sum_{i=1}^nd_i(\alpha^i)^2+
\frac{1}{d_0-1}\left(\sum_{i=1}^nd_i\alpha^i\right)^2.$$
The substitution of the relations (\[5.24\]) into (\[5.1\]) implies $\dot\Phi^I=Q^I/f_I^2$ and hence we get for the forms $$\label{5.28}
F^I=d\Phi^I\wedge\tau_I=\frac{Q^I}{f_I^2}\,du\wedge\tau_I,$$ where $\tau_I\equiv\tau_{i_1}\wedge\dots\wedge\tau_{i_k}$, $I=\{i_1,\dots,i_k\}\in\Omega_*$, $i_1<\dots<i_k$.
The relation for the metric may be readily obtained using the formulas (\[5.24\]), (\[5.25\]): $$\label{5.29}
g=\left(\prod_{I\in\Omega_*}\bigl[f_I^2(u-u_I)\bigr]^{d(I)\nu_I^2/(D-2)}\right)
\Bigl\{\bigl[f_0^2(u-u_0)\bigr]^{d_0/(1-d_0)}{\rm e}^{2\alpha^0u+2\beta^0}
\bigl(wdu\otimes du+f_0^2(u-u_0)\,g^0\bigr)
+\sum_{i\neq0}\Bigl(\prod_{I\in\Omega_*,\, I\ni i}
\bigl[f_I^2(u-u_I)\bigr]^{-\nu_I^2}\Bigr){\rm e}^{2\alpha^iu+2\beta^i}g^i\Bigr\}.$$
Wheeler–De Witt equation
========================
Let us consider the Lagrangian system with the Lagrangian (\[3.6\]). Using the standard prescriptions of quantization (see, for example, [@IMZ]) we are led to the Wheeler-DeWitt equation $$\label{4.1}
\hat H^f\Psi^f\equiv\left(-\Delta[{\cal G}_1]+
a R[{\cal G}_1]+{\rm e}^{-f}(-w)V\right)\Psi^f=0,$$ where $$\label{4.2}
a=\frac{N-2}{8(N-1)},\qquad N=n+1+2|\Omega|.$$ Here $\Psi^f=\Psi^f(\sigma)$ is the so-called “wave function of the universe” corresponding to the $f$-gauge, and $\Delta[{\cal G}_1]$ and $R[{\cal G}_1]$ denote the Laplace-Beltrami operator and the scalar curvature corresponding to ${\cal G}_1$. For the scalar curvature we get $$\label{4.3}
R[{\cal G}]=-2\sum_{I\in\Omega}(U^I,U^I)_*-\sum_{I\ne I'}(U^I,U^{I'})_*.$$ For the Laplace operator we obtain $$\label{4.4}
\Delta[{\cal G}]=\frac{1}{\sqrt{|{\cal G}|}}\partial_{A}
\left(\sqrt{|{\cal G}|}\,{\cal G}^{AB}\partial_{B}\right)=
{\rm e}^{U(x)}\partial_A\left(\bar G^{AB}{\rm e}^{-U(x)}\partial_B\right)+\sum_{I\in\Omega}{\varepsilon}(I)
{\rm e}^{2U^I(x)}\left(\frac{\partial}{\partial\Phi^I}\right)^2,$$ where $$\label{4.5}
U(x)=\sum_{I\in\Omega}U^I(x).$$
[**Harmonic-time gauge.**]{} The WDW equation (\[4.1\]) for $f=0$, $$\label{4.6}
\hat H\Psi\equiv\left(-\Delta[{\cal G}]+
a R[{\cal G}]+(-w) V\right)\Psi=0,$$ may be rewritten, using the relations (\[4.3\]), (\[4.4\]), as follows $$\label{4.7}
2\hat H\Psi=\Bigl\{-G^{ij}\frac{\partial}{\partial\phi^i}\frac{\partial}{\partial\phi^j}
-\delta^{IJ}\frac{\partial}{\partial\varphi^I}\frac{\partial}{\partial\varphi^J}
-\sum_{I\in\Omega}{\varepsilon}(I){\rm e}^{2U^I(x)}\left(\frac{\partial}{\partial\Phi^I}\right)^2
+\sum_{I\in\Omega}U^{IA}\frac{\partial}{\partial x^A}+
2aR[{\cal G}]+2(-w) V\Bigr\}\Psi=0.$$ Here $\hat H\equiv\hat H^{f=0}$ and $\Psi=\Psi^{f=0}$.
Quantum solutions
=================
Let us now consider the solutions of the Wheeler–DeWitt equation (\[4.6\]) in the harmonic-time gauge. We assume that the orthogonality conditions (\[5.9\]) and (\[5.16\]) are satisfied. In the $z$-variables $z = (z^A)= (S^A_B x^B) = (z^0, \vec{z}, z^I)$ satisfying $q_0 z^0 = U^0(x)$, $q_I z^I = U^I(x)$, $q_I=\nu_I^{-1}$, we get $$\label{6.1}
\Delta[{\cal G}]=-\left(\frac{\partial}{\partial z^0}\right)^2+
\left(\frac{\partial}{\partial\vec z}\right)^2+
\sum_{I\in\Omega}{\rm e}^{q_Iz^I}\frac{\partial}{\partial z^I}
\left({\rm e}^{-q_Iz^I}\frac{\partial}{\partial z^I}\right)+
\sum_{I\in\Omega}{\varepsilon}(I){\rm e}^{2q_Iz^I}
\left(\frac{\partial}{\partial\Phi^I}\right)^2.$$ The relation (\[4.3\]) in the orthogonal case reads $$\label{6.2}
R[{\cal G}]=-2\sum_{I\in\Omega}(U^I,U^I)_*=-2\sum_{I\in\Omega}q_I^2.$$
We seek solutions of the WDW equation (\[4.6\]) by the method of separation of variables, i.e. we put $$\label{6.3}
\Psi_*(z)=\Psi_0(z^0)\left(\prod_{I\in\Omega}\Psi_I(z^I)\right)
{\rm e}^{{\rm i}P_I\Phi^I}{\rm e}^{{\rm i}\vec p\vec z}.$$ It follows from (\[6.1\]) that $\Psi_*(z)$ satisfies the WDW equation (\[4.6\]) if $$\label{6.4}
2\hat H_0\Psi_0\equiv\left\{\left(\frac{\partial}{\partial z^0}\right)^2+
w\lambda_0d_0{\rm e}^{2q_0z^0}\right\}\Psi_0=2{\cal E}_0\Psi_0,$$ $$\label{6.5}
2\hat H_I\Psi_I\equiv\left\{-{\rm e}^{q_Iz^I}\frac{\partial}{\partial z^I}
\left({\rm e}^{-q_Iz^I}\frac{\partial}{\partial z^I}\right)+
{\varepsilon}(I)P_I^2{\rm e}^{2q_Iz^I}\right\}\Psi_I=2{\cal E}_I\Psi_I,$$ $I\in\Omega$, and $$\label{6.6}
2{\cal E}_0+(\vec p)^2+2\sum_{I\in\Omega}{\cal E}_I+2aR[{\cal G}]=0,$$ with $a$ and $R[{\cal G}]$ from (\[4.2\]) and (\[6.2\]) respectively.
Solving (\[6.4\]), (\[6.5\]) we obtain $$\label{6.7}
\Psi_0(z^0)=B_{\omega_0({\cal E}_0)}^0
\left(|w\lambda_0d_0|^{1/2}\,\nu_0\,{\rm e}^{q_0z^0}\right),\qquad
\Psi_I(z^I)={\rm e}^{q_Iz^I/2}B_{\omega_I({\cal E}_I)}^I
\left(|P_I|\,\nu_I\,{\rm e}^{q_Iz^I}\right),$$ where $\nu_0\equiv q_0^{-1}$, $\omega_0({\cal E}_0)=\sqrt{2{\cal E}_0}/q_0$, $\omega_I({\cal E}_I)=\sqrt{1/4-{\varepsilon}(I)2{\cal E}_I\nu_I^2}$, $I\in\Omega$ $(\nu_I=q_I^{-1})$ and $B_\omega^0,B_\omega^I=I_\omega,K_\omega$ are modified Bessel functions.
The general solution of the WDW equation (\[4.6\]) is a superposition of the “separated” solutions (\[6.3\]): $$\label{6.8}
\Psi(z)=\sum_{B}\int dp\,dP\,d{\cal E}\,C(P,p,{\cal E},B)\,
\Psi_*(z|P,p,{\cal E},B),$$ where $p=(\vec p)$, $P=(P_I)$, ${\cal E}=({\cal E}_I)$, $B=(B^0,B^I)$, $B^0,B^I=I,K$, and $\Psi_*=\Psi_*(z|P,p,{\cal E},B)$ is given by the relations (\[6.3\]), (\[6.7\]) with ${\cal E}_0$ from (\[6.6\]).
Conclusion
==========
Thus we obtained exact classical and quantum solutions for multidimensional cosmology, describing the evolution of $(n+1)$ spaces $(M_0,g^0),\dots,(M_n,g^n)$, where $(M_0,g^0)$ is an Einstein space of non-zero curvature and the $(M_i,g^i)$ are “internal” Ricci-flat spaces, $i=1,\dots,n$, in the presence of several scalar fields and forms.
The classical solution is given by relations (\[5.24\]), (\[5.28\]), (\[5.29\]) with the functions $f_0$, $f_I$ defined in (\[5.18\])–(\[5.19\]) and the relations on the parameters of solutions $\alpha^s$, $\beta^s$, $C_s$ $(s=i,I)$, $\nu_I$, (\[5.21\])–(\[5.22\]), (\[5.20\]), (\[5.27\]) imposed. The quantum solutions are presented by relations (\[6.3\]), (\[6.6\])-(\[6.8\]).
These solutions describe a set of charged (by the forms) overlapping $p$-branes “living” on submanifolds not containing the manifold $M_0$. The solutions are valid if the dimensions of the $p$-branes and the dilatonic couplings satisfy the orthogonality restrictions (\[5.16\]).
The special case $w=+1$, $M_0 = S^{d_0}$ corresponds to spherically-symmetric configurations containing black hole solutions (see, for example [@CT; @AIV; @O]). It may be interesting to apply the relations from Sect. 6 to minisuperspace quantization of black hole configurations in string models, M-theory etc.
[99]{}
E. Cremmer, B.Julia, and J. Scherk, [*Phys. Lett.*]{} [**B76**]{}, 409 (1978).
A. Salam and E. Sezgin, eds., “Supergravities in Diverse Dimensions”, reprints in 2 vols., World Scientific (1989).
M.B. Green, J.H. Schwarz, and E. Witten, “Superstring Theory” in 2 vols. (Cambridge Univ. Press, 1987).
C. Hull and P. Townsend, “Unity of Superstring Dualities”, [*Nucl. Phys.*]{} [**B 438**]{}, 109 (1995), [*Preprint*]{} hep-th/9410167;\
P. Horava and E. Witten, [*Nucl. Phys.*]{} [**B 460**]{}, 506 (1996), [*Preprint*]{} hep-th/9510209; [*Preprint*]{} hep-th/9603142;
M.J. Duff, “M-theory (the Theory Formerly Known as Strings)”, [*Preprint*]{} CTP-TAMU-33/96, hep-th/9608117;
K.S. Stelle, “Lectures on Supergravity p-branes ”, hep-th/9701088.
H. Lü, C.N. Pope, and K.S.Stelle, “Weyl Group Invariance and p-brane Multiplets”, [*Nucl. Phys.*]{} [**B 476**]{}, 89 (1996).
H. Lü, C.N. Pope, and K.W. Xu, “Liouville and Toda solitons in M-theory”, [*Preprint*]{} hep-th/9604058.
N. Khvengia, Z. Khvengia, H. Lü, C.N. Pope, “Intersecting M-Branes and Bound States”, [*Preprint*]{} hep-th/9605082.
M.Cvetic and A.A.Tseytlin, [*Nucl. Phys.*]{} [**B 478**]{}, 181 (1996). [*Preprint*]{} hep-th/9606033.
I.Ya. Aref’eva, M.G. Ivanov and I.V. Volovich, “Non-extremal Intersecting p-branes in Various Dimensions”, [*Preprint*]{} SMI-06-97, hep-th/9702079.
N.Ohta, “Intersection rules for non-extreme p-branes”, [*Preprint*]{} OU-HET 258, hep-th/9702164.
V.D. Ivashchuk and V.N. Melnikov, [*Gravitation and Cosmology*]{} [**2**]{}, No 4 (8), 297 (1996); hep-th/9612054;\
V.D. Ivashchuk and V.N. Melnikov, to appear in [*Phys. Lett.*]{} [**B**]{}.
V.D. Ivashchuk, V.N. Melnikov, [*Phys. Lett.*]{} [**A135**]{}, 465 (1989).
V.D. Ivashchuk, V.N. Melnikov and A. I. Zhuk, [*Nuovo Cimento*]{} [**B104**]{}, 575 (1989).
G.W. Gibbons and S.W. Hawking, [*Phys. Rev.*]{} [**D 15**]{}, 2752 (1977).
V.D. Ivashchuk and V.N. Melnikov, [*Int. J. Mod. Phys. D*]{} [**3**]{}, No 4, 795 (1994);\
V.D. Ivashchuk and V.N. Melnikov, [*Gravitation and Cosmology*]{} [**1**]{}, No 3, 204 (1995);\
V.N. Melnikov, “Multidimensional Cosmology and Gravitation”, [*Preprint*]{} CBPF, Rio de Janeiro, 1995, 210p.
V.R. Gavrilov, V.D. Ivashchuk and V.N. Melnikov, [*J. Math. Phys*]{} [**36**]{}, 5829 (1995).
V.D. Ivashchuk and V.N. Melnikov, “Sigma-model for the Generalized Composite p-branes”, hep-th/9705036.
---
abstract: 'In 2003, Alladi, Andrews and Berkovich proved an identity for partitions where parts occur in eleven colors: four primary colors, six secondary colors, and one quaternary color. Their work answered a longstanding question of how to go beyond a classical theorem of Göllnitz, which uses three primary and three secondary colors. Their main tool was a deep and difficult four parameter $q$-series identity. In this paper we take a different approach. Instead of adding an eleventh quaternary color, we introduce forbidden patterns and give a bijective proof of a ten-colored partition identity lying beyond Göllnitz’ theorem. Using a second bijection, we show that our identity is equivalent to the identity of Alladi, Andrews, and Berkovich. From a combinatorial viewpoint, the use of forbidden patterns is more natural and leads to a simpler formulation. In fact, in Part II of this series we will show how our method can be used to go beyond Göllnitz’ theorem to any number of primary colors.'
author:
- Isaac KONAN
title: 'Beyond Göllnitz’ Theorem I: A Bijective Approach'
---
Introduction and Statements of Results
======================================
History
-------
A partition of a positive integer $n$ is a non-increasing sequence of positive integers whose sum is equal to $n$. For example, the partitions of $7$ are $$(7),(6,1),(5,2),(5,1,1),(4,3),(4,2,1),(4,1,1,1),(3,3,1),(3,2,2),(3,2,1,1),$$ $$(3,1,1,1,1),(2,2,2,1),(2,2,1,1,1),(2,1,1,1,1,1)\,\,\text{and}\,\,(1,1,1,1,1,1,1)\,\cdot$$ The study of partition identities has a long history, dating back to Euler’s proof that there are as many partitions of $n$ into distinct parts as partitions of $n$ into odd parts. The corresponding identity is $$(-q;q)_\infty = \frac{1}{(q;q^2)_\infty}\,,$$ where $$(x;q)_m = \prod_{k=0}^{m-1} (1-xq^k)\,\,,$$ for any $m\in \mathbb{N}\cup \{\infty\}$ and $x,q$ such that $|q|<1$.\
\
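Euler's identity can be checked by brute force for small $n$; the following Python sketch (the function names are ours) enumerates partitions directly:

```python
def partitions(n, mx=None):
    """Yield the partitions of n as non-increasing tuples."""
    if mx is None:
        mx = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, mx), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def euler_distinct(n):
    """Number of partitions of n into distinct parts."""
    return sum(1 for p in partitions(n) if len(set(p)) == len(p))

def euler_odd(n):
    """Number of partitions of n into odd parts."""
    return sum(1 for p in partitions(n) if all(x % 2 for x in p))

# The two counts agree for every n; for instance both equal 5 for n = 7.
```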
One of the most important identities in the theory of partitions is Schur’s theorem [@Sc26].
\[scr\] For any positive integer $n$, the number of partitions of $n$ into distinct parts congruent to $\pm 1 \mod 3$ is equal to the number of partitions of $n$ where parts differ by at least three and multiples of three differ by at least six.
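Schur's theorem can likewise be verified numerically for small $n$; in the sketch below (our function names) the second count implements the difference conditions literally:

```python
def partitions(n, mx=None):
    """Yield the partitions of n as non-increasing tuples."""
    if mx is None:
        mx = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, mx), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def schur_first(n):
    """Partitions of n into distinct parts congruent to +-1 mod 3."""
    return sum(1 for p in partitions(n)
               if len(set(p)) == len(p) and all(x % 3 != 0 for x in p))

def schur_second(n):
    """Partitions of n with gaps >= 3, and gaps >= 6 between multiples of 3."""
    def ok(p):
        return all(p[i] - p[i + 1] >= 3 and
                   (p[i] % 3 or p[i + 1] % 3 or p[i] - p[i + 1] >= 6)
                   for i in range(len(p) - 1))
    return sum(1 for p in partitions(n) if ok(p))
```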
There have been a number of proofs of Schur’s result over the years, including a $q$-difference equation proof of Andrews [@AN68] and a simple bijective proof of Bressoud [@BR80].\
\
Another important identity is Göllnitz’ theorem [@GO67].
\[goll\] For any positive integer $n$, the number of partitions of $n$ into distinct parts congruent to $2,4,5 \mod 6$ is equal to the number of partitions of $n$ into parts different from $1$ and $3$, and where parts differ by at least six with equality only if parts are congruent to $2,4,5 \mod 6$.
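Göllnitz' theorem admits the same kind of brute-force check for small $n$ (function names below are ours):

```python
def partitions(n, mx=None):
    """Yield the partitions of n as non-increasing tuples."""
    if mx is None:
        mx = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, mx), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def gollnitz_first(n):
    """Partitions of n into distinct parts congruent to 2, 4, 5 mod 6."""
    return sum(1 for p in partitions(n)
               if len(set(p)) == len(p) and all(x % 6 in (2, 4, 5) for x in p))

def gollnitz_second(n):
    """Partitions of n with parts != 1, 3 and gaps >= 6, with equality
    only when the larger part is congruent to 2, 4, 5 mod 6."""
    def ok(p):
        if 1 in p or 3 in p:
            return False
        return all(p[i] - p[i + 1] >= 6 and
                   (p[i] - p[i + 1] > 6 or p[i] % 6 in (2, 4, 5))
                   for i in range(len(p) - 1))
    return sum(1 for p in partitions(n) if ok(p))
```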
Like Schur’s theorem, Göllnitz’s identity can be proved using $q$-difference equations [@AN69] and an elegant Bressoud-style bijection [@PRS04; @ZJ15].\
\
Seminal work of Alladi, Andrews, and Gordon in the 90’s showed how the theorems of Schur and Göllnitz emerge from more general results on colored partitions [@AAG95].\
\
In the case of Schur’s theorem, we consider parts in three colors $\{a,b,ab\}$ and order them as follows: $$1_{ab}<1_a<1_b<2_{ab}<2_a<2_b<3_{ab}<\cdots\,\cdot$$ We then consider the partitions with colored parts different from $1_{ab}$ and satisfying the minimal difference conditions in the table $$\label{scrrr}
\begin{array}{|c|cc|c|}
\hline
_{\lambda_i}\setminus^{\lambda_{i+1}}&a&b&ab\\
\hline
a&1&2&1\\
b&1&1&1\\
\hline
ab&2&2&2\\
\hline
\end{array}\,\cdot$$ Here, the part ${\lambda}_i$ with color in the row and the part ${\lambda}_{i+1}$ with color in the column differ by at least the corresponding entry in the table. An example of such a partition is $(7_{ab},5_{b},4_{a},3_{ab},1_b)$. The Alladi-Gordon refinement of Schur’s partition theorem [@AG93] is stated as follows:
\[ag\] Let $u,v,n$ be non-negative integers. Denote by $A(u,v,n)$ the number of partitions of $n$ into $u$ distinct parts with color $a$ and $v$ distinct parts with color $b$, and denote by $B(u,v,n)$ the number of partitions of $n$ satisfying the conditions above, with $u$ parts with color $a$ or $ab$, and $v$ parts with color $b$ or $ab$. We then have $A(u,v,n)=B(u,v,n)$ and the identity $$\sum_{u,v,n\geq 0} B(u,v,n)a^ub^vq^n = \sum_{u,v,n\geq 0} A(u,v,n)a^ub^vq^n = (-aq;q)_\infty (-bq;q)_\infty\,\cdot$$
Note that the following transformation implies Schur’s theorem: $$\left\lbrace
\begin{array}{l r c l }
\text{dilation :} &q &\mapsto&q^3\\
\text{translations :} &a,b &\mapsto&q^{-2},q^{-1}\\
\end{array}
\right. \,\cdot$$ In fact, after these transformations, the minimal difference conditions given in (\[scrrr\]) become the minimal differences in Schur’s theorem.\
\
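The refinement in [**Theorem \[ag\]**]{} can be verified numerically for small $n$; the sketch below (all helper names are ours) enumerates the colored partitions obeying the difference table (\[scrrr\]) and compares them with pairs of distinct-part partitions:

```python
# Minimal differences from the table: key = (larger part's color, next color).
DIFF = {('a', 'a'): 1, ('a', 'b'): 2, ('a', 'ab'): 1,
        ('b', 'a'): 1, ('b', 'b'): 1, ('b', 'ab'): 1,
        ('ab', 'a'): 2, ('ab', 'b'): 2, ('ab', 'ab'): 2}

def colored_partitions(n, prev=None):
    """Yield tuples of (size, color) parts, largest first, avoiding 1_ab
    and satisfying the difference conditions of the table."""
    if n == 0:
        yield ()
        return
    for k in range(n, 0, -1):
        for col in ('a', 'b', 'ab'):
            if (k, col) == (1, 'ab'):
                continue
            if prev is not None and prev[0] - k < DIFF[(prev[1], col)]:
                continue
            for rest in colored_partitions(n - k, (k, col)):
                yield ((k, col),) + rest

def count_B(u, v, n):
    """Colored partitions of n with u parts colored a or ab and v colored b or ab."""
    return sum(1 for p in colored_partitions(n)
               if sum('a' in c for _, c in p) == u
               and sum('b' in c for _, c in p) == v)

def count_A(u, v, n):
    """Partitions of n into u distinct a-parts plus v distinct b-parts."""
    def distinct(m, parts, mx):
        if parts == 0:
            return 1 if m == 0 else 0
        return sum(distinct(m - k, parts - 1, k - 1)
                   for k in range(min(m, mx), 0, -1))
    return sum(distinct(m, u, m) * distinct(n - m, v, n - m)
               for m in range(n + 1))
```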
In the case of Göllnitz’ theorem, we consider parts that occur in six colors $\{a,b,c,ab,ac,bc\}$ with the order $$1_{ab}<1_{ac}<1_a<1_{bc}<1_b<1_c<2_{ab}<2_{ac}<2_a<2_{bc}<2_b<2_c<3_{ab}<\cdots\,,$$ and the partitions with colored parts different from $1_{ab},1_{ac},1_{bc}$ and satisfying the minimal difference conditions in $$\begin{array}{|c|ccc|ccc|}
\hline
_{\lambda_i}\setminus^{\lambda_{i+1}}&a&b&c&ab&ac&bc\\
\hline
a&1&2&2&1&1&2\\
b&1&1&2&1&1&1\\
c&1&1&1&1&1&1\\
\hline
ab&2&2&2&2&2&2\\
ac&2&2&2&1&2&2\\
bc&1&2&2&1&1&2\\
\hline
\end{array}\,\cdot$$ The Alladi-Andrews-Gordon refinement of Göllnitz’s partition theorem can be stated as follows:
\[aag\] Let $u,v,w,n$ be non-negative integers. Denote by $A(u,v,w,n)$ the number of partitions of $n$ into $u$ distinct parts with color $a$, $v$ distinct parts with color $b$ and $w$ distinct parts with color $c$, and denote by $B(u,v,w,n)$ the number of partitions of $n$ satisfying the conditions above, with $u$ parts with color $a,ab$ or $ac$, $v$ parts with color $b,ab$ or $bc$ and $w$ parts with color $c,ac$ or $bc$. We then have $A(u,v,w,n)=B(u,v,w,n)$ and the identity $$\sum_{u,v,w,n\geq 0} B(u,v,w,n)a^ub^vc^wq^n = \sum_{u,v,w,n\geq 0} A(u,v,w,n)a^ub^vc^wq^n = (-aq;q)_\infty (-bq;q)_\infty(-cq;q)_\infty\,\cdot$$
Note that the following transformation implies Göllnitz’ theorem: $$\left\lbrace
\begin{array}{l r c l }
\text{dilation :} &q &\mapsto&q^6\\
\text{translations :} &a,b,c &\mapsto&q^{-4},q^{-2},q^{-1}\\
\end{array}
\right. \,\cdot$$\
Observe that while Schur’s theorem is not a direct corollary of Göllnitz’ theorem, [**Theorem \[ag\]**]{} *is* implied by [**Theorem \[aag\]**]{} by setting $c= 0$. Therefore Göllnitz’ theorem may be viewed as a level higher than Schur’s theorem, since it requires three primary colors instead of two.\
\
Following the work of Alladi, Andrews, and Gordon, it was an open problem to find a partition identity beyond Göllnitz’ theorem, in the sense that it would arise from four primary colors. This was famously solved by Alladi, Andrews, and Berkovich [@AAB03]. To describe their result, we consider parts that occur in eleven colors $\{a,b,c,d,ab,ac,ad,bc,bd,cd,abcd\}$, ordered as follows: $$\label{quat}
1_{abcd}<1_{ab}<1_{ac}<1_{ad}<1_a<1_{bc}<1_{bd}<1_b<1_{cd}<1_c<1_d<2_{abcd}<\cdots\,\cdot$$ Let us consider the partitions with the length of the secondary parts greater than one and satisfying the minimal difference conditions in $$\label{tab2}
\begin{array}{|c|cccc|ccc|cc|c|}
\hline
_{\lambda_i}\setminus^{\lambda_{i+1}}&ab&ac&ad&a&bc&bd&b&cd&c&d\\
\hline
ab&2&2&2&2&2&2&2&2&2&2\\
ac&1&2&2&2&2&2&2&2&2&2\\
ad&1&1&2&2&2&2&2&2&2&2\\
a&1&1&1&1&2&2&2&2&2&2\\
\hline
bc&1&1&1&1&2&2&2&2&2&2\\
bd&1&1&1&1&1&2&2&2&2&2\\
b&1&1&1&1&1&1&1&2&2&2\\
\hline
cd&1&1&1&1&1&1&1&2&2&2\\
c&1&1&1&1&1&1&1&1&1&2\\
\hline
d&1&1&1&1&1&1&1&1&1&1\\
\hline
\end{array}\,,$$ and such that parts with color $abcd$ differ by at least $4$, and the smallest part with color $abcd$ is at least equal to $4+2\tau-\chi(1_a\text{ is a part})$, where $\tau$ is the number of primary and secondary parts in the partition. The theorem is then stated as follows.
\[th2\] Let $u,v,w,t,n$ be non-negative integers. Denote by $A(u,v,w,t,n)$ the number of partitions of $n$ into $u$ distinct parts with color $a$, $v$ distinct parts with color $b$, $w$ distinct parts with color $c$ and $t$ distinct parts with color $d$, and denote by $B(u,v,w,t,n)$ the number of partitions of $n$ satisfying the conditions above, with $u$ parts with color $a,ab,ac,ad$ or $abcd$, $v$ parts with color $b,ab,bc,bd$ or $abcd$, $w$ parts with color $c,ac,bc,cd$ or $abcd$ and $t$ parts with color $d,ad,bd,cd$ or $abcd$. We then have $A(u,v,w,t,n)=B(u,v,w,t,n)$ and the identity $$\sum_{u,v,w,t,n\geq 0} B(u,v,w,t,n)a^ub^vc^wd^tq^n =
(-aq;q)_\infty (-bq;q)_\infty(-cq;q)_\infty(-dq;q)_\infty\,\cdot$$
Note that the result of Alladi, Andrews, and Berkovich uses four primary colors, the full set of secondary colors, along with one quaternary color $abcd$. When $d=0$, we recover [**Theorem \[aag\]**]{}. Their main tool was a difficult $q$-series identity: $$\begin{aligned}
\label{identity}
\sum_{i,j,k,l-constraints}&
\frac{q^{T_{\tau}+T_{AB}+T_{AC}+T_{AD}+T_{BC}+T_{BD}+T_{CD}-BC-BD-CD+4T_{Q-1}+3Q+2Q\tau}}{
(q)_A(q)_B(q)_C(q)_D(q)_{AB}(q)_{AC}(q)_{AD}(q)_{BC}(q)_{BD}(q)_{CD}(q)_Q}\nonumber\\
\cdot\{&(1-q^A) + q^{A+BC+BD+Q}(1-q^B) + q^{A+BC+BD+Q+B+CD}\}\nonumber\\
=& \sum_{i,j,k,l-constraints} \frac{q^{T_i+T_j+T_k+T_l}}{(q)_i(q)_j (q)_k(q)_l}\end{aligned}$$ where $A,B,C,D,AB,AC,AD,BC,BD,CD,Q$ are variables which count the number of parts with respective colors $a,b,c,d,ab,ac,ad,bc,bd,cd,abcd$, $$\left\lbrace
\begin{array}{l}
i=A+AB+AC+AD+Q\\
j=B+AB+BC+BD+Q\\
k=C+AC+BC+CD+Q\\
l=D+AD+BD+CD+Q\\
\tau = A+B+C+D+AB+AC+AD+BC+BD+CD+Q
\end{array}\right.\,,$$ $T_n = \frac{n(n+1)}{2}$ is the $n^{th}$ triangular number and $(q)_n = (q;q)_n$. While this identity is difficult to prove, it is relatively straightforward to show that it is equivalent to the statement in [**Theorem \[th2\]**]{}.\
\
In this paper we give a bijective proof of [**Theorem \[th2\]**]{} (and therefore a bijective proof of the identity (\[identity\])). Our proof is divided into two steps. First we prove [**Theorem \[th1\]**]{} below, which arises more naturally from our methods than [**Theorem \[th2\]**]{}. Instead of adding a quaternary color, we lower certain minimum differences and add some forbidden patterns. Then, we show how [**Theorem \[th1\]**]{} is equivalent to [**Theorem \[th2\]**]{}.
Statement of Results
--------------------
Suppose that the parts occur in only the primary colors $a,b,c,d$ and the secondary colors $ab,ac,ad,bc,bd,cd$, and are ordered as in (\[quat\]), omitting quaternary parts: $$\label{cons}
1_{ab}< 1_{ac}< 1_{ad} < 1_{a}< 1_{bc} < 1_{bd}< 1_b< 1_{cd}< 1_c<1_d< 2_{ab}<\cdots\,\cdot$$ Let us now consider the partitions with the length of the secondary parts greater than one and satisfying the minimal difference conditions in $$\label{tab1}
\begin{array}{|c|cccc|ccc|cc|c|}
\hline
_{\lambda_i}\setminus^{\lambda_{i+1}}&ab&ac&ad&a&bc&bd&b&cd&c&d\\
\hline
ab&2&2&2&2&2&2&2&2&2&2\\
ac&1&2&2&2&2&2&2&2&2&2\\
ad&1&1&2&2&\underline{1}&2&2&2&2&2\\
a&1&1&1&1&2&2&2&2&2&2\\
\hline
bc&1&1&1&1&2&2&2&2&2&2\\
bd&1&1&1&1&1&2&2&2&2&2\\
b&1&1&1&1&1&1&1&2&2&2\\
\hline
cd&\underline{0}&1&1&1&1&1&1&2&2&2\\
c&1&1&1&1&1&1&1&1&1&2\\
\hline
d&1&1&1&1&1&1&1&1&1&1\\
\hline
\end{array}\,,$$ and which avoid the forbidden patterns $$((k+2)_{cd},(k+2)_{ab},k_c),((k+2)_{cd},(k+2)_{ab},k_d), ((k+2)_{ad},(k+1)_{bc},k_{a})\,,$$ except the pattern $(3_{ad},2_{bc},1_a)$ which is allowed. An example of such a partition is $$(11_{ad},10_{bc},8_a,7_{cd},7_{ab},4_c,3_{ad},2_{bc},1_a)\,\cdot$$ We can now state the main theorem of this paper.
\[th1\] Let $u,v,w,t,n$ be non-negative integers. Denote by $A(u,v,w,t,n)$ the number of partitions of $n$ into $u$ distinct parts with color $a$, $v$ distinct parts with color $b$, $w$ distinct parts with color $c$ and $t$ distinct parts with color $d$, and denote by $B(u,v,w,t,n)$ the number of partitions of $n$ satisfying the conditions above, with $u$ parts with color $a,ab,ac$ or $ad$, $v$ parts with color $b,ab,bc$ or $bd$, $w$ parts with color $c,ac,bc$ or $cd$ and $t$ parts with color $d,ad,bd$ or $cd$. We then have $A(u,v,w,t,n)=B(u,v,w,t,n)$, and the corresponding $q$-series identity is given by $$\label{series}
\sum_{u,v,w,t,n\in {\mathbb{N}}} B(u,v,w,t,n)a^ub^vc^wd^tq^n =(-aq;q)_\infty (-bq;q)_\infty(-cq;q)_\infty(-dq;q)_\infty\,\cdot$$
By specializing the variables in [**Theorem \[th1\]**]{}, one can deduce many partition identities. For example, by considering the following transformation in $$\label{dila}
\left\lbrace
\begin{array}{l r c l }
\text{dilation :} &q &\mapsto&q^{12}\\
\text{translations :} &a,b,c,d &\mapsto&q^{-8},q^{-4},q^{-2},q^{-1}\\
\end{array}
\right. \,,$$ we obtain a corollary of [**Theorem \[th1\]**]{}.
For any positive integer $n$, the number of partitions of $n$ into distinct parts congruent to $-2^3,-2^2,-2^1,-2^0\mod 12$ is equal to the number of partitions of $n$ into parts not congruent to $1,5\mod 12$ and different from $2,3,6,7,9$, such that the difference between two consecutive parts is greater than $12$ up to the following exceptions:
- ${\lambda}_i-{\lambda}_{i+1}= 9\Longrightarrow {\lambda}_i\equiv \pm 3 \mod 12$ and ${\lambda}_i-{\lambda}_{i+2}\geq 24$,
- ${\lambda}_i-{\lambda}_{i+1}= 12\Longrightarrow{\lambda}_i\equiv -2^3,-2^2,-2^1,-2^0\mod 12$,
except that the pattern $(27,18,4)$ is allowed.
For example, with $n=49$, the partitions of the first kind are $$(35,10,4),(34,11,4),(28,11,10),(23,22,4),$$ $$(23,16,10),(22,16,11),(20,11,10,8)\,\,\text{and}\,\, (16,11,10,8,4)$$ and the partitions of the second kind are $$(45,4),(39,10),(38,11),(35,14),(34,15),(33,16),(31,18)\,\,\text{and}\,\,(27,18,4)\,\cdot$$ **Corollary 1.1** may be compared with **Theorem 3** of [@AAB03], which is [**Theorem \[th2\]**]{} transformed by (\[dila\]) but with the dilation $q \mapsto q^{15}$ instead of $q \mapsto q^{12}$.\
\
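As a check, the first-kind partitions in the example above can be enumerated by brute force (the helper below is ours):

```python
def distinct_partitions(n, allowed, mx=None):
    """Yield partitions of n into distinct parts taken from the set `allowed`."""
    if mx is None:
        mx = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, mx), 0, -1):
        if k in allowed:
            for rest in distinct_partitions(n - k, allowed, k - 1):
                yield (k,) + rest

# Parts congruent to -2^3, -2^2, -2^1, -2^0, i.e. to 4, 8, 10, 11 modulo 12.
allowed = {k for k in range(1, 50) if k % 12 in (4, 8, 10, 11)}
first_kind = list(distinct_partitions(49, allowed))
```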
The paper is organized as follows. In [Section \[sct2\]]{}, we will present some tools that will be useful for the proof of [**Theorem \[th1\]**]{}. After that, in [Section \[sct3\]]{}, we will give the bijection for [**Theorem \[th1\]**]{}. Then, in [Section \[sct4\]]{}, we will prove its well-definedness. Finally, in [Section \[sct5\]]{}, we will present and prove the bijection between the partitions with forbidden patterns considered in [**Theorem \[th1\]**]{} and the partitions with quaternary parts given in [**Theorem \[th2\]**]{}. In Part II of this series, we will show how our method can be used to go beyond Göllnitz’ theorem to any number of primary colors.
Preliminaries {#sct2}
=============
The setup
---------
Denote by ${\mathcal{C}}=\{a,b,c,d\}$ the set of primary colors and ${\mathcal{C}_\rtimes}=\{ad,ab,ac,bc,bd,cd\}$ the set of secondary colors, and recall the order on ${\mathcal{C}}\sqcup{\mathcal{C}_\rtimes}$: $$\label{orD}
ab<ac<ad<a<bc<bd<b<cd<c<d\,\cdot$$ We can then define the strict lexicographic order $\succ$ on colored parts by $$\label{lex}
k_p\succ l_{q} \Longleftrightarrow k-l\geq \chi(p\leq q)\,\cdot$$ Explicitly, this gives the order $$\label{cons1}
1_{ab}\prec 1_{ac}\prec 1_{ad} \prec 1_{a}\prec 1_{bc} \prec 1_{bd}\prec 1_b\prec 1_{cd}\prec 1_c\prec 1_d\prec 2_{ab}\prec\cdots\,,$$ as previously established in (\[cons\]). We denote by ${\mathcal{P}}$ the set of positive integers with primary color.\
\
We can easily see that for any $pq\in {\mathcal{C}_\rtimes}$, *with* $p<q$, and any $k\geq 1$, we have that $$\begin{aligned}
\label{half1}(2k)_{pq} &= k_{q}+k_{p}\\
\label{half2}(2k+1)_{pq} &= (k+1)_{p}+k_{q}\,\cdot\end{aligned}$$ In fact, any part greater than $1$ with a secondary color $pq$ can be uniquely written as the sum of two consecutive parts in ${\mathcal{P}}$ with colors $p$ and $q$. We then denote by ${\mathcal{S}}$ the set of secondary parts greater than $1$, and define the functions $\alpha$ and $\beta$ on ${\mathcal{S}}$ by $$\label{ab}
\alpha: \left\lbrace \begin{array}{l c l}
2k_{pq}&\mapsto& k_{q}\\
(2k+1)_{pq}&\mapsto&(k+1)_{p}
\end{array}\right.\qquad \text{and}\qquad\beta: \left\lbrace \begin{array}{l c l}
2k_{pq}&\mapsto& k_{p}\\
(2k+1)_{pq}&\mapsto&k_{q}
\end{array}\right.\,,$$ respectively named upper and lower halves. One can check that for any $k_{pq}\in {\mathcal{S}}$, $$\label{abba}
\alpha((k+1)_{pq}) = \beta(k_{pq})+1\quad\text{and}\quad \beta((k+1)_{pq})=\alpha(k_{pq})\,\cdot$$ Here, adding an integer to a colored part changes its size but not its color. We can then deduce by induction that for any $m\geq 0$, $$\label{aj}
\alpha((k+m)_{pq})\preceq \alpha(k_{pq})+m \quad\text{and}\quad \beta((k+m)_{pq})\preceq \beta(k_{pq})+m\,\cdot$$ Recall the table $$\begin{array}{|c|cccc|ccc|cc|c|}
\hline
_{\lambda_i}\setminus^{\lambda_{i+1}}&ab&ac&ad&a&bc&bd&b&cd&c&d\\
\hline
ab&2&2&2&2&2&2&2&2&2&2\\
ac&1&2&2&2&2&2&2&2&2&2\\
ad&1&1&2&2&\textcolor{blue}{2}&2&2&2&2&2\\
a&1&1&1&1&2&2&2&2&2&2\\
\hline
bc&\textcolor{blue}{1}&1&1&1&2&2&2&2&2&2\\
bd&1&1&1&1&1&2&2&2&2&2\\
b&1&1&1&1&1&1&1&2&2&2\\
\hline
cd&1&1&1&1&1&1&1&2&2&2\\
c&1&1&1&1&1&1&1&1&1&2\\
\hline
d&1&1&1&1&1&1&1&1&1&1\\
\hline
\end{array}\,\cdot$$ It can be viewed as an order $\triangleright$ on ${\mathcal{P}}\sqcup {\mathcal{S}}$ defined by $$\label{Ordre}
k_p\triangleright l_{q}\Longleftrightarrow k-l\geq 1+\left\lbrace
\begin{array}{ll}
\chi(p<q)&\text{if}\quad p\,\,\text{or}\,\,q\in {\mathcal{C}}\\
\chi(p\leq q)&\text{if}\quad p\,\,\text{and}\,\,q\in {\mathcal{C}_\rtimes}\end{array}\right.\,\cdot$$ In terms of the lexicographic order $\succ$, (\[Ordre\]) becomes $$\label{Ord}
k_p\triangleright l_{q}\Longleftrightarrow\left\lbrace
\begin{array}{ll}
k_p\succeq (l+1)_{q}&\text{if}\quad p\,\,\text{or}\,\,q\in {\mathcal{C}}\\
k_p\succ (l+1)_{q}&\text{if}\quad p\,\,\text{and}\,\,q\in {\mathcal{C}_\rtimes}\end{array}\right.\,\cdot$$ We can observe that for any primary colors $p,q$ $$\label{nog}
k_p\succ l_{q}\quad\text{and}\quad k_p\not\triangleright \,\,l_{q}\quad\Longleftrightarrow \quad k-l=\chi(p<q)\quad\text{and}\quad p\neq q\,,$$ and we easily check that in this case, $(k_p,l_q) = (\alpha(k_p+l_{q}),\beta(k_p+l_{q}))$, for $k_p+l_{q}$ viewed as an element of ${\mathcal{S}}$ (see (\[half1\]), (\[half2\])).\
\
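The halves $\alpha$, $\beta$ and the relations (\[abba\]) can be checked mechanically; in the Python sketch below (our encoding, not from the paper) a colored part $k_{pq}$ is represented as a pair of a size and a primary color:

```python
def alpha(k, p, q):
    """Upper half of the secondary part k_{pq} (p < q), following (ab)."""
    return (k // 2, q) if k % 2 == 0 else (k // 2 + 1, p)

def beta(k, p, q):
    """Lower half of the secondary part k_{pq}."""
    return (k // 2, p) if k % 2 == 0 else (k // 2, q)

for k in range(2, 50):
    a, b = alpha(k, 'c', 'd'), beta(k, 'c', 'd')
    # k_{cd} decomposes as the sum of its upper and lower halves.
    assert a[0] + b[0] == k
    # Relations (abba): adding 1 to a colored part keeps its color.
    assert alpha(k + 1, 'c', 'd') == (b[0] + 1, b[1])
    assert beta(k + 1, 'c', 'd') == a
```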
We recall that the tables $$\Delta=\begin{array}{|c|cccc|ccc|cc|c|}
\hline
_{\lambda_i}\setminus^{\lambda_{i+1}}&ab&ac&ad&a&bc&bd&b&cd&c&d\\
\hline
ab&2&2&2&2&2&2&2&2&2&2\\
ac&1&2&2&2&2&2&2&2&2&2\\
ad&1&1&2&2&\textcolor{red}{1}&2&2&2&2&2\\
a&1&1&1&1&2&2&2&2&2&2\\
\hline
bc&1&1&1&1&2&2&2&2&2&2\\
bd&1&1&1&1&1&2&2&2&2&2\\
b&1&1&1&1&1&1&1&2&2&2\\
\hline
cd&\textcolor{red}{0}&1&1&1&1&1&1&2&2&2\\
c&1&1&1&1&1&1&1&1&1&2\\
\hline
d&1&1&1&1&1&1&1&1&1&1\\
\hline
\end{array}\,$$ and (\[Ordre\]) differ only when we have a pair $(p,q)$ of secondary colors such that $(p,q)\in \{(cd,ab),(ad,bc)\}$. In these cases, the entry in $\Delta$ is one less.\
\
We will now define a relation $\gg$ on ${\mathcal{P}}\sqcup {\mathcal{S}}$ in such a way that $$k_p\gg l_q \Longleftrightarrow k-l\geq \Delta(p,q)\,\cdot$$ Using (\[Ord\]), this relation can be summarized by the following equivalence: $$\label{Ordd}
k_p\gg l_{q}\Longleftrightarrow\left\lbrace
\begin{array}{ll}
k_p\succeq (l+1)_{q}&\text{if}\quad p\,\,\text{or}\,\,q \in {\mathcal{C}}\\
k_p\succ (l+1)_{q}&\text{if}\quad p\,\,\text{and}\,\,q \in {\mathcal{C}_\rtimes}\quad\text{and}\quad (p,q)\notin\{(cd,ab),(ad,bc)\}\\
k_p\succ l_{q}&\text{if}\quad (p,q)\in\{(cd,ab),(ad,bc)\}
\end{array}\right.\,\cdot$$ We denote by ${\mathcal{O}}$ the set of partitions with parts in ${\mathcal{P}}$ and well-ordered by $\succ$. We then have that ${\lambda}\in {\mathcal{O}}$ if and only if there exist ${\lambda}_1\succ\cdots\succ {\lambda}_t \in {\mathcal{P}}$ such that ${\lambda}=({\lambda}_1,\ldots,{\lambda}_t)$. We set $c({\lambda}_i)$ to be the color of ${\lambda}_i$ in ${\mathcal{C}}$, and $C({\lambda}) = c({\lambda}_1)\cdots c({\lambda}_t)$ as a commutative product of colors in $<{\mathcal{C}}>$. We denote by ${\mathcal{E}}$ the set of partitions with parts in ${\mathcal{P}}\sqcup {\mathcal{S}}$ and well-ordered by $\gg$. We then have that $\nu \in {\mathcal{E}}$ if and only if there exist $\nu_1\gg\cdots\gg\nu_t \in {\mathcal{P}}\sqcup {\mathcal{S}}$ such that $\nu=(\nu_1,\ldots,\nu_t)$. We set colors $c(\nu_i)\in {\mathcal{C}}\sqcup{\mathcal{C}_\rtimes}$ depending on whether $\nu_i$ is in ${\mathcal{P}}$ or ${\mathcal{S}}$, and we also define $C(\nu)=c(\nu_1)\cdots c(\nu_t)$ *seen* as a commutative product of colors in ${\mathcal{C}}$. In fact, a secondary color is just a product of two primary colors. For both kinds of partitions, their size is the sum of their part sizes.\
\
We also denote by ${\mathcal{E}_1}$ the subset of partitions of ${\mathcal{E}}$ avoiding the forbidden patterns $$\label{forb}
((k+2)_{cd},(k+2)_{ab},k_c),((k+2)_{cd},(k+2)_{ab},k_d), ((k+2)_{ad},(k+1)_{bc},k_{a})\,,$$ except the pattern $(3_{ad},2_{bc},1_a)$ which is allowed. We finally define ${\mathcal{E}_2}$ as the subset of partitions of ${\mathcal{E}}$ with parts well-ordered by $\triangleright$ in , and we observe that ${\mathcal{E}_2}$ is indeed a subset of ${\mathcal{E}_1}$.
Technical lemmas
----------------
We will state and prove some important lemmas for the proof of [**Theorem \[th1\]**]{}.
\[lem1\] For any $(l_p,k_{q})\in{\mathcal{P}}\times {\mathcal{S}}$, we have the following equivalences: $$\begin{aligned}
&\quad l_p\not \gg k_{q}\Longleftrightarrow (k+1)_{q}\gg (l-1)_p\label{oe}\,,\\
&\quad l_p \gg \alpha(k_{q})\Longleftrightarrow \beta((k+1)_{q})\not \succ (l-1)_p\label{eo}\,\cdot\end{aligned}$$
\[lem2\] Let us consider the table $\Delta$ in . Then, for any secondary colors $p,q\in {\mathcal{C}_\rtimes}$, $$\label{gam}
\Delta(p,q) = \min\{k-l:\beta(k_{p})\succ \alpha(l_{q})\}\,\cdot$$ Moreover, if the secondary parts $k_p,l_q$ are such that $\beta(k_p)\succ\beta(\l_q)$, then $$\label{sw1}
(k+1)_p\gg l_q\,\cdot$$ Furthermore, if $k_{p}\gg l_{q}$, we then have either $\beta(k_{p})\succ \alpha(l_{q})$ or $$\label{sw}
\alpha(l_{q})+1\gg \alpha((k-1)_{p} )\succ \beta((k-1)_{p})\succ \beta(l_q)\,\cdot$$
\[lem3\] Let us consider a part $\nu =(\nu_1,\ldots,\nu_t)\in {\mathcal{E}}$. Then, for any $i\in [1,t-2]$ such that $(\nu_{i+1},\nu_{i+2})\in {\mathcal{S}}\times{\mathcal{P}}$ and $(c(\nu_i),c(\nu_{i+1}))\notin \{(ad,bc),(cd,ab)\}$, we have $$\label{ee}
\nu_{i}\succ \nu_{i+2}+2\,\cdot$$ Furthermore, the following are equivalent:
1. $\nu\in {\mathcal{E}}_1$,
2. For any $i\in [1,t-2]$ such that $(\nu_i,\nu_{i+1})$ is a pattern in $\{((k+1)_{ad},k_{bc}),(k_{cd},k_{ab})\}$ different from $(3_{ad},2_{bc})$, we have that $$\label{ee2}
\nu_{i}\succeq \nu_{i+2}+2\,\cdot$$
To prove , we observe that, for any $(l_p,k_{q})\in{\mathcal{P}}\times {\mathcal{S}}$, by , $$l_p\not \gg k_{q} \Longleftrightarrow l_p\not\succeq (k+1)_{q}\,,$$ and $$\begin{aligned}
(k+1)_{q}\gg (l-1)_p&\Longleftrightarrow (k+1)_{q}\succ l_p\\
&\Longleftrightarrow (k+1)_{q}\not\preceq l_p\,\cdot\end{aligned}$$ To prove , we first remark that, by , $\alpha(k_{q})=\beta((k+1)_{q})$. We then obtain by that $$l_p \gg \alpha(k_{q}) \Longleftrightarrow (l-1)_p \succeq \alpha(k_{q})$$ and $$\begin{aligned}
\beta((k+1)_{q})\not\succ(l-1)_p&\Longleftrightarrow \alpha(k_{q})\not\succ(l-1)_p\\
&\Longleftrightarrow \alpha(k_{q})\preceq (l-1)_p\,\cdot\end{aligned}$$
Let us consider $\min\{k-l: \beta(k_{p})\succ \alpha(l_{q})\}$. We just check for the $36$ pairs $(p,q)$ in ${\mathcal{C}_\rtimes}^2$. As an example, we take the pairs $(cd,ab),(ad,bc)$.
- For $k=2k'+1$, we have $(\alpha(k_{cd}),\beta(k_{cd}))=((k'+1)_c,k'_d)$. Then, to minimize $k-l$, $\alpha(l_{ab})$ and $\beta(l_{ab})$ have to be the greatest primary parts with color $a,b$ less than $k'_d$. So we obtain $k'_b$ and $k'_a$. We then have $$k-l= 2k'+1-2k'=1\,\cdot$$ For $k=2k'$, we have $(\alpha(k_{cd}),\beta(k_{cd}))=(k'_d,k'_c)$. Then to minimize $k-l$, $\alpha(l_{ab})$ and $\beta(l_{ab})$ have to be the greatest primary parts with color $a,b$ less than $k'_c$. So we obtain $k'_b$ and $k'_a$. We then have $$k-l= 2k'-2k'=0\,\cdot$$ Therefore, $\Delta(cd,ab)=0$.
- We check with the same reasoning by taking for $(ad,bc)$ consecutive parts $$(k+1)_a\succ k_d\succ k_c\succ k_b$$ and $$(k+1)_d\succ(k+1)_a\succ k_c \succ k_b\,,$$ and we obtain $\Delta(ad,bc)=1$.\
To prove , we have by that $\alpha((l-1)_q)=\beta(l_q)$. Since $\beta(k_p)\succ \beta(l_q)=\alpha((l-1)_q)$, this then implies by that $k_p\gg (l-1)_q$, and this is equivalent to $(k+1)_p\gg l_q$.\
\
Let us now suppose that $k-l\geq \Delta(p,q)$. We just saw that this minimum value was reached at $k$ or $k-1$. Then if we do not have $\beta(k_{p})\succ \alpha(l_{q})$, we necessarily have $\beta((k-1)_{p})\succ \alpha((l-1)_{q})=\beta(l_{q})$ by . Moreover, by , we have $$\beta(k_{p})\not\succ \alpha(l_{q})\Longleftrightarrow \alpha(l_{q})+1\gg \alpha((k-1)_{p} )\,,$$ so that we obtain .
For any $\nu =(\nu_1,\ldots,\nu_t)\in {\mathcal{E}}$ and any $i\in [1,t-2]$, we have $\nu_i\gg\nu_{i+1}\gg\nu_{i+2}$. By , the fact that $(\nu_{i+1},\nu_{i+2})\in {\mathcal{S}}\times{\mathcal{P}}$ implies that $$\nu_{i+1}\succ \nu_{i+2}+1$$ and $(c(\nu_i),c(\nu_{i+1}))\notin \{(ad,bc),(cd,ab)\}$ implies that $$\nu_i\succ \nu_{i+1}+1 \,,$$ and we thus have .\
\
To prove the second part, we have to show that not having the forbidden patterns in is equivalent to the second condition.
- If we suppose that $(\nu_i,\nu_{i+1})= (k_{cd},k_{ab})$, we then have by that $$\begin{aligned}
\nu_{i+1}\gg \nu_{i+2}&\Longleftrightarrow \nu_{i+1}\succ\nu_{i+2}+1\\
&\Longleftrightarrow k_{ab}\succ\nu_{i+2}+1\\
&\Longleftrightarrow (k-1)_{d}\succeq\nu_{i+2}+1 \quad\text{(by \eqref{cons1})}\,\cdot\end{aligned}$$ By , we then have that the fact that the patterns $k_{cd},k_{ab},(k-2)_d$ and $k_{cd},k_{ab},(k-2)_c$ are forbidden for $k\geq 3$ is equivalent to $(k-1)_{cd}\succeq\nu_{i+2}+1$, which means that $k_{cd}\succeq \nu_{i+2}+2$.
- If we suppose that $(\nu_i,\nu_{i+1})= ((k+1)_{ad},k_{bc})$ with $k\geq 3$, we then have by that $$\begin{aligned}
\nu_{i+1}\gg \nu_{i+2}&\Longleftrightarrow \nu_{i+1}\succ\nu_{i+2}+1\\
&\Longleftrightarrow k_{cb}\succ\nu_{i+2}+1\\
&\Longleftrightarrow k_{a}\succeq\nu_{i+2}+1\quad\text{(by \eqref{cons1})}\,\cdot\end{aligned}$$ We then have by that the fact that the pattern $(k+1)_{ad},k_{bc},(k-1)_a$ is forbidden for $k\geq 3$ is equivalent to $k_{ad}\succeq\nu_{i+2}+1$, which means that $(k+1)_{ad}\succeq \nu_{i+2}+2$.
Bressoud’s algorithm {#sct3}
====================
Here we adapt the algorithm given by Bressoud in his bijective proof of Schur’s partition theorem [@BR80]. The bijection is easy to describe and execute, but its justification is more subtle and is given in the next section.
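The uncolored prototype of this construction is Schur's 1926 theorem: the number of partitions of $n$ into distinct parts not divisible by $3$ equals the number of partitions of $n$ whose parts differ by at least $3$, with strict inequality when the larger part is divisible by $3$. As a purely numerical sanity check of the statement underlying Bressoud's bijection, one can enumerate both sides by brute force (a small sketch; the function names are ours and no bijection is computed here):

```python
def schur_lhs(n):
    """Count partitions of n into distinct parts not divisible by 3."""
    def count(remaining, max_part):
        if remaining == 0:
            return 1
        total = 0
        for p in range(min(remaining, max_part), 0, -1):
            if p % 3 != 0:
                total += count(remaining - p, p - 1)  # p - 1 enforces distinctness
        return total
    return count(n, n)

def schur_rhs(n):
    """Count partitions of n with gaps >= 3, and > 3 below a multiple of 3."""
    def count(remaining, max_part):
        if remaining == 0:
            return 1
        total = 0
        for p in range(min(remaining, max_part), 0, -1):
            nxt = p - 4 if p % 3 == 0 else p - 3  # stricter gap after multiples of 3
            total += count(remaining - p, nxt)
        return total
    return count(n, n)

assert [schur_lhs(n) for n in range(1, 8)] == [1, 1, 1, 1, 2, 2, 3]
assert all(schur_lhs(n) == schur_rhs(n) for n in range(1, 25))
```

The colored refinement treated in this section replaces these two counting conditions by the orders $\succ$ and $\gg$ on colored parts.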
From ${\mathcal{O}}$ to ${\mathcal{E}_1}$
-----------------------------------------
Let us consider the following machine $\Phi$:
- For a sequence ${\lambda}= {\lambda}_1,\ldots,{\lambda}_t$, take the smallest $i<t$ such that ${\lambda}_i,{\lambda}_{i+1}\in {\mathcal{P}}$ and ${\lambda}_i\succ {\lambda}_{i+1}$ but ${\lambda}_i\not\gg {\lambda}_{i+1}$, if it exists, and replace $$\begin{array}{l c l l}
{\lambda}_i &\leftarrowtail& {\lambda}_i+{\lambda}_{i+1}&\text{as a part in } {\mathcal{S}}\\
{\lambda}_j &\leftarrow& {\lambda}_{j+1}& \text{for all}\quad i<j<t\,\,
\end{array}$$ and move to [**Step 2**]{}. We call such a pair of parts a *troublesome* pair. We observe that ${\lambda}$ loses two parts in ${\mathcal{P}}$ and gains one part in ${\mathcal{S}}$. The new sequence is ${\lambda}= {\lambda}_1,\ldots,{\lambda}_{t-1}$. Otherwise, exit from the machine.\
- For ${\lambda}= {\lambda}_1,\ldots,{\lambda}_t$, take the smallest $i<t$ such that $({\lambda}_i,{\lambda}_{i+1})\in {\mathcal{P}}\times{\mathcal{S}}$ and ${\lambda}_i\not\gg {\lambda}_{i+1}$ if it exists, and replace $$({\lambda}_i,{\lambda}_{i+1}) \looparrowright ({\lambda}_{i+1}+1,{\lambda}_i-1)\in {\mathcal{S}}\times{\mathcal{P}}$$ and redo [**Step 2**]{}. We say that the parts ${\lambda}_i,{\lambda}_{i+1}$ are *crossed*. Otherwise, move to [**Step 1**]{}.
Let $\Phi({\lambda})$ be the resulting sequence obtained by running any ${\lambda}=({\lambda}_1,\ldots,{\lambda}_t)\in {\mathcal{O}}$ through $\Phi$. This transformation preserves the size and the commutative product of primary colors of partitions. Let us apply this machine to the partition $(11_c,8_d,6_a,4_d,4_c,4_b,3_a,2_b,2_a,1_d,1_c,1_b,1_a)$. $$\begin{array}{ccccccccccccccccc}
\begin{matrix}
11_c\\
8_d\\
6_a\\
\underline{4_d}\\
\underline{4_c}\\
4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix} &\rightarrowtail&
\begin{matrix}
11_c\\
8_d\\
\mathbf{6_a}\\
\mathbf{8_{cd}}\\
4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}&
\looparrowright&
\begin{matrix}
11_c\\
\mathbf{8_d}\\
\mathbf{9_{cd}}\\
5_a\\
4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\looparrowright&
\begin{matrix}
11_c\\
10_{cd}\\
7_d\\
\underline{5_a}\\
\underline{4_b}\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\rightarrowtail&
\begin{matrix}
11_c\\
10_{cd}\\
\mathbf{7_d}\\
\mathbf{9_{ab}}\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\looparrowright&
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
\underline{3_a}\\
\underline{2_b}\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\rightarrowtail&
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
5_{ab}\\
\underline{2_a}\\
\underline{1_d}\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\rightarrowtail&
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
5_{ab}\\
3_{ad}\\
\underline{1_c}\\
\underline{1_b}\\
1_a
\end{matrix}
&
\rightarrowtail&
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
5_{ab}\\
3_{ad}\\
2_{bc}\\
1_a
\end{matrix}
\end{array}\,\cdot$$
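The two invariants of $\Phi$ claimed above — preservation of the size and of the commutative product of primary colors — can be checked mechanically on this example (a small sketch; the tuple encoding of colored parts is ours):

```python
from collections import Counter

# Input and output of the worked example, as (size, color-word) pairs.
start = [(11, "c"), (8, "d"), (6, "a"), (4, "d"), (4, "c"), (4, "b"),
         (3, "a"), (2, "b"), (2, "a"), (1, "d"), (1, "c"), (1, "b"), (1, "a")]
end = [(11, "c"), (10, "cd"), (10, "ab"), (6, "d"), (5, "ab"),
       (3, "ad"), (2, "bc"), (1, "a")]

size = lambda p: sum(k for k, _ in p)
colors = lambda p: Counter("".join(c for _, c in p))  # commutative product

assert size(start) == size(end) == 48
assert colors(start) == colors(end)  # a^4 b^3 c^3 d^3 on both sides
```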
From ${\mathcal{E}_1}$ to ${\mathcal{O}}$
-----------------------------------------
Let us consider the following machine $\Psi$:
- For a sequence $\nu= \nu_1,\ldots,\nu_t$, take the greatest $i\leq t$ such that $\nu_i\in {\mathcal{S}}$, if it exists. If $\nu_{i+1}\in{\mathcal{P}}$ and $\beta(\nu_i)\not\succ \nu_{i+1}$, then replace $$(\nu_i,\nu_{i+1}) \looparrowright (\nu_{i+1}+1,\nu_i-1)\in {\mathcal{P}}\times{\mathcal{S}}$$ and redo [**Step 1**]{}. We say that the parts $\nu_i,\nu_{i+1}$ are *crossed*. Otherwise, move to [**Step 2**]{}. If there are no more parts in ${\mathcal{S}}$, exit from the machine.\
- For $\nu= \nu_1,\ldots,\nu_t$, take the greatest $i\leq t$ such that $\nu_i\in {\mathcal{S}}$. By [**Step 1**]{}, it satisfies $\beta(\nu_i)\succ \nu_{i+1}$. Then replace $$\begin{array}{l c l l}
\nu_{j+1} &\leftarrow& \nu_{j}& \text{for all}\quad t\geq j>i\,\, \\
(\nu_i) &\rightrightarrows& (\alpha(\nu_i),\beta(\nu_i))&\text{as a pair of parts in }{\mathcal{P}}\,,
\end{array}$$ and move to [**Step 1**]{}. We say that the part $\nu_i$ *splits*. We observe that $\nu$ gains two parts in ${\mathcal{P}}$ and loses one part in ${\mathcal{S}}$. The new sequence is $\nu = \nu_1,\ldots,\nu_{t+1}$.\
Let $\Psi(\nu)$ be the resulting sequence after putting any $\nu=(\nu_1,\ldots,\nu_t)\in {\mathcal{E}_1}$ in $\Psi$. This transformation preserves the size and the product of primary colors of partitions. For example, applying this to $(11_c,10_{cd},10_{ab},6_d,5_{ab},3_{ad},2_{bc},1_a)$ gives
$$\begin{array}{ccccccccccccccccc}
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
5_{ab}\\
3_{ad}\\
1_c+1_b\\
1_a
\end{matrix} &
\rightrightarrows&
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
5_{ab}\\
2_a+1_d\\
1_c\\
1_b\\
1_a
\end{matrix} &
\rightrightarrows&
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
3_a+2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}&
\rightrightarrows&
\begin{matrix}
11_c\\
10_{cd}\\
5_b+5_a\\
6_d\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\looparrowright&
\begin{matrix}
11_c\\
10_{cd}\\
7_d\\
5_a+4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\rightrightarrows&
\begin{matrix}
11_c\\
5_d+5_c\\
7_d\\
5_a\\
4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\looparrowright&
\begin{matrix}
11_c\\
8_d\\
5_c+4_d\\
5_a\\
4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
&
\looparrowright&
\begin{matrix}
11_c\\
8_d\\
6_a\\
4_d+4_c\\
4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}&
\rightrightarrows&
\begin{matrix}
11_c\\
8_d\\
6_a\\
4_d\\
4_c\\
4_b\\
3_a\\
2_b\\
2_a\\
1_d\\
1_c\\
1_b\\
1_a
\end{matrix}
\end{array}\,\cdot$$
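Forgetting the colors, the two moves of $\Psi$ act on part sizes in an elementary way: crossing sends an adjacent pair $(s,p)$ to $(p+1,s-1)$, and splitting replaces a secondary size $s$ by its upper and lower halves. A minimal sketch (sizes only; the colors and the order conditions governing when each move fires are omitted), checked against one step of the example above:

```python
def psi_cross(seq, i):
    """Cross the secondary part seq[i] with the primary part seq[i+1]:
    sizes (s, p) become (p + 1, s - 1); the total is preserved."""
    s, p = seq[i], seq[i + 1]
    return seq[:i] + [p + 1, s - 1] + seq[i + 2:]

def split(s):
    """Split a secondary part of size s into its upper and lower halves."""
    return ((s + 1) // 2, s // 2)

# In the example, 10_{ab} crosses 6_d: (10, 6) -> (7, 9),
# and the resulting 9_{ab} then splits as 5_a + 4_b.
assert psi_cross([11, 10, 10, 6], 2) == [11, 10, 7, 9]
assert split(9) == (5, 4) and split(10) == (5, 5)
```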
Proof of the well-definedness of Bressoud’s maps {#sct4}
================================================
In this section, we will show the following proposition.
\[pr\] The transformation $\Phi$ describes a mapping from ${\mathcal{O}}$ to ${\mathcal{E}_1}$ such that $\Psi\circ\Phi = Id_{{\mathcal{O}}}$, and $\Psi$ describes a mapping from ${\mathcal{E}_1}$ to ${\mathcal{O}}$ such that $\Phi\circ\Psi=Id_{{\mathcal{E}_1}}$.
Well-definedness of $\Phi$
--------------------------
In this subsection, we will show the following proposition.
\[pr1\] Let us consider any ${\lambda}=({\lambda}_1,\ldots,{\lambda}_t)\in {\mathcal{O}}$, and set $\gamma^0=0$, $\mu^0=\lambda$. Then, in the process $\Phi$ on ${\lambda}$, at the $u^{th}$ passage from [**Step 2** ]{}to [**Step 1**]{}, there exists a pair of partitions $\gamma^u,\mu^u\in {\mathcal{E}_1}\times {\mathcal{O}}$ such that the sequence obtained is $\gamma^u,\mu^u$. Moreover, if we denote by $l(\gamma^u)$ and $g(\mu^u)$ respectively the smallest part of $\gamma^u$ and the greatest part of $\mu^u$, we then have that
1. $l(\gamma^u)$ is the $u^{th}$ element in ${\mathcal{S}}$ of $\gamma^u$,
2. $l(\gamma^u)\gg g(\mu^u)$ so that the partition $(\gamma^u,g(\mu^u))$ is in ${\mathcal{E}}_1$,
3. for any $u$, $\gamma^u$ is the beginning of the partition $\gamma^{u+1}$ and the number of parts of $\mu^{u+1}$ is at least two less than the number of parts of $\mu^{u}$.
Let ${\lambda}= ({\lambda}_1,\ldots,{\lambda}_t)$ be a partition in ${\mathcal{O}}$. Let us set $c_1,\ldots,c_t$ to be the primary colors of the parts ${\lambda}_1,\ldots,{\lambda}_t$. Now consider the first troublesome pair ${\lambda}_i,{\lambda}_{i+1}\in {\mathcal{P}}$ obtained at [**Step 1** ]{}in $\Phi$, and the first resulting secondary part ${\lambda}_i+{\lambda}_{i+1}\in {\mathcal{S}}$. Note that this is reversible by [**Step 2** ]{}of $\Psi$.
- If there is a part ${\lambda}_{i+2}$ after ${\lambda}_{i+1}$, we have that $$\begin{aligned}
{\lambda}_i+{\lambda}_{i+1}-{\lambda}_{i+2}&= \chi(c_i<c_{i+1})+2{\lambda}_{i+1}-{\lambda}_{i+2}\quad \text{by \eqref{nog}}\\
&\geq \chi(c_i<c_{i+1})+2\chi(c_{i+1}\leq c_{i+2})+{\lambda}_{i+2}\quad \text{by \eqref{lex}}\\
&\geq 1+\chi(c_i\leq c_{i+2})+\chi(c_{i+1}\leq c_{i+2})\,\cdot\end{aligned}$$ Since by , we have that $c_i> c_{i+2}$ and $c_{i+1}> c_{i+2}$ implies $c_ic_{i+1}>c_{i+2}$, we then have that ${\lambda}_i+{\lambda}_{i+1}-{\lambda}_{i+2}\geq 1+\chi(c_ic_{i+1}\leq c_{i+2})$, and we conclude that ${\lambda}_i+{\lambda}_{i+1}\gg {\lambda}_{i+2}$.\
- The primary parts to the left of ${\lambda}_i$ are well-ordered by $\gg$. We then have $${\lambda}_1\gg\cdots\gg {\lambda}_{i-1}\gg {\lambda}_i\,\cdot$$ We obtain by and that for any $j<i$, $$\begin{aligned}
{\lambda}_{j}\gg{\lambda}_{i}+i-j-1\succeq \alpha({\lambda}_i+{\lambda}_{i+1}+i-j-1)\end{aligned}$$ so that by , ${\lambda}_{j}\gg \alpha({\lambda}_i+{\lambda}_{i+1}+i-j-1)$. If after $i-j$ iterations of [**Step 2**]{}, ${\lambda}_i+{\lambda}_{i+1}$ crosses ${\lambda}_j$, we will have at the same time that $$\begin{aligned}
({\lambda}_i+{\lambda}_{i+1}+i-j)&\gg ({\lambda}_j-1)\gg\cdots\gg{\lambda}_{i-1}-1\quad\text{(by \eqref{oe})}\label{rg}\\
\beta({\lambda}_i+{\lambda}_{i+1}+i-j)&\not\succ ({\lambda}_j-1)\gg\cdots\gg{\lambda}_{i-1}-1\quad\text{(by \eqref{eo})}\label{reverse1}\,,\end{aligned}$$ and so these iterations are reversible by [**Step 1** ]{}in $\Psi$ (recursively on $j\leq j'<i$).
- We also have by that $$\begin{aligned}
{\lambda}_{i-1}\gg{\lambda}_i\succ{\lambda}_{i+1}\succ {\lambda}_{i+2} &\Longrightarrow {\lambda}_{i-1}-1\succeq {\lambda}_i\succ{\lambda}_{i+1}\succ {\lambda}_{i+2}\\
&\Longrightarrow {\lambda}_{i-1}-1\succ{\lambda}_{i+2}\,\cdot\end{aligned}$$
- If we can no longer apply [**Step 2** ]{}after $i-j$ iterations, we then obtain $${\lambda}_1\gg\cdots\gg{\lambda}_{j-1}\gg({\lambda}_i+{\lambda}_{i+1}+i-j)\gg ({\lambda}_j-1)\gg\cdots\gg{\lambda}_{i-1}-1\succ {\lambda}_{i+2}\succ\cdots \succ {\lambda}_t$$ and we set $$\begin{aligned}
\gamma^1&={\lambda}_1\gg\cdots\gg({\lambda}_i+{\lambda}_{i+1}+i-j)\\
\mu^1&=({\lambda}_j-1)\gg\cdots\gg{\lambda}_{i-1}-1\succ {\lambda}_{i+2}\succ\cdots\succ{\lambda}_t\,,\label{pousse}\end{aligned}$$ and the conditions in the proposition are respected. In fact, even if $j=i$, we saw that ${\lambda}_i+{\lambda}_{i+1}\gg{\lambda}_{i+2}$.\
Now, by applying [**Step 1** ]{}for the second time, we see by that the next troublesome pair is either ${\lambda}_{i-1}-1,{\lambda}_{i+2}$, or ${\lambda}_{i+2+x},{\lambda}_{i+3+x}$ for some $x\geq 0$.
- If ${\lambda}_{i-1}-1\not \gg{\lambda}_{i+2}$, this means that ${\lambda}_{i-1}-1,{\lambda}_{i+2}$ are consecutive for $\succ$, and [**Step 1** ]{}occurs there. By , we have that $({\lambda}_i+{\lambda}_{i+1}+1)\gg ({\lambda}_{i-1}+{\lambda}_{i+2}-1)$. Then, even if $({\lambda}_{i-1}+{\lambda}_{i+2}-1)$ crosses the primary parts $({\lambda}_j-1)\gg\cdots\gg{\lambda}_{i-2}-1$ after $i-j-1$ iterations of [**Step 2**]{}, by , we will still have that $$({\lambda}_i+{\lambda}_{i+1}+i-j)\gg({\lambda}_{i-1}+{\lambda}_{i+2}+i-j-2)\,\cdot$$
- If ${\lambda}_{i-1}-1 \gg{\lambda}_{i+2}$, then the next troublesome pair appears at ${\lambda}_{i+2+x},{\lambda}_{i+3+x}$ for some $x\geq 0$, and it forms the secondary part ${\lambda}_{i+2+x}+{\lambda}_{i+3+x}$. We also have $${\lambda}_i\succ{\lambda}_{i+1}\succ{\lambda}_{i+2}\gg \cdots \gg {\lambda}_{i+2+x}\succ{\lambda}_{i+3+x}\,\cdot$$ By , we can easily check that $${\lambda}_i\succ{\lambda}_{i+1}\succ{\lambda}_{i+2} \succeq {\lambda}_{i+2+x}+x\succ{\lambda}_{i+3+x}+x$$ so that, by , $$({\lambda}_i+{\lambda}_{i+1})\gg ({\lambda}_{i+2+x}+{\lambda}_{i+3+x}+2x)\,\cdot$$ This means by that, $$({\lambda}_i+{\lambda}_{i+1})\gg ({\lambda}_{i+2+x}+{\lambda}_{i+3+x}+x)$$ and, as soon as $x\geq 1$, by $$\label{must}
({\lambda}_i+{\lambda}_{i+1})\triangleright ({\lambda}_{i+2+x}+{\lambda}_{i+3+x}+x)\, \cdot$$ We then obtain that, even if the secondary part ${\lambda}_{i+2+x}+{\lambda}_{i+3+x}$ crosses, after $x+i-j$ iterations of [**Step 2**]{}, the primary parts $${\lambda}_j-1\gg\cdots\gg ({\lambda}_{i-1}-1)\gg {\lambda}_{i+2}\gg \cdots \gg {\lambda}_{i+1+x}\,,$$ we will still have $$({\lambda}_i+{\lambda}_{i+1}+i-j)\gg({\lambda}_{i+2+x}+{\lambda}_{i+3+x}+x+i-j)\,\cdot$$
However, as soon as $x\geq 1$, we directly have $$({\lambda}_i+{\lambda}_{i+1}+i-j)\triangleright({\lambda}_{i+2+x}+{\lambda}_{i+3+x}+x+i-j)\,\cdot$$ In that case, the pair $({\lambda}_i+{\lambda}_{i+1}+i-j,{\lambda}_{i+2+x}+{\lambda}_{i+3+x}+x+i-j)$ cannot have the form $(k_{cd},k_{ab})$ or $((k+1)_{ad},k_{bc})$. In order to have these patterns, we must necessarily have that the second troublesome pair is either $({\lambda}_{i-1}-1,{\lambda}_{i+2})$ or $({\lambda}_{i+2},{\lambda}_{i+3})$. In both cases, we can see that either both parts crossed the primary part $l_s$ to the right of the pattern, or they do not move backward, so that the lower half of the second part is greater than the primary part $l_s$ to the right of the pattern. In the first case, we have that $$\begin{aligned}
&(l+2)_s \not \gg{\lambda}_i+{\lambda}_{i+1}+i-j-1\\
\Longleftrightarrow\quad&(l+2)_s \not \succeq {\lambda}_i+{\lambda}_{i+1}+i-j \quad \text{by \eqref{Ordd}}\end{aligned}$$ and then ${\lambda}_i+{\lambda}_{i+1}+i-j\succ (l+2)_s$, so that the forbidden patterns in do not occur. In the second case, we check the different subcases: $$\begin{aligned}
(2k'_{cd},2k'_{ab},l_s)&\Longrightarrow k'_a\succ l_s\\
&\Longrightarrow k'-l\geq \chi(a\leq s)=1\\
&\Longrightarrow 2k'-l\geq l+2\geq 3 \\
&\Longrightarrow 2k'-(l+2)\geq 1\geq \chi(cd \leq s)\,,\end{aligned}$$ $$\begin{aligned}
((2k'+1)_{cd},(2k'+1)_{ab},l_s)&\Longrightarrow k'_b\succ l_s\\
&\Longrightarrow k'-l\geq \chi(b\leq s)\\
&\Longrightarrow 2k'+1-l\geq l+1+2\chi(b\leq s)\\
&\Longrightarrow 2k'+1-l\geq 2+2\chi(b\leq s)\\
&\Longrightarrow (2k'+1)-(l+2)\geq \chi(b\leq s)\geq \chi(cd\leq s)\quad\text{since}\quad b<cd\,\cdot \end{aligned}$$ $$\begin{aligned}
((2k'+2)_{ad},(2k'+1)_{bc},l_s)&\Longrightarrow k'_c\succ l_s\\
&\Longrightarrow k'-l\geq \chi(c\leq s)\\
&\Longrightarrow 2k'-l\geq l+2\chi(c\leq s)\\
&\Longrightarrow 2k'-l\geq 1+2\chi(c\leq s)\\
&\Longrightarrow (2k'+2)-(l+2)\geq 1+\chi(c\leq s)\geq \chi(ad\leq s)\,\cdot\end{aligned}$$ $$\begin{aligned}
((2k'+1)_{ad},2k'_{bc},l_s)&\Longrightarrow k'_b\succ l_s\\
&\Longrightarrow k'-l\geq \chi(b\leq s)\\
&\Longrightarrow 2k'+1-l\geq l+1+2\chi(b\leq s)\\
&\Longrightarrow 2k'+1-(l+2)\geq l-1+\chi(b\leq s)\geq 1-\chi(l=1)\chi(s=a)\\\end{aligned}$$ We can see that only $(3_{ad},2_{bc},1_a)$ fails the condition that $(l+2)_s$ be smaller than the first part ${\lambda}_i+{\lambda}_{i+1}+i-j$. Recall that the second part needs to be greater than $(l+1)_s$. By [**Lemma \[lem3\]**]{}, we then have the forbidden patterns $$((k+2)_{cd},(k+2)_{ab},k_c),((k+2)_{cd},(k+2)_{ab},k_d), ((k+2)_{ad},(k+1)_{bc},k_{a})\,,$$ with only $(3_{ad},2_{bc},1_a)$ allowed. The conditions of the proposition are satisfied after the second move from [**Step 2** ]{}to [**Step 1**]{}.\
\
By induction, [**Proposition \[pr1\]**]{} follows. Moreover, by , every single step is reversible by $\Psi$: applying $\Psi$ to the sequence $\gamma^{u+1},\mu^{u+1}$ yields, after the iterations of [**Step 1** ]{}and the splitting in [**Step 2**]{}, exactly the sequence $\gamma^u,\mu^u$ (with the last part of $\gamma^{u+1}$).
The fact that $\Phi({\mathcal{O}})\subset {\mathcal{E}}_1$ follows from [**Proposition \[pr1\]**]{} since $\mu^u$ strictly decreases in terms of number of parts and the process stops as soon as either $\mu^u$ has at most one part, or all its primary parts are well-ordered by $\gg$. And the reversibility implies that $\Psi\circ \Phi_{|{\mathcal{O}}}= Id_{{\mathcal{O}}}$.
Well-definedness of $\Psi$
--------------------------
In this subsection, we will show the following proposition.
\[pr2\] Let us consider any $\nu=\nu_1,\ldots,\nu_t\in {\mathcal{E}_1}$, and set $\gamma^0=\nu$, $\mu^0=0$. Then, in the process $\Psi$ on $\nu$, at the $u^{th}$ passage from [**Step 2** ]{}to [**Step 1**]{}, there exists a pair of partitions $\gamma^u,\mu^u\in {\mathcal{E}_1}\times {\mathcal{O}}$ such that the sequence obtained is $\gamma^u,\mu^u$. Moreover, if we denote by $l(\gamma^u)$ and $g(\mu^u)$ respectively the smallest part of $\gamma^u$ and the greatest part of $\mu^u$, we then have that
1. $l(\gamma^u)\in {\mathcal{P}}$,
2. $l(\gamma^u)$ and $g(\mu^u)$ are consecutive for $\succ$,
3. for any $u$, $\mu^u$ is the tail of the partition $\mu^{u+1}$ and the number of secondary parts of $\gamma^u$ decreases by one at each step.
If the pattern $(3_{ad},2_{bc},1_a)$ occurs in $\nu$, then these parts are the last ones, and applying $\Psi$, after the second passage we obtain at the tail of the partition the sequence $2_a,1_d,1_c,1_b,1_a$. Now suppose that this pattern does not occur in $\nu$, and let us consider the last secondary part $\nu_i$ of $\nu$.
- Suppose that [**Step 1** ]{}does not occur and we directly have [**Step 2**]{}. If there is a part $\nu_{i-1}$ to its left, and $(\nu_{i-1},\nu_i) \notin \{((k+1)_{ad},k_{cb}),(k_{cd},k_{ab})\}$, we then have $\nu_{i-1}\triangleright \nu_i$ and $$\begin{aligned}
\nu_{i-1}-\alpha(\nu_i)&= \nu_{i-1}-\nu_i+\beta(\nu_i)\\
&\geq 2\quad\quad (\text{by \eqref{Ordre} and the fact that}\,\,\beta(\nu_i)\geq 1)\,,\end{aligned}$$ so that $\nu_{i-1}\gg\alpha(\nu_i)$. In the case that $(\nu_{i-1},\nu_i) \in \{((k+1)_{ad},k_{cb}),(k_{cd},k_{ab})\}$, a quick check according to the parity of $k$ shows that we also have $\nu_{i-1}\gg\alpha(\nu_i)$. If we have the pattern $(\nu_{i-2},\nu_{i-1})\in\{((k+1)_{ad},k_{cb}),(k_{cd},k_{ab})\}$, then $\nu_i\preceq \nu_{i-2}-2$, and $$\begin{aligned}
\nu_{i-2}-\alpha(\nu_i)&= \nu_{i-2}-\nu_i+\beta(\nu_i)\\
&\geq 3\quad\quad \text{by \eqref{Ordre} and the fact that}\,\,\beta(\nu_i)\geq 1\,\,\end{aligned}$$ so that $\nu_{i-2}\succ(\alpha(\nu_i)+2)$. Note that by [**Lemma \[lem3\]**]{}, this implies that $\nu_{i-2},\nu_{i-1},(\alpha(\nu_i)+2)$ cannot be a forbidden pattern.\
- If $\nu_i$ crosses after iteration of [**Step 1** ]{}the primary parts $\nu_{i+1}\gg \cdots \gg \nu_{j}$, we then have $$\begin{aligned}
\nu_{i-1}&\gg\nu_{i+1}+1\gg\cdots \gg \nu_{j}+1\gg \alpha(\nu_i-j+i) \quad\text{(by \eqref{eo})}\label{orddd}\\
\nu_{i-1}&\gg\nu_{i+1}+1\gg\cdots \gg \nu_{j}+1\not\gg (\nu_i-j+i) \label{reverse2}\,\cdot\end{aligned}$$ In fact, by , if $(\nu_{i-1},\nu_i) \notin \{((k+1)_{ad},k_{cb}),(k_{cd},k_{ab})\}$, we necessarily have that $\nu_{i-1}\gg\nu_i\gg\nu_{i+1}$ so that $\nu_{i-1}\gg\nu_{i+1}+1$. If $(\nu_{i-1},\nu_i) \in \{((k+1)_{ad},k_{cb}),(k_{cd},k_{ab})\}$, since $\nu_{i-1}\succ(\nu_{i+1}+2)\in {\mathcal{P}}$, we necessarily have by that $\nu_{i-1}\gg(\nu_{i+1}+1)$.\
If we have the pattern $(\nu_{i-2},\nu_{i-1})\in\{((k+1)_{ad},k_{cb}),(k_{cd},k_{ab})\}$, then $\nu_i\preceq \nu_{i-2}-2$, and $$\nu_{i-2}\succeq \nu_i+2\succ \nu_{i+1}+3\,\cdot$$ So $\nu_{i-2}\succ\nu_{i+1}+3$, and the pattern $\nu_{i-2},\nu_{i-1},\nu_{i+1}+1$ is not forbidden.\
Finally, since $\nu_i\gg \nu_{i+1}\gg\cdots\gg \nu_j$ and $\nu_{i+1},\ldots,\nu_j\in {\mathcal{P}}$, we then have by that $\nu_i\gg \nu_j+j-i-1$, and this is equivalent by to $\nu_{j}+1\not\gg (\nu_i-j+i)$. This implies that all these iterations of [**Step 1** ]{}are reversible by [**Step 2** ]{}of $\Phi$.\
- In the case $j=t$, we have by and that $$\nu_i-t+i\succ \nu_t\,\cdot$$ If we suppose that $\nu_i-t+i$ has size $1$, then $\nu_t$ also has size $1$, with a color smaller than the color of $\nu_i$. But by and , we necessarily have that $\beta(\nu_i-t+i+1)$ has size $1$ and a color greater than the color of $\nu_i$. We then obtain by that $$\beta(\nu_i-t+i+1)\succ \nu_i-t+i\succ \nu_t\,,$$ so that we do not cross $\nu_i-t+i+1$ and $\nu_t$, contradicting our assumption. In any case, after crossing, the secondary part still has size greater than $1$, so that after splitting, its upper and lower halves stay in ${\mathcal{P}}$.\
- If we stop the iteration of [**Step 1** ]{}just before $\nu_{j+1}$, this means by that $$\nu_{i-1}\gg \nu_{i+1}+1\gg\cdots \gg \nu_{j}+1\gg\alpha(\nu_i-j+i)\succ \beta(\nu_i-j+i)\succ\nu_{j+1}\gg\cdots\gg\nu_t\,\cdot$$ We then set $$\begin{aligned}
\gamma^1 &= \nu_1\gg\cdots\gg\nu_{i-1}\gg \nu_{i+1}+1\gg \cdots \gg\nu_{j}+1\gg\alpha(\nu_i-j+i)\,,\\
\mu^1&=\beta(\nu_i-j+i)\succ\nu_{j+1}\succ\cdots\succ\nu_t\,,\end{aligned}$$ and we saw with all the different cases that the conditions of [**Proposition \[pr2\]**]{} are respected.\
Let us now consider the secondary part $\nu_{i-x}$ before $\nu_i$, for some $x\geq 1$. Then, by iteration of [**Step 1**]{}, it can never cross $\beta(\nu_i-j+i)$. In fact, suppose that it crosses all primary parts $\nu_{i-x+1}\gg\cdots \gg\nu_{i-1} \gg \nu_{i+1}+1\gg\cdots \gg \nu_{j}+1$. We then obtain $\nu_{i-x}-x+1+i-j$, and since $$\nu_{i-x}\gg\nu_{i-x+1}\gg\cdots\gg\nu_{i+1}\gg\nu_i,$$ and $\nu_{i-x+1},\ldots,\nu_{i+1}\in {\mathcal{P}}$, we have by that $\nu_{i-x}-x+1\gg \nu_i$, which is equivalent to $\nu_{i-x}-x+1+i-j\gg \nu_i-j+i$. We obtain by that $$\begin{aligned}
&\text{either}\quad\beta(\nu_{i-x}-x+1+i-j)\succ\alpha(\nu_i-j+i)\\
&\text{or}\quad\alpha(\nu_i-j+i)+1\gg\alpha(\nu_{i-x}-x+i-j)\succ\beta(\nu_{i-x}-x+i-j)\succ \beta(\nu_i-j+i)\,\cdot\end{aligned}$$ In any case, the splitting in [**Step 2** ]{}occurs before $\beta(\nu_i-j+i)$. We set then $$\begin{aligned}
\gamma^2 &= \nu_1\gg\cdots\gg\alpha(\nu_{i-x}-y)\,,\\
\mu^2&=\beta(\nu_{i-x}-y)\succ\cdots\succ \beta(\nu_i-j+i)\succ\nu_{j+1}\succ\cdots\succ\nu_t\,,\end{aligned}$$ where $y$ is the number of iterations of [**Step 1** ]{}before moving to [**Step 2**]{}, and by reasoning as before for the different cases, we can easily see that the conditions of [**Proposition \[pr2\]**]{} are respected. We obtain the result recursively. We also observe that the sequence $\gamma^u,\mu^u$ is exactly what we obtain by applying successively iteration of [**Step 1** ]{}and [**Step 2** ]{}of the transformation $\Phi$ on $\gamma^{u+1},\mu^{u+1}$.
By [**Proposition \[pr2\]**]{}, since the number of secondary parts decreases by one at each passage from [**Step 2** ]{}to [**Step 1**]{}, the process stops after exactly as many passages as there are secondary parts in $\nu$. The result is then of the form $\gamma^U,\mu^U\in {\mathcal{O}}$ with $\gamma^U$ well-ordered by $\gg$, and the last part of the first partition and the first part of the second partition are consecutive in terms of $\succ$. We then conclude that $\Psi({\mathcal{E}_1})\subset {\mathcal{O}}$. Since all the steps are reversible by $\Phi$, we also have $\Phi\circ\Psi_{|{\mathcal{E}_1}}=Id_{{\mathcal{E}_1}}$.
Bijective proof of [**Theorem \[th2\]**]{} {#sct5}
==========================================
In this section, we will describe a bijection for [**Theorem \[th2\]**]{}. For brevity, we refer to the partitions in [**Theorem \[th2\]**]{} as quaternary partitions.
From ${\mathcal{E}_1}$ to quaternary partitions
-----------------------------------------------
We consider the patterns $((k+1)_{ad},k_{bc}),(k_{cd},k_{ab})$ and we sum them as follows: $$\begin{aligned}
(k+1)_{ad}+k_{bc}&= (2k+1)_{abcd}\nonumber\\
k_{cd}+k_{ab}&= 2k_{abcd}\,\label{decomp}\,\cdot\end{aligned}$$
#### Let us now take a partition $\nu$ in ${\mathcal{E}_1}$
We then identify all the patterns $(M^i,m^i)\in \{((k+1)_{ad},k_{bc}),(k_{cd},k_{ab})\}$ and suppose that $$\nu = \nu_1,\ldots,\nu_x,M^1,m^1,\nu_{x+1},\ldots,\nu_y,M^2,m^2,\nu_{y+1},\ldots, M^t,m^t,\ldots,\nu_s\,\cdot$$ *As long as* we have a pattern $\nu_j,M^i,m^i$, we cross the parts by replacing them using $$\nu_j,M^i,m^i \longmapsto M^i+1,m^i+1,\nu_j-2\,\cdot$$ At the end of the process, we obtain a final sequence $$N^1,n^1,N^2,n^2,\ldots,N^t,n^t,\nu'_1,\ldots,\nu'_s\,\cdot$$ Finally, the associated pair of partitions is set to be $(K^1,\ldots,K^t),\nu'=(\nu'_1,\ldots,\nu'_s)$, where $K^i=N^i+n^i$ according to .\
To sum up the previous transformation: for each quaternary part $K^i$, obtained by summing the original pattern $M^i,m^i$, we add twice the number of remaining primary and secondary parts of $\nu$ lying to the left of the pattern that gave $K^i$, while we subtract from those remaining parts twice the number of quaternary parts arising from patterns to their right.\
\
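On sizes alone, the crossing step $\nu_j,M^i,m^i \longmapsto M^i+1,m^i+1,\nu_j-2$ visibly preserves the total. A minimal sketch (colors dropped; the function name is ours), checked against the first crossing of the running example:

```python
def cross(seq, j):
    """Move the pattern pair (seq[j+1], seq[j+2]) past the part seq[j]:
    sizes (v, M, m) become (M + 1, m + 1, v - 2)."""
    v, M, m = seq[j], seq[j + 1], seq[j + 2]
    return seq[:j] + [M + 1, m + 1, v - 2] + seq[j + 3:]

# 11_c, (10_{cd}, 10_{ab}) -> (11_{cd}, 11_{ab}), 9_c : total unchanged.
seq = [11, 10, 10, 6, 5, 3, 2, 1]
stepped = cross(seq, 0)
assert stepped[:3] == [11, 11, 9]
assert sum(stepped) == sum(seq) == 48
```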
With the example $11_c,\underline{10_{cd},10_{ab}},6_d,5_{ab},\underline{3_{ad},2_{bc}},1_a$, $$\begin{footnotesize}
\begin{array}{ccccccccccccc}\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
5_{ab}\\
3_{ad}\\
2_{bc}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_c\\
10_{cd},10_{ab}\\
6_d\\
5_{ab}\\
3_{ad}, 2_{bc}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
9_c\\
6_d\\
5_{ab}\\
3_{ad},2_{bc}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
9_c\\
6_d\\
4_{ad},3_{bc}\\
3_{ab}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
9_c\\
5_{ad},4_{bc}\\
4_d\\
3_{ab}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
6_{ad},5_{bc}\\
7_c\\
4_d\\
3_{ab}\\
1_a
\end{matrix}
\end{array}\,\cdot
\end{footnotesize}$$ we obtain $[(22_{abcd},11_{abcd}),(7_c,4_d,3_{ab},1_a)]$. We now proceed to show that the image of this mapping is indeed a quaternary partition. The inverse mapping will be presented in the next subsection.
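On the pair just computed, the properties established in the enumeration that follows can be verified directly (sizes only; a small sketch): the total size $48$ is preserved, consecutive quaternary parts differ by at least $4$, and, since $\nu'_s=1_a$ here, the smallest quaternary part is at least $2s+3$.

```python
quaternary = [22, 11]          # sizes of (22_{abcd}, 11_{abcd})
rest = [7, 4, 3, 1]            # sizes of nu' = (7_c, 4_d, 3_{ab}, 1_a)
s = len(rest)

assert sum(quaternary) + sum(rest) == 48                       # size preserved
assert all(a - b >= 4 for a, b in zip(quaternary, quaternary[1:]))
assert min(quaternary) >= 2 * s + 3                            # here 11 >= 11
```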
1. **Quaternary parts are well-ordered**. Let us consider two consecutive patterns $(M^j,m^j)=(k_p,l_q)$ and $(M^{j+1},m^{j+1})=(k'_{p'},l'_{q'})$. Since $\nu$ is well-ordered by $\gg$, we have by and that $$l_q\triangleright l^1_{p_1}\triangleright\cdots\triangleright l^i_{p_i}\triangleright k'_{p'}\,\cdot$$ By , we then have that $l_q\succ k'_{p'}+i+1$ so that $l-k'\geq i+1+\chi(q\leq p')$. Since by , $k-l=\chi(p\leq q)$ and $k'-l'=\chi(p'\leq q')$, we then have that $$\begin{aligned}
k+l-(k'+l')&= \chi(p\leq q)+\chi(p'\leq q')+2(l-k')\\
&\geq \chi(p\leq q)+\chi(p'\leq q')+2\chi(q\leq p')+2i+2
\end{aligned}$$ and we obtain that $$\begin{aligned}
\chi(cd\leq ab)+&\chi(cd\leq ab)+2\chi(ab\leq cd)= 2\\
\chi(cd\leq ab)+&\chi(ad\leq bc)+2\chi(ab\leq ad)= 3\\
\chi(ad\leq bc)+&\chi(cd\leq ab)+2\chi(bc\leq cd)= 3\\
\chi(ad\leq bc)+&\chi(ad\leq bc)+2\chi(bc\leq ad)= 2\,,
\end{aligned}$$ so that $k+l-(k'+l')\geq 4+2i$. We will then have, after adding twice the remaining primary and secondary elements to their left, that the difference between two consecutive quaternary parts will be at least $4$.\
2. **The partition $\nu'$ is in ${\mathcal{E}_2}$**. Let us consider two consecutive elements $\nu_x=k_p, \nu_{x+1}=l_q$. We then have for consecutive patterns $M^u,m^u$ in between $k_p$ and $l_q$ that $$k_p\triangleright M^i\gg m^i \gg \cdots \gg M^j\gg m^j\triangleright l_q\,\cdot$$ For the case $(M^j, m^j, l_q) \neq (3_{ad},2_{bc},1_a)$, since by [**Lemma \[lem3\]**]{}, $M^{u}\succeq M^{u+1}+2$, $M^j\succeq l_q+2$, and by , we have that $k_p \succ M^i+1$, and then $$k_p\succ 1+2(j-i+1)+l_q \Longrightarrow k_p \triangleright 2(j-i+1)+l_q\,\cdot$$ For the case $(M^j, m^j, l_q) =(3_{ad},2_{bc},1_a)$, we obtain that $$k_p-2(j-i+1)+1\succ 3_{ad}$$ and this means that $k_p-2(j-i+1)+1\succeq 3_a $ so that $k_p-2(j-i+1)\succeq 2_a\triangleright 1_a$ .\
\
In any case, $k_p \triangleright 2(j-i+1)+l_q$, and this implies that after the subtraction of twice the number of the quaternary parts obtained to their right, these parts will be well-ordered by $\triangleright$.\
3. **The minimal quaternary part is well-bounded**. Let us first suppose that the tail of $\nu$ consists only of patterns $M^u,m^u$. We then have that $$\nu_s\triangleright M^i\gg m^i \gg \cdots \gg M^t\gg m^t$$ and, by [**Lemma \[lem3\]**]{} and , $\nu_s-2(t-i+1)+1\succeq M^t\succeq 2_{cd}$, so that $\nu'_s=\nu_s-2(t-i+1)\succeq 1_{cd} \succ 1_a$. This means that $1_a\notin \nu'$. We also obtain that $K^t = M^t+m^t+2s \geq 2s+4$.\
\
Now suppose that the tail of $\nu$ has the form $$l_q\triangleright \nu_u\triangleright\cdots\triangleright \nu_s\,,$$ with $M^t,m^t = k_p,l_q$. By , we obtain that $l_q\succ \nu_s+s-u+1$.
- If $\nu_s=1_a$, we then have $$\begin{aligned}
k+l &= \chi(p\leq q)+2l\\
&\geq \chi(p\leq q)+ 2(s-u+2+\chi(q\leq a))\\
&= 2(s-u+1)+ 2+\chi(p\leq q)+2\chi(q\leq a)\,,\end{aligned}$$ and with $(p,q)\in \{(ad,bc),(cd,ab)\}$ we have $$\begin{aligned}
\chi(ad\leq bc )&+2\chi(bc\leq a) = 1\\
\chi(cd\leq ab)&+2\chi(ab\leq a)=2\end{aligned}$$ so that $k+l \geq 2(s-u+1)+ 3$. Then after the addition of $2(u-1)$ for the remaining primary and secondary parts of $\nu$ to the left of the pattern $(M^t,m^t)$, we obtain that the smallest quaternary part is at least $2s+3$. Note that $\nu'_s=\nu_s=1_a$.\
- When $\nu_s=h_r\neq 1_a$, we obtain that $$\begin{aligned}
k+l&\geq \chi(p\leq q)+ 2(s-u+1+h+\chi(q\leq r))\\
&= 2(s-u+1)+ 2h+\chi(p\leq q)+2\chi(q\leq r)\,,\end{aligned}$$ so that if $h\geq 2$, then $k+l\geq 2(s-u+1)+4$. If not, $h=1$, and since there is no secondary part of length $1$, we necessarily have that $r\geq b$, so that $\chi(q\leq r)=1$ whenever $q\in\{ab,bc\}$. We thus obtain $k+l\geq 2(s-u+1)+4$. We then conclude that for $\nu_s\neq 1_a$, the smallest quaternary part is at least $2s+4$.
In any case, we have that the smallest quaternary part is at least $2s+4-\chi(1_a\in \nu')$.
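The two bounds just derived can be checked numerically on the worked example above (a quick Python sketch; colour subscripts are dropped and only the part sizes are kept):

```python
# Bound checks on the worked example [(22_abcd, 11_abcd), (7_c, 4_d, 3_ab, 1_a)];
# colour subscripts are dropped, keeping only the numeric sizes.
quaternary = [22, 11]      # the quaternary parts K^1, ..., K^t
nu_prime = [7, 4, 3, 1]    # the remaining parts nu'_1, ..., nu'_s
s = len(nu_prime)
chi_1a = 1                 # chi(1_a in nu'): the smallest part here is 1_a

# Two consecutive quaternary parts differ by at least 4 (here 22 - 11 = 11).
assert all(a - b >= 4 for a, b in zip(quaternary, quaternary[1:]))

# The smallest quaternary part is at least 2s + 4 - chi(1_a in nu'):
# here 11 >= 2*4 + 4 - 1 = 11, so the bound is attained by this example.
assert min(quaternary) >= 2 * s + 4 - chi_1a
```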
From quaternary partitions to ${\mathcal{E}_1}$
-----------------------------------------------
Recall by that $K_{abcd}$ splits as follows: $$\begin{aligned}
(k+1)_{ad}+k_{bc}&= (2k+1)_{abcd}\\
k_{cd}+k_{ab}&= 2k_{abcd}\,\end{aligned}$$ Let us then consider partitions $(K^1,\ldots,K^t)$ and $\nu=(\nu_1,\ldots,\nu_s)\in {\mathcal{E}_2}$, with quaternary part $K^u$ such that $K^t\geq 4+2s-\chi(1_a\in \nu)$ and $K^u-K^{u+1}\geq 4$. We also set $K^u = (k^u,l^u)$ the decomposotion according to . We then proceed as follows by beginning with $K^t$ and $\nu_1$,
- If we have not yet encountered $K^{u+1}=(k^{u+1},l^{u+1})$, and $\nu_i\neq 1_a$ and $\nu_i+2\triangleright k^u-1$, then replace $$\begin{aligned}
\nu_i&\longmapsto \nu_i+2\\
(k^u,l^u)&\longmapsto (k^u-1,l^u-1)\end{aligned}$$ and move to $i+1$ and redo [**Step 1**]{}. Otherwise, move to [**Step 2**]{}.
- If we encounter $K^{u+1}= k^{u+1}\gg l^{u+1}$, then split $(k^u,l^u)$ into $k^u\gg l^u$. If not, it means that we have met $\nu_i$ such that $\nu_i+2\not\triangleright\,\,\, k^u-1$. Then we split $k^u\gg l^u$. Since we have $\nu_i+2\not\triangleright \,\,k^u-1$, which is equivalent by to $ k^u\succeq\nu_i+2$, by [**Lemma \[lem3\]**]{}, this is exactly the condition to avoid the forbidden patterns, with $k^u\gg l^u\triangleright \nu_i$.\
We can now move to [**Step 1** ]{}with $u-1$ and $i=1$.
With the example $[(22_{abcd},11_{abcd}),(7_c,4_d,3_{ab},1_a)]$, we obtain $$\begin{footnotesize}
\begin{array}{ccccccccccccccccc}
\begin{matrix}
11_{cd},11_{ab}\\
6_{ad},5_{bc}\\
7_c\\
4_d\\
3_{ab}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
9_c\\
5_{ad},4_{bc}\\
4_d\\
3_{ab}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
9_c\\
6_d\\
4_{ad},3_{bc}\\
3_{ab}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
9_c\\
6_d\\
5_{ab}\\
3_{ad},2_{bc}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_{cd},11_{ab}\\
9_c\\
6_d\\
5_{ab}\\
3_{ad}\\
2_{bc}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_c\\
10_{cd},10_{ab}\\
6_d\\
5_{ab}\\
3_{ad}\\
2_{bc}\\
1_a
\end{matrix}
&\mapsto&
\begin{matrix}
11_c\\
10_{cd}\\
10_{ab}\\
6_d\\
5_{ab}\\
3_{ad}\\
2_{bc}\\
1_a
\end{matrix}
\end{array}\,\cdot
\end{footnotesize}$$ It is easy to check that when two quaternary parts meet in [**Step 2**]{}, we will always have $l^u\gg k^{u+1}$, since this is exactly the condition given the minimal difference $K^u-K^{u+1}\geq 4$ and the fact that they crossed the same number of $\nu_i$. We can also check that even if the minimal part crossed $\nu_1,\ldots,\nu_s\neq 1_a$, we will still have at the end $K^t\geq 4$, and for $\nu_s=1_a$, $K^t\geq 5$. We see with that the length of $m^t$ is at least equal to $2$, and for the case $\nu_s=1_a$, $m^t$ is at least equal to $2_{bc}\gg 1_a$. The partition obtained is then in ${\mathcal{E}_1}$.
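As a sanity check on the worked example (a Python sketch; colour subscripts are dropped), both maps only split or merge parts and transfer units between them, so the total size of the partition is preserved, and the splitting of the quaternary parts follows the parity rule recalled at the start of this subsection:

```python
# The forward map (previous subsection) sends
#   11_c, 10_cd, 10_ab, 6_d, 5_ab, 3_ad, 2_bc, 1_a
# to [(22_abcd, 11_abcd), (7_c, 4_d, 3_ab, 1_a)], and the inverse map
# above recovers the original partition.  Colour subscripts dropped:
start = [11, 10, 10, 6, 5, 3, 2, 1]   # the partition in E_1
image = [22, 11] + [7, 4, 3, 1]       # quaternary parts + remaining parts

# Splits/merges and unit transfers preserve the total size.
assert sum(start) == sum(image) == 48

# Splitting rule: 2k_abcd -> k_cd + k_ab and (2k+1)_abcd -> (k+1)_ad + k_bc.
def split(K):
    k, r = divmod(K, 2)
    return (k + r, k)

assert split(22) == (11, 11)   # 22_abcd -> 11_cd + 11_ab
assert split(11) == (6, 5)     # 11_abcd -> 6_ad + 5_bc
```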
[999]{}
K. ALLADI, G.E. ANDREWS and A. BERKOVICH, *A new four parameter $q$-series identity and its partition implications*, Invent. Math. **153** (2003), 231–260.
K. ALLADI, G.E. ANDREWS and B. GORDON, *Generalizations and refinements of a partition theorem of Göllnitz*, J. Reine Angew. Math. **460** (1995), 165–188.
K. ALLADI and B. GORDON, *Generalization of Schur’s partition theorem*, Manuscripta Math. **79** (1993), 113–126.
G.E. ANDREWS, *A new generalization of Schur’s second partition theorem*, Acta Arith. **14** (1968), 429–434.
G.E. ANDREWS, *On a partition theorem of Göllnitz and related formula*, J. Reine Angew. Math. **236** (1969), 37–42.
D. BRESSOUD, *A combinatorial proof of Schur’s 1926 partition theorem*, Proc. Amer. Math. Soc. **79** (1980), 338–340.
H. GÖLLNITZ, *Partitionen mit Differenzenbedingungen*, J. Reine Angew. Math. **225** (1967), 154–190.
PADMAVATHAMMA, M. RUDY SALESTINA and S.R. SUDARSHAN, *Combinatorial proof of the Göllnitz’s theorem on partitions*, Adv. Stud. Contemp. Math. **8** (2004), no. 1, 47–54.
I. SCHUR, *Zur additiven Zahlentheorie*, Sitzungsberichte der Preussischen Akademie der Wissenschaften (1926), 488–495.
J.Y.J. ZHAO, *A bijective proof of the Alladi-Andrews-Gordon partition theorem*, Electron. J. Combin. **22** (2015), no. 1, Paper 1.68.
|
---
abstract: 'Given a flat metric one may generate a local Hamiltonian structure via the fundamental result of Dubrovin and Novikov. More generally, a flat pencil of metrics will generate a local bi-Hamiltonian structure, and with additional quasi-homogeneity conditions one obtains the structure of a Frobenius manifold. With appropriate curvature conditions one may define a curved pencil of compatible metrics and these give rise to an associated non-local bi-Hamiltonian structure. Specific examples include the $F$-manifolds of Hertling and Manin equipped with an invariant metric. In this paper the geometry supporting such compatible metrics is studied and interpreted in terms of a multiplication on the cotangent bundle. With additional quasi-homogeneity assumptions one arrives at a so-called weak $\F$-manifold - a curved version of a Frobenius manifold (which is not, in general, an $F$-manifold). A submanifold theory is also developed.'
author:
- 'Liana David and Ian A.B. Strachan'
title: 'Compatible metrics on a manifold and non-local bi-Hamiltonian structures'
---
Introduction
============
Let $M$ be a smooth manifold. The space of smooth vector fields on $M$ will be denoted ${\mathcal X}(M)$ and the space of smooth $1$-forms on $M$ will be denoted ${\mathcal E}^{1}(M).$ If $g$ is a (pseudo-Riemannian) metric on $M$, we shall denote by $g^{*}$ the induced metric on $T^{*}M.$ For a vector field $X$, $g(X)$ will denote the $1$-form corresponding to $X$ and for a $1$-form $\alpha$, $g^{*}\alpha $ will denote the vector field corresponding to $\alpha$ (via the isomorphism defined by $g$ between $TM$ and $T^{*}M$).
In this paper we will study the geometry induced by two metrics $g$ and $\tilde{g}$ on $M$. Unless otherwise stated, we will always denote by $\nabla$, $R$ ($\tilde{\nabla}$, $\tilde{R}$) the Levi-Civita connection and the curvature tensor of $g$ ($\tilde{g}$ respectively). For every constant $\lambda$ let $g_{\lambda}^{*}:=g^{*}+\lambda\tilde{g}^{*}$, which, we will assume, will always be non-degenerate. The Levi-Civita connection and curvature tensor of $g_{\lambda}$ will be denoted by $\nabla^{\lambda}$ and $R^{\lambda}$ respectively.
\[def1\][@mok]. The metrics $g$ and $\tilde{g}$ are almost compatible if the relation $$\label{defalmost}
g_{\lambda}^{*}\nabla^{\lambda}_{X}\alpha = g^{*}\nabla_{X}\alpha
+\lambda\tilde{g}^{*}\tilde{\nabla}_{X}\alpha$$ holds, for every $X\in{\mathcal X}(M)$, $\alpha\in{\mathcal E}
^{1}(M)$ and $\lambda$ constant.
The metrics $g$ and $\tilde{g}$ are compatible if they are almost compatible and moreover the relation $$g_{\lambda}^{*}(R^{\lambda}_{X,Y}\alpha )= g^{*}(R_{X,Y}\alpha
)+\lambda \tilde{g}^{*}(\tilde{R}_{X,Y}\alpha )$$ holds, for every $\alpha\in{\mathcal E}^{1}(M)$, $X,Y\in{\mathcal
X}(M)$ and $\lambda$ constant.
If $R^\lambda=0$ for all $\lambda$ then $g$, $\tilde{g}$ are said to form a flat-pencil of metrics.
The motivation for this definition comes from the theory of bi-Hamiltonian structures for equations of hydrodynamic type, i.e. for $(1+1)$-dimensional evolution equations $$\frac{\partial u^i}{\partial T} = M^i_j[u^r(X,T)] \frac{\partial
u^j}{\partial X}\,.$$ The foundational result in this area is due to Dubrovin and Novikov [@DN]:
\[DNthm\] Given two functionals of hydrodynamic type (i.e. depending only on the fields $\{u^i\}$ and not their derivatives) $$F=\int_{S^1} f(u)\,dX\,,\quad\quad G=\int_{S^1} g(u)\,dX$$ the bracket $$\{F,G\} = \int_{S^1} \frac{\delta F}{\delta u^i} \left[ g^{ij}
\frac{d~}{dX} - g^{is} \Gamma_{sk}^j u^k_X \right] \frac{\delta
G}{\delta u^j}\,\,dX$$ defines (in the non-degenerate case $\det[g^{ij}]\neq 0$) a Hamiltonian structure if and only if
1. $g^{ij}$ is symmetric, and so defines a (pseudo)-Riemannian metric;

2. $\Gamma_{ij}^k$ are the Christoffel symbols of the Levi-Civita connection of $g\,;$

3. the curvature tensor of $g$ is identically zero.
Such a Hamiltonian structure is said to be of Dubrovin/Novikov type.
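For a single field ($n=1$) the role of the conditions in Theorem \[DNthm\] can be illustrated directly: with $g^{11}=g(u)$, the bracket operator reduces to $A = g\,\frac{d}{dX} + c(u)\,u_X$, the Levi-Civita condition fixes $c=\tfrac{1}{2}g'(u)$, and skew-symmetry then follows because $P\,A(Q)+Q\,A(P)$ is an exact $X$-derivative, which integrates to zero over $S^1$ (the curvature condition is vacuous for $n=1$). A symbolic sketch, with a concrete $g$ of our own choosing:

```python
import sympy as sp

# n = 1 sketch: with g^{11} = g(u), the Hamiltonian operator is
#   A = g(u) d/dX + c(u) u_X ,
# and the Levi-Civita condition amounts to c = g'(u)/2; skew-symmetry
# then holds because P*A(Q) + Q*A(P) = d/dX (g P Q) is exact, so it
# integrates to zero over S^1.  We take g(u) = u**2 + 1 for concreteness.
X = sp.symbols('X')
u, P, Q = (sp.Function(n)(X) for n in ('u', 'P', 'Q'))
g = u**2 + 1
gp = sp.diff(g, u)                     # g'(u) = 2u

def A(phi, c):
    """Apply the operator g d/dX + c u_X to phi."""
    return g * sp.diff(phi, X) + c * sp.diff(u, X) * phi

# With c = g'/2 the combination is the total derivative d/dX (g P Q):
good = P * A(Q, gp / 2) + Q * A(P, gp / 2) - sp.diff(g * P * Q, X)
assert sp.simplify(good) == 0

# A wrong Christoffel term (c = g') leaves a non-exact remainder:
bad = P * A(Q, gp) + Q * A(P, gp) - sp.diff(g * P * Q, X)
assert sp.simplify(bad) != 0
```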
The concept of a bi-Hamiltonian structure was introduced by Magri [@Magri]. Given two Hamiltonian structures $\{,\}_1$ and $\{,\}_2$ then one may define a new bracket $$\{,\}_\lambda =\{,\}_1+\lambda \{,\}_2\,.$$
If $\{,\}_\lambda$ is a Hamiltonian structure for all $\lambda$ then the brackets $\{ ,\}_{1}$ and $\{ ,\}_{2}$ define a bi-Hamiltonian structure.
It follows immediately from the above definitions and Theorem \[DNthm\] that one has a bi-Hamiltonian structure of Dubrovin/Novikov type if and only if the corresponding metrics form a flat-pencil. The existence of such a pencil on the manifold $M$ results in a very rich geometric structure and leads, with various extra conditions, to $M$ being endowed with the structure of a Frobenius manifold [@dubrovin].
[@dubrovin]. $M$ is a Frobenius manifold if a structure of a Frobenius algebra (i.e. a commutative, associative algebra with multiplication denoted by $\cdot\,,$ a unity element $e$ and an inner product $<,>$ satisfying the invariance condition $<a\cdot b,c>=<a,b\cdot c>$) is specified on any tangent plane $T_pM$ at any point $p\in M$ smoothly depending on the point such that:
- The invariant metric ${\tilde{g}}=<,>$ is a flat metric on $M\,;$
- The unity vector field $e$ is covariantly constant with respect to the Levi-Civita connection ${\tilde\nabla}$ for the metric ${\tilde{g}}$ $${\tilde\nabla}e=0\,;$$
- The symmetric $3$-tensor $c(X,Y,Z)={\tilde g}(X\cdot Y,Z)$ be such that the tensor $$({\tilde{\nabla}}_W c)(X,Y,Z)$$ is totally symmetric;
- A vector field $E$ - the Euler vector field - must be determined on $M$ such that $${\tilde\nabla}({\tilde\nabla}E)=0$$ and that the corresponding one-parameter group of diffeomorphisms acts by conformal transformations of the metric and by rescalings on the Frobenius algebras $T_pM\,.$
Generalizations of Dubrovin/Novikov structures were introduced by Ferapontov [@F]. Such structures (originally obtained by applying the Dirac theory of constrained dynamical systems) are of the form $$\{F,G\} = \int_{S^1} \frac{\delta F}{\delta u^i} \left[ g^{ij}
\frac{d~}{dX} - g^{is} \Gamma_{sk}^j u^k_X+ \sum_{\alpha}
w^i_{{\alpha}q} u^q_X \left( \nabla^\perp\right)^{-1}
w^j_{{\alpha}q}u^q_X \right] \frac{\delta G}{\delta u^j}\,\,dX$$ where $(g,\Gamma,w)$ must satisfy certain geometric conditions, the crucial difference being the presence of curvature. Here the $w$ may be interpreted as Weingarten maps and $\nabla^\perp$ as a normal connection. Bi-Hamiltonian structures may be similarly defined. Definition \[def1\], first introduced by Mokhov [@mok], ensures that a compatible pair of metrics will define a bi-Hamiltonian structure of this generalized type, usually called a non-local bi-Hamiltonian structure. No further mention will be made in this paper of bi-Hamiltonian structures, though this was one of the original motivations to study the geometry of compatible metrics.
The aim of this paper is to study the geometric structures on a manifold endowed with two compatible metrics, and conversely, to study the geometric conditions required for two metrics to be compatible. The constructions and results will mirror those in [@dubrovin], but it will turn out that many of the results require only almost compatibility or compatibility and not flatness.
The rest of the paper is laid out as follows. In Section 2 the condition on the pair $(g,\tilde{g})$ required for almost compatibility is derived. This condition, the vanishing of the Nijenhuis tensor constructed from the pair, appeared in [@mok]. The proof given here is shorter and coordinate free. It is included both for completeness and to fix the notions used in later sections. Section 3 contains the central result of the paper: the conditions required for an almost compatible pair of metrics to be compatible. These conditions are interpreted in terms of an algebraic structure on the cotangent bundle in Section 4. Again, the ideas follow Dubrovin [@dubrovin], but the algebraic structure comes from compatibility, not the flatness of the pencil. The concept of an $\F$-manifold and a weak $\F$-manifold are introduced in Section 5. With this the connection between (weak) quasi-homogeneous pencils of metrics and (weak) $\F$-manifolds can be made precise. Curvature properties are studied in Section 6. In particular, when both metrics are flat one recovers the results of [@dubrovin].
$F$-manifolds were introduced by Hertling and Manin [@HM]. An application of our results is that any $F$-manifold (with Euler field), equipped with a (not necessarily flat) invariant metric $${\tilde{g}}(a\cdot b,c)={\tilde{g}}(a,b\cdot c),$$ will generate a pencil of compatible metrics. Thus large classes of examples may be derived from singularity theory. However, a weaker notion is sufficient to ensure the existence of a pencil of compatible metrics. It was shown [@hert] that the $F$-manifold condition is related to the total symmetry of the tensor ${\tilde\nabla}(\cdot).$ A weak $\F$-manifold, which ensures the existence of compatible metrics and hence of non-local bi-Hamiltonian structures, requires only that the tensor ${\tilde\nabla}(\cdot)(X,Y,Z,E)$ is totally symmetric in $X$, $Y$ and $Z$, where $E$ is the Euler vector field. Thus all $F$-manifolds with compatible metrics are weak $\F$-manifolds but not vice-versa. The different fount is used to denote the fact that the definition of a weak $\F$-manifold includes a metric while the definition of an $F$-manifold is metric independent. Our results from Sections 5 and 6 can be summarized in Table 1, where the vertical arrows denote $1:1$ correspondences.
$$\begin{array}{ccccccc}
\begin{array}{c} {\rm Frobenius} \\ {\rm manifold} \end{array} &\hookrightarrow& \begin{array}{c} \F{\rm -manifold~with} \\ {\tilde{\nabla}}(e)=0\,,\ \mathcal{L}_E(\cdot)=\cdot\end{array} &\hookrightarrow& \begin{array}{c} {\rm weak~}\F{\rm -manifold~with} \\ {\tilde{\nabla}}(e)=0\,,\ \mathcal{L}_E(\cdot)=\cdot\end{array} &\hookrightarrow& {\rm weak~}\F{\rm -manifold}\\
\updownarrow && \updownarrow && \updownarrow && \updownarrow\\
\begin{array}{c} {\rm quasihomogeneous} \\ {\rm flat~pencil}\end{array} &\hookrightarrow& \begin{array}{c} {\rm quasihomogeneous} \\ {\rm pencil~with} \\ {\rm curvature~condition}\end{array} &\hookrightarrow& \begin{array}{c} {\rm quasihomogeneous} \\ {\rm pencil} \end{array} &\hookrightarrow& \begin{array}{c} {\rm weak} \\ {\rm quasihomogeneous} \\ {\rm pencil}\end{array}
\end{array}$$

**Table 1.**
The origin of this paper was one of the authors’ work on the induced structures on submanifolds of Frobenius manifolds [@ian; @ian2]. It is natural to consider conditions for metrics on a submanifold, induced from a compatible pencil of metrics in the ambient manifold, to be almost compatible and compatible. Such questions are studied in Sections 7 and 8.
The Appendix contains a short proof that, in the semi-simple case, almost compatibility implies compatibility, a result originally obtained in [@mok]. Again, it is included here for completeness. Such a result is of interest in the study of semi-Hamiltonian hydrodynamic systems.
Various related results have already appeared in the literature, but with various additional assumptions, such as semi-simplicity or flatness of at least one of the metrics. Such distinguished additional structures simplify many of the calculations. Here no such simplifying assumptions are made. Finally, it should be straightforward to extend these results to the almost Frobenius structures introduced recently by Dubrovin [@dubrovin2] and studied further by Manin [@manin].
Almost compatible metrics
=========================
The following Theorem has been proved in [@mok]. We give a new, shorter proof, using a coordinate-free argument.
\[ac\] The metrics $g$ and $\tilde{g}$ are almost compatible if and only if the $(2,1)$-tensor $$N_{A}(X,Y):=-[A(X),A(Y)]+A[A(X),Y]+A[X,A(Y)]-A^{2}[X,Y]$$ (with $A:TM\rightarrow TM$ defined by $A:=\tilde{g}^{*}g$) vanishes identically.
We recall that the Levi-Civita connection $\nabla$ of a metric $g$ on $M$ is determined on $TM$ by the Koszul formula: for every $X,
Y,Z\in{\mathcal X}(M)$, $$\begin{aligned}
g(\nabla_{Y}X,Z)&=\frac{1}{2}\{X(g(Y,Z))+Y(g(X,Z))-Z(g(X,Y))\\
&-g([X,Z],Y)-g([Y,Z],X)-g([X,Y],Z)\}.\\\end{aligned}$$ Let $X:=g^{*}\alpha$ and $Z:=g^{*}\gamma$. Then $X(g(Y,Z))=g^{*}(\alpha ,d(i_{Y}\gamma )),\quad
Z(g(X,Y))=g^{*}(\gamma ,d(i_{Y}\alpha ))\,$ and $$\begin{aligned}
g([Y,Z],X)&=-g^{*}\left( L_{Y}(\alpha ),\gamma\right)+Y\left(g^{*}(\alpha ,\gamma )\right)\\
g([X,Y],Z)&=g^{*}\left( L_{Y}(\gamma ),\alpha\right)-
Y\left(g^{*}(\alpha ,\gamma )\right).\end{aligned}$$ We deduce that the metrics $g$ and $\tilde{g}$ are almost compatible if and only if $$\label{almostc}
g_{\lambda}([X_{\lambda},Z_{\lambda}],Y)=g([X,Z],Y)+\lambda\tilde{g}([\tilde{X},\tilde{Z}],Y)$$ holds, where $$g_{\lambda}^{*}\alpha =X_{\lambda};\quad g^{*}\alpha =X; \quad
\tilde{g}^{*}\alpha =\tilde{X}$$ and $$g_{\lambda}^{*}\gamma =Z_{\lambda};\quad g^{*}\gamma =Z; \quad
\tilde{g}^{*}\gamma =\tilde{Z}.$$ Since $g_{\lambda}^{*}=g^{*}+\lambda\tilde{g}^{*}$, $X_{\lambda}=X+\lambda\tilde{X}$ and $Z_{\lambda}=Z+\lambda\tilde{Z}.$ Relation (\[almostc\]) becomes equivalent to $$g_{\lambda}([X+\lambda\tilde{X},Z+\lambda\tilde{Z}])=
g([X,Z])+\lambda\tilde{g}([\tilde{X},\tilde{Z}]).$$ Note that $\tilde{X}=A(X)$ and $\tilde{Z}=A(Z).$ Applying $g_{\lambda}^{*}$ to both terms of the above equality and identifying the coefficients of $\lambda$ we easily obtain the conclusion.
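The vanishing condition of Theorem \[ac\] can be explored symbolically in coordinates. A small sketch (the endomorphism fields $A$ chosen below are ours, purely for illustration): $N_A$ vanishes identically for a conformal $A=f\,\mathrm{Id}$, but not for a generic diagonal $A$.

```python
import sympy as sp

# Coordinate sketch of N_A(X,Y) = -[AX,AY] + A[AX,Y] + A[X,AY] - A^2[X,Y]
# on R^2; the endomorphism fields A below are illustrative choices.
x, y = sp.symbols('x y')
coords = [x, y]

def bracket(Xv, Yv):
    """Lie bracket of vector fields given as component lists."""
    return [sum(Xv[j] * sp.diff(Yv[i], coords[j])
                - Yv[j] * sp.diff(Xv[i], coords[j]) for j in range(2))
            for i in range(2)]

def act(A, Xv):
    """Apply the endomorphism (matrix) A to a vector field."""
    return list(A * sp.Matrix(Xv))

def nijenhuis(A, Xv, Yv):
    AX, AY = act(A, Xv), act(A, Yv)
    t0, t1 = bracket(AX, AY), act(A, bracket(AX, Yv))
    t2, t3 = act(A, bracket(Xv, AY)), act(A * A, bracket(Xv, Yv))
    return [sp.simplify(-t0[i] + t1[i] + t2[i] - t3[i]) for i in range(2)]

ex, ey = [1, 0], [0, 1]        # the coordinate vector fields
f = sp.Function('f')(x, y)

# A conformal endomorphism A = f * Id has vanishing Nijenhuis tensor,
assert nijenhuis(f * sp.eye(2), ex, ey) == [0, 0]
# while a generic diagonal A = diag(y, x) does not:
assert nijenhuis(sp.diag(y, x), ex, ey) == [x - y, x - y]
```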
We shall use the following Proposition in our characterization of compatible metrics.
\[au\] Suppose the metrics $g$ and $\tilde{g}$ are almost compatible. Then for every $\alpha,\gamma\in{\mathcal E}^{1}(M)$, the relation $$\label{aditional}
g^{*}(\tilde{\nabla}_{\tilde{g}^{*}\gamma}\alpha
-\nabla_{\tilde{g}^{*}\gamma}\alpha ) =
\tilde{g}^{*}(\tilde{\nabla}_{{g}^{*}\gamma}\alpha
-\nabla_{g^{*}\gamma}\alpha )$$ holds.
The Levi-Civita connection $\nabla$ on $T^{*}M$ is determined by the formula (see also the proof of Theorem \[ac\]) $$2g^{*}(\nabla_{Y}\alpha ,\beta )=-g^{*}\left( i_{Y}d\beta
,\alpha\right) +g^{*}\left(i_{Y}d\alpha ,\beta\right)
+Yg^{*}(\alpha ,\beta )-g([g^{*}\alpha ,g^{*}\beta ],Y).$$ Let $A:=\tilde{g}^{*}g$ and define $A^{*}(\alpha )(X)=\alpha (AX)$ for every $\alpha\in{\mathcal E}^{1}(M)$ and $X\in{\mathcal
X}(M)$. Then $$\label{usor}
\tilde{g}^{*}(\alpha ,\beta )=g^{*}(\alpha ,A^{*}\beta )$$ and the Koszul formula for $\tilde{g}$ becomes $$2g^{*}(\tilde{\nabla}_{Y}\alpha ,A^{*}\beta )=-\tilde{g}^{*}\left(
i_{Y}d\beta ,\alpha\right) +\tilde{g}^{*}\left(i_{Y}d\alpha
,\beta\right) +Y\tilde{g}^{*}(\alpha ,\beta
)-\tilde{g}([\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta ],Y).$$ It is now easy to see that $$\begin{aligned}
2g^{*}(\nabla_{Y}\alpha -\tilde{\nabla}_{Y}\alpha ,\beta )&=
-g^{*}(i_{Y}d\beta ,\alpha )+g^{*}(i_{Y}d(A^{-1*}\beta
),A^{*}\alpha) -g([g^{*}\alpha ,g^{*}\beta ],Y)\\
&+\tilde{g}([\tilde{g}^{*}\alpha ,\tilde{g}^{*}A^{-1*}\beta ],Y)\end{aligned}$$ and that $$\begin{aligned}
2\tilde{g}^{*}(\nabla_{Y}\alpha -\tilde{\nabla}_{Y}\alpha ,\beta
)&= -g^{*}(i_{Y}d(A^{*}\beta ),\alpha )+g^{*}(i_{Y}d\beta
,A^{*}\alpha) -g([g^{*}\alpha ,g^{*}A^{*}\beta ],Y)\\
&+\tilde{g}([\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta ],Y).\end{aligned}$$ In order to prove relation (\[aditional\]) we need to show $$\label{ad}
g^{*}(\tilde{\nabla}_{\tilde{g}^{*}\gamma}\alpha
-\nabla_{\tilde{g}^{*}\gamma}\alpha ,\beta) =
\tilde{g}^{*}(\tilde{\nabla}_{{g}^{*}\gamma}\alpha
-\nabla_{g^{*}\gamma}\alpha ,\beta)$$ for every $\alpha ,\beta ,\gamma\in{\mathcal E}^{1}(M)$. Let $g^{*}\gamma =X.$ Then $\tilde{g}^{*}\gamma =A(X)$ and relation (\[ad\]) becomes $$\begin{aligned}
-(d\beta )(AX,g^{*}\alpha )&+d(A^{-1*}\beta )(AX,g^{*}A^{*}\alpha
)-g([g^{*}\alpha ,g^{*}\beta ],AX)+\tilde{g}([\tilde{g}^{*}\alpha
,\tilde{g}^{*}A^{-1*}\beta ],AX)\\
&=-d(A^{*}\beta )(X,g^{*}\alpha )+d\beta (X,g^{*}A^{*}\alpha
)-g([g^{*}\alpha ,g^{*}A^{*}\beta ],X)\\
&+\tilde{g}([\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta ],X).\\\end{aligned}$$ Since $\tilde{g}^{*}A^{-1*}=g^{*}$ and $g^{*}A^{*}=\tilde{g}^{*}$ (see relation (\[usor\])), this is equivalent to $$\begin{aligned}
&\beta ([AX,Z]-A^{-1}[AX,AZ]-A[X,Z]+[X,AZ])\\
&-g([Z,g^{*}\beta ],AX)+\tilde{g}([AZ,g^{*}\beta ],AX)\\
&+ g([Z,\tilde{g}^{*}\beta ],X)-\tilde{g}([AZ,\tilde{g}^{*}\beta ],X)\\
&=0,\end{aligned}$$ where $Z:=g^{*}\alpha .$ Let $Y:=g^{*}\beta .$ The almost compatibility property of $g$ and $\tilde{g}$ implies that the first row of the above expression is zero and then the above expression reduces to $$\label{fin}
-g([Z,Y ],AX)+\tilde{g}([AZ,Y],AX)+g([Z,AY],X)
-\tilde{g}([AZ,AY],X)=0.$$ Using $\tilde{g}(X,AY)=\tilde{g}(Y,AX)=g(X,Y)$, relation (\[fin\]) becomes $\tilde{g}(N_{A}(Y,Z),X)=0$ which is obviously true since $N_{A}=0.$
Compatible metrics
==================
\[criteriu\] Suppose the metrics $g$ and $\tilde{g}$ are almost compatible. The following statements are equivalent:
1\. The metrics $g$ and $\tilde{g}$ are compatible.
2\. For every $\alpha ,\beta\in{\mathcal E}^{1}(M)$ and $X,Y\in{\mathcal X}(M)$, the relation $$\label{conditiecomp2}
{g}^{*}(\tilde{\nabla}_{Y}\alpha -\nabla_{Y}\alpha ,
\tilde{\nabla}_{X}\beta -\nabla_{X}\beta )=
{g}^{*}(\tilde{\nabla}_{X}\alpha -\nabla_{X}\alpha ,
\tilde{\nabla}_{Y}\beta -\nabla_{Y}\beta )$$ holds.
3\. For every $\alpha ,\beta\in{\mathcal E}^{1}(M)$ and $X,Y\in{\mathcal X}(M)$, the relation $$\label{conditiecomp}
\tilde{g}^{*}(\tilde{\nabla}_{Y}\alpha -\nabla_{Y}\alpha ,
\tilde{\nabla}_{X}\beta -\nabla_{X}\beta )=
\tilde{g}^{*}(\tilde{\nabla}_{X}\alpha -\nabla_{X}\alpha ,
\tilde{\nabla}_{Y}\beta -\nabla_{Y}\beta )$$ holds.
Note that if $h$ is a pseudo-Riemannian metric on $M$ with Levi-Civita connection $\nabla^{\prime}$, then its curvature $R^{\prime}$ can be written in the form $$\begin{aligned}
h^{*}(R^{\prime}_{h^{*}(\gamma ),X}\alpha ,\beta )&=(h^{*}\gamma
)\left( h^{*}(\nabla^{\prime}_{X}\alpha ,\beta)\right)
-X\left(h^{*}(\nabla^{\prime}_{h^{*}\gamma }\alpha
,\beta )\right)-h^{*}(\nabla^{\prime}_{[h^{*}\gamma ,X]}\alpha ,\beta )\\
&+h^{*}(\nabla^{\prime}_{h^{*}(\nabla^{\prime}_{X}\beta )}\alpha ,\gamma
)+d\alpha
(h^{*}\gamma ,h^{*}\nabla^{\prime}_{X}\beta )\\
&-h^{*}(\nabla^{\prime}_{h^{*}(\nabla^{\prime}_{X}\alpha )}\beta
,\gamma )-d\beta (h^{*}\gamma ,h^{*}\nabla^{\prime}_{X}\alpha ),\end{aligned}$$ where $\alpha ,\beta ,\gamma\in{\mathcal E}^{1}(M)$ and $X\in{\mathcal X}(M).$ We shall use this observation for $h:=g,\tilde{g},g_{\lambda}$. Identifying the coefficients of $\lambda$ in the compatibility condition $$g_{\lambda}^{*}(R^{\lambda}_{g_{\lambda}^{*}(\gamma ),X}\alpha
,\beta )= g^{*}(R_{g_{\lambda}^{*}(\gamma ),X}\alpha ,\beta)+
\lambda\tilde{g}^{*}(\tilde{R}_{g_{\lambda}^{*}(\gamma ),X}\alpha,
\beta )$$ and using relation (\[defalmost\]), we see that $g$ and $\tilde{g}$ are compatible if and only if the expression $$\begin{aligned}
E_{\alpha ,\beta ,\gamma ,X}&:=g^{*}(\nabla_{X}\alpha
,\nabla_{\tilde{g}^{*}\gamma }\beta )
+\tilde{g}^{*}(\tilde{\nabla}_{X}\alpha
,\tilde{\nabla}_{g^{*}\gamma}\beta )
-g^{*}(\nabla_{\tilde{g}^{*}\gamma}\alpha ,\nabla_{X}\beta )
-\tilde{g}^{*}(\tilde{\nabla}_{g^{*}\gamma}\alpha
,\tilde{\nabla}_{X}\beta )\\
&+\tilde{g}^{*}(\tilde{\nabla}_{X}\beta
,\nabla_{g^{*}\gamma}\alpha )
-\tilde{g}^{*}(\tilde{\nabla}_{X}\alpha
,{\nabla}_{g^{*}\gamma}\beta )
+g^{*}(\tilde{\nabla}_{\tilde{g}^{*}\gamma}\alpha ,\nabla_{X}\beta
)-g^{*}({\nabla}_{X}\alpha
,\tilde{\nabla}_{\tilde{g}^{*}\gamma}\beta )\\\end{aligned}$$ is zero, for every $\alpha ,\beta ,\gamma\in{\mathcal E}^{1}(M)$ and $X\in{\mathcal X}(M).$ Using Proposition \[au\], we notice that $$\begin{aligned}
E_{\alpha ,\beta ,\gamma ,X}&= g^{*}(\nabla_{X}\alpha
,\nabla_{\tilde{g}^{*}\gamma}\beta
-\tilde{\nabla}_{\tilde{g}^{*}\gamma}\beta
)+g^{*}(\tilde{\nabla}_{\tilde{g}^{*}\gamma}\alpha-
\nabla_{\tilde{g}^{*}\gamma}\alpha ,\nabla_{X}\beta )\\
&+\tilde{g}^{*}(\tilde{\nabla}_{X}\beta
,\nabla_{g^{*}\gamma}\alpha -\tilde{\nabla}_{g^{*}\gamma}\alpha )+
\tilde{g}^{*}(\tilde{\nabla}_{X}\alpha
,\tilde{\nabla}_{g^{*}\gamma}\beta -\nabla_{g^{*}\gamma}\beta )\\
&=\tilde{g}^{*}(\tilde{\nabla}_{g^{*}\gamma}\alpha
-\nabla_{g^{*}\gamma}\alpha ,\nabla_{X}\beta
-\tilde{\nabla}_{X}\beta )\\
&+\tilde{g}^{*}(\tilde{\nabla}_{g^{*}\gamma}\beta
-\nabla_{g^{*}\gamma}\beta ,\tilde{\nabla}_{X}\alpha
-\nabla_{X}\alpha )\\\end{aligned}$$ and also $$\begin{aligned}
E_{\alpha ,\beta ,\gamma ,X}&=
g^{*}(\tilde{\nabla}_{\tilde{g}^{*}\gamma}\beta
-\nabla_{\tilde{g}^{*}\gamma}\beta ,\tilde{\nabla}_{X}\alpha
-\nabla_{X}\alpha )\\
&+g^{*}(\tilde{\nabla}_{\tilde{g}^{*}\gamma}\alpha
-\nabla_{\tilde{g}^{*}\gamma}\alpha ,{\nabla}_{X}\beta
-\tilde{\nabla}_{X}\beta ).\\\end{aligned}$$ The Theorem is proved.
Multiplication on $T^{*}M$
==========================
In this Section we show that the (almost) compatibility condition can also be formulated in terms of a multiplication “$\circ$” on $T^{*}M.$ This multiplication has been used and studied in [@dubrovin], when the metrics are flat. We here extend this study to the more general case of compatible metrics, not necessarily flat.\
By a multiplication “$\circ$” on a vector bundle $V$ we mean a bundle map $$\circ :V\times V\rightarrow V.$$ The idea of defining a multiplication on the tangent bundle dates back to Vaisman [@V]. Here a multiplication on the cotangent bundle is required [@dubrovin].
\[multipl\] An arbitrary pair of metrics $(g,\tilde{g})$ on $M$ defines a multiplication $$\label{multiplication}
\alpha\circ\beta :={\nabla}_{g^{*}\alpha}(\beta
)-\tilde{\nabla}_{g^{*}\alpha}(\beta )$$ on $T^{*}M.$
Note that, in general, the multiplications determined by $(g,\tilde{g})$ and $(\tilde{g},g)$ do not coincide. The next Proposition is a rewriting of the relations (2.5) and (2.6) of [@dubrovin].
\[interpretare\] For every $\alpha ,\beta ,\gamma\in{\mathcal E}^{1}(M)$, the following relation holds: $$\label{interpret1}
g^{*}(\alpha\circ\beta ,\gamma )=g^{*}(\alpha ,\gamma\circ\beta )
.$$ If $g$ and $\tilde{g}$ are almost compatible, then also $$\label{interpret2}
\tilde{g}^{*}(\alpha\circ\beta ,\gamma )=\tilde{g}^{*}(\alpha
,\gamma\circ\beta ) .$$
Relation (\[interpret1\]) is a consequence of the torsion free property of the connections $\nabla$ and $\tilde{\nabla}$: $$\begin{aligned}
g^{*}(\alpha\circ\beta ,\gamma )-g^{*}(\alpha ,\gamma\circ\beta
)&= g^{*}({\nabla}_{g^{*}\alpha }\beta
-\tilde{\nabla}_{g^{*}\alpha }\beta ,\gamma )-
g^{*}({\nabla}_{g^{*}\gamma }\beta
-\tilde{\nabla}_{g^{*}\gamma}\beta ,\alpha )\\
&={\nabla}_{g^{*}\alpha}(\beta )\left( g^{*}\gamma\right)
-\tilde{\nabla}_{g^{*}\alpha}(\beta )\left( g^{*}\gamma\right)
-{\nabla}_{g^{*}\gamma }(\beta )\left( g^{*}\alpha\right)\\
&+\tilde{\nabla}_{g^{*}\gamma}(\beta )\left(g^{*}\alpha\right)\\
&=d\beta \left( g^{*}\alpha ,g^{*}\gamma\right) +d\beta
\left( g^{*}\gamma ,g^{*}\alpha\right)\\
&=0.\\\end{aligned}$$ Suppose now that $g$ and $\tilde{g}$ are almost compatible. Relation (\[aditional\]) of Proposition \[au\] can be written as $$\tilde{g}^{*}(\alpha\circ\beta ,\gamma
)=g^{*}(\nabla_{\tilde{g}^{*}\alpha }\beta
-\tilde{\nabla}_{\tilde{g}^{*}\alpha}\beta ,\gamma ).$$ It follows that $$\begin{aligned}
\tilde{g}^{*}(\alpha\circ\beta ,\gamma )-\tilde{g}^{*}(\alpha
,\gamma\circ\beta )&={\nabla}_{\tilde{g}^{*}\alpha}(\beta ) \left(
g^{*}\gamma\right) -\tilde{\nabla}_{\tilde{g}^{*}\alpha}(\beta )
\left(
g^{*}\gamma\right) \\
&-{\nabla}_{{g}^{*}\gamma}(\beta ) \left(\tilde{
g}^{*}\alpha\right) +\tilde{\nabla}_{{g}^{*}(\gamma )}(\beta )
\left( \tilde{g}^{*}\alpha\right)\\
&=d\beta (\tilde{g}^{*}\alpha ,g^{*}\gamma )+d\beta (g^{*}\gamma
,\tilde{g}^{*}\alpha ).\\
&=0.\end{aligned}$$
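Relation (\[interpret1\]) can also be verified on a concrete pair of metrics (our illustrative choice, not from the text): take $g$ the flat Euclidean metric on $\mathbb{R}^2$ and $\tilde{g}=e^{2x}g$. Since the difference of two connections is tensorial, it suffices to test the identity on 1-forms with constant components:

```python
import sympy as sp

# g = Euclidean metric, gt = exp(2x) * Euclidean on R^2 (our choice).
x, y = sp.symbols('x y')
coords = [x, y]
g_lo = sp.eye(2)
gt_lo = sp.exp(2 * x) * sp.eye(2)

def christoffel(G):
    """Gamma^k_ij of a covariant metric matrix G."""
    Gi = G.inv()
    return [[[sp.simplify(sum(Gi[k, l] * (sp.diff(G[j, l], coords[i])
                                          + sp.diff(G[i, l], coords[j])
                                          - sp.diff(G[i, j], coords[l]))
                              for l in range(2)) / 2)
              for j in range(2)] for i in range(2)] for k in range(2)]

Gam, Gamt = christoffel(g_lo), christoffel(gt_lo)

def circ(a, b):
    """Components of a o b = nabla_{g* a} b - nabla~_{g* a} b, for 1-forms
    with constant components (the difference of connections is tensorial)."""
    Xv = g_lo.inv() * sp.Matrix(a)          # the vector field g* a
    return [sum(Xv[i] * (Gamt[m][i][k] - Gam[m][i][k]) * b[m]
                for i in range(2) for m in range(2)) for k in range(2)]

a1, a2, b1, b2, c1, c2 = sp.symbols('a1 a2 b1 b2 c1 c2')
al, be, ga = [a1, a2], [b1, b2], [c1, c2]
gi = g_lo.inv()

# g*(alpha o beta, gamma) = g*(alpha, gamma o beta): relation (interpret1).
lhs = sum(gi[k, l] * circ(al, be)[k] * ga[l] for k in range(2) for l in range(2))
rhs = sum(gi[k, l] * al[k] * circ(ga, be)[l] for k in range(2) for l in range(2))
assert sp.simplify(lhs - rhs) == 0
```

As the proof above indicates, the identity here boils down to the symmetry of the Christoffel symbols in their lower indices, i.e. to the connections being torsion-free.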
The following Proposition is a reformulation of Theorem \[criteriu\] and generalizes equation (2.7) of [@dubrovin].
Suppose that the metrics $g$ and $\tilde{g}$ are almost compatible. Then they are compatible if and only if the relation $$\label{rightsym}
(\beta\circ\gamma )\circ\alpha =(\beta\circ\alpha )\circ\gamma$$ holds, for every $\alpha ,\beta ,\gamma\in{\mathcal E}^{1}(M)$.
Relation (\[conditiecomp\]) is equivalent to $$\tilde{g}^{*}(\lambda\circ\alpha, \beta\circ\gamma
)=\tilde{g}^{*}(\lambda\circ\gamma ,\beta\circ\alpha ),$$ or, using (\[interpret2\]), with $$\tilde{g}^{*}(\lambda ,(\beta\circ\gamma )\circ\alpha )
=\tilde{g}^{*}(\lambda ,(\beta\circ\alpha )\circ\gamma ).$$ Since $\tilde{g}$ is non-degenerate, the conclusion follows.
The following Lemma relates, in a nice way, the curvatures of $g$ and $\tilde{g}$ with the “$\circ$”-multiplication. We state it for completeness and we leave its proof to the reader.
Let $(g,\tilde{g})$ be an arbitrary pair of metrics on $M$, with corresponding multiplication $\circ$ on $T^{*}M.$ Then $$\begin{aligned}
R_{g^{*}\alpha ,g^{*}\beta}(\delta )&=\tilde{R}_{g^{*}\alpha
,g^{*}\beta}(\delta )+\tilde{\nabla}_{g^{*}\alpha}(\circ )(\beta
,\delta )-\tilde{\nabla}_{g^{*}\beta}(\circ )(\alpha ,\delta )\\
&+\alpha\circ (\beta\circ\delta )-(\alpha\circ\beta)\circ\delta
-\beta\circ (\alpha\circ\delta )+(\beta\circ\alpha )\circ\delta\end{aligned}$$ for every $\alpha ,\beta ,\delta\in{\mathcal E}^{1}(M).$
Note that if $R={\tilde R}=0$ then one may integrate the equation $$\tilde{\nabla}_{g^{*}\alpha}(\circ )(\beta ,\delta
)-\tilde{\nabla}_{g^{*}\beta}(\circ )(\alpha ,\delta )=0$$ in terms of potential functions. The remaining part of the equation, namely the defining condition for a Vinberg (or pre-Lie) algebra, gives a differential equation for the potential functions. The integrability of this differential equation was established by Ferapontov [@ferapontov2] and Mokhov [@mok].
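Schematically, and purely as an illustration in coordinates (the component notation $c^{ij}_{k}$ and the potentials $f^{j}$ are ours, not defined above): in flat coordinates for $\tilde{g}$ the first equation is a closedness condition, which is what makes the local potentials exist.

```latex
% Flat coordinates (x^1,...,x^n) for \tilde{g}, so \tilde{\nabla}_i = \partial_i.
% Write the multiplication in components: dx^i \circ dx^j = c^{ij}_{k}\, dx^k.
% The first equation above then reads
$$g^{ap}\,\partial_{p}c^{bj}_{k}-g^{bp}\,\partial_{p}c^{aj}_{k}=0,$$
% a closedness condition in the indices (a,b), which can be integrated
% locally in terms of potential functions f^{j}; the remaining pre-Lie
% (Vinberg) identity then becomes a quadratic first-order PDE system
% for these potentials, whose integrability is the cited result.
```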
Quasi-homogeneous pencil of metrics and weak $\F$-manifolds
===========================================================
We now turn our attention to the case of (weak) quasi-homogeneous pencils and the parallel notion of (weak) $\F$-manifolds. The aim of this Section is to prove the last two vertical 1-1 correspondences in Table 1 of the introduction.
(see [@dubrovin]) A pair $(g,\tilde{g})$ of compatible metrics on $M$ is called a (regular) quasi-homogeneous pencil of degree $d$ if the following two conditions are satisfied:
1. There is a smooth function $f$ on $M$ such that the vector fields $E:=\mathrm{grad}_{g}(f)$ and $e:=\mathrm{grad}_{\tilde{g}}(f)$ have the following properties:
$$[e,E]=e;\quad L_{E}(g^{*})=(d-1)g^{*};\quad
L_{e}(g^{*})=\tilde{g}^{*};\quad L_{e}(\tilde{g}^{*})=0.$$
2. The operator $T(u):=\frac{d-1}{2}u +u (\tilde{\nabla}E)$ is an automorphism of $T^{*}M.$
**Remark:** The following facts hold:
1. Since $L_{e}(\tilde{g})=0$ and $e=\mathrm{grad}_{\tilde{g}}(f)$, $e$ is $\tilde{\nabla}$-parallel.
2. The conditions $L_{E}({g})=(1-d){g}$ and $E=\mathrm{grad}_{g}(f)$ easily imply that $$\label{cov}
\nabla_{X}(E)=\frac{1-d}{2}X,$$ for every $X\in{\mathcal X}(M)$. Also, $E$ is a conformal Killing vector field of the metric $\tilde{g}$: $$\begin{aligned}
L_{E}(\tilde{g}^{*})&=L_{E}L_{e}(g^{*})=L_{e}L_{E}(g^{*})+L_{[E,e]}(g^{*})\\
&=(d-1)L_{e}(g^{*})-L_{e}(g^{*})=(d-2)\tilde{g}^{*}.\end{aligned}$$
A consequence of relation (\[cov\]) is the following Proposition, which justifies Definition \[wp\].
\[ajutatoare\] For every $u\in{\mathcal E}^{1}(M)$, $T(u )=df\circ u .$
This is just a simple calculation: for every $X\in{\mathcal
X}(M)$, $$\begin{aligned}
(df\circ u )(X)&=\nabla_{g^{*}(df)}(u
)(X)-\tilde{\nabla}_{g^{*}(df)}(u )(X)\\
&=\nabla_{E}(u )(X)-\tilde{\nabla}_{E}(u )(X)\\
&=\nabla_{E}(u )(X)-E(u (X)) +u
(\tilde{\nabla}_{E}X)\\
&=-u (\nabla_{X}E) +u
(\tilde{\nabla}_{X}E)\\
&=\frac{d-1}{2}u (X)+u(\tilde{\nabla}_{X}E)\end{aligned}$$ where in the last equality we have used relation (\[cov\]).
\[wp\] A pair of compatible metrics $(g,\tilde{g})$ is a (regular) weak quasi-homogeneous pencil of bi-degree $(d,D)$ if the following two conditions are satisfied:
1. There is a vector field $E$ on $M$ with the properties: $L_{E}(g)=(1-d)g$, $L_{E}(\tilde{g})=D\tilde{g}$.
2. The operator $T(u):=\frac{d-1}{2}u+u(\tilde{\nabla}E)$ is an automorphism of $T^{*}M$ and $T(u)=g(E)\circ u$, for every $u\in{\mathcal
E}^{1}(M).$
Note that any quasi-homogeneous pencil of degree $d$ is also weak quasi-homogeneous of bi-degree $(d,2-d).$
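A quick verification of this, assembling the Remark and Proposition \[ajutatoare\] above:

```latex
% Condition 1 of Definition [wp]: L_E(g^*) = (d-1)g^* gives L_E(g) = (1-d)g,
% and by the Remark,
$$L_{E}(\tilde{g}^{*})=(d-2)\tilde{g}^{*}\quad\Longleftrightarrow\quad
L_{E}(\tilde{g})=(2-d)\tilde{g},$$
% so D = 2-d.
% Condition 2: T is an automorphism by assumption, and since g^*(df) = E
% means df = g(E), Proposition [ajutatoare] gives
$$T(u)=df\circ u=g(E)\circ u.$$
```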
We now introduce the parallel notions of (weak) $\F$-manifolds:
\[F\] A manifold $M$ with a multiplication $\lq\lq~\cdot$" on its tangent bundle, a vector field $E$ and a metric $\tilde{g}$ is called an $\F$-manifold if the following conditions are satisfied:
1. the multiplication $\lq\lq~\cdot$" is associative, commutative and has unity $e$;
2. the vector field $E$ (called the Euler vector field) admits an inverse $E^{-1}$ with respect to the multiplication $\lq\lq~\cdot$", satisfies $L_{E}(\cdot )=k\cdot$, $L_{E}(\tilde{g} )=D\tilde{g}$ and the operator $T(u):=\frac{D+k}{2}u-\tilde{g}(\tilde{\nabla}_{\tilde{g}^{*}(u)}E)$ is an automorphism of $T^{*}M.$
3. the metric $\tilde{g}$ is $\lq\lq~\cdot$"-invariant: $\tilde{g}(X\cdot Y,Z)=\tilde{g}(X,Y\cdot Z)$, for every $X,Y,Z\in{\mathcal X}(M)$.
4. the $(4,0)$-tensor $\tilde{\nabla}(\cdot )$ of $M$ defined by $$\tilde{\nabla}(\cdot
)(X,Y,Z,V):=\tilde{g}\left(\tilde{\nabla}_{X}(\cdot
)(Y,Z),V\right)$$ is symmetric in all its arguments.
**Remark:** By a result of Hertling [@hert], all $\F$-manifolds (originally defined in [@HM]) are $F$-manifolds, i.e. the multiplication $\lq\lq~\cdot$" satisfies $$L_{X\cdot Y}(\cdot )=Y\cdot L_{X}(\cdot )+X\cdot L_{Y}(\cdot )$$ for every $X,Y\in{\mathcal X}(M)$, and also the $1$-form $\tilde{g}(e)$ is closed. The different typeface is used to denote the additional structures not present in the definition of an $F$-manifold.
\[wF\] A weak $\F$-manifold satisfies all the conditions of an $\F$-manifold except $(4)\,,$ which is replaced by the weaker condition:
$$\label{s}
\tilde{\nabla}(\cdot )(X,Y,Z,E)=\tilde{\nabla}(\cdot)(E,X,Y,Z)$$
for every $X,Y,Z\in{\mathcal X}(M),$ where $E$ is the Euler vector field.
The tensor ${\tilde{\nabla}}(\cdot)$ is automatically symmetric in its last three arguments; this follows from the invariance property of the metric.
From weak $\F$-manifolds to (weak) quasi-homogeneous pencils
------------------------------------------------------------
We shall now prove the following theorem.
\[main1\] Let $(M,\cdot ,\tilde{g},E)$ be a weak $\F$-manifold with $L_{E}(\tilde{g})=D\tilde{g}$, $L_{E}(\cdot )=k\cdot$ and identity $e$. Define the metric $g$ by $g^{*}\tilde{g}=E\cdot .$ The following facts hold:
1\. The pair $(g,\tilde{g})$ is a weak quasi-homogeneous pencil of bi-degree $(1+k-D,D).$
2\. If $e$ is $\tilde{\nabla}$-parallel and $k=1$, then in a neighborhood of any point of $M$ the pair $(g,\tilde{g})$ is a quasi-homogeneous pencil of degree $2-D.$ If moreover $M$ is simply connected the pair $(g,\tilde{g})$ is a global quasi-homogeneous pencil.
We divide the proof into several steps.
\[Fac\] Let $(M,\cdot ,\tilde{g},E)$ be a weak $\F$-manifold. Then $N_{E\cdot}=0.$
The torsion free property of $\tilde{\nabla}$ implies that $$\begin{aligned}
N_{E\cdot}(X,Y)&=-\tilde{\nabla}_{E\cdot X}(E)\cdot
Y-\tilde{\nabla}_{E\cdot X}(\cdot )(E,Y)+\tilde{\nabla}_{E\cdot
Y}(E)\cdot X\\
&+\tilde{\nabla}_{E\cdot Y}(\cdot )(E,X)-E\cdot
X\cdot\tilde{\nabla}_{Y}(E)-E\cdot\tilde{\nabla}_{Y}(\cdot )(E,X)\\
&+E\cdot
Y\cdot\tilde{\nabla}_{X}(E)+E\cdot\tilde{\nabla}_{X}(\cdot )(E,Y),\end{aligned}$$ for every $X,Y\in{\mathcal X}(M).$ From relation (\[s\]) and the commutativity and associativity of $\lq\lq~\cdot$", we see that $$\begin{aligned}
N_{E\cdot}(X,Y)&=-\tilde{\nabla}_{E\cdot X}(E)\cdot
Y-\tilde{\nabla}_{E}(\cdot )(E\cdot X,Y)+\tilde{\nabla}_{E\cdot Y}(E)\cdot X\\
&+\tilde{\nabla}_{E}(\cdot )(X,E\cdot Y)-E\cdot
X\cdot\tilde{\nabla}_{Y}(E)+E\cdot Y\cdot\tilde{\nabla}_{X}(E)\\
&=L_{E}(E\cdot X)\cdot Y+E\cdot X\cdot L_{E}(Y)-X\cdot
L_{E}(E\cdot Y)-E\cdot Y\cdot L_{E}(X)\\
&=0,\end{aligned}$$ where in the last equality we have used $L_{E}(\cdot )=k\cdot .$
\[Fman\] Let $\lq\lq~\cdot$" be an associative, commutative multiplication with identity on $TM$, let $\tilde{g}$ be a $\lq\lq~\cdot$"-invariant metric on $M$ and let $E$ be a vector field on $M$ which satisfies $L_{E}(\cdot )=k\cdot$ and $L_{E}(\tilde{g})=D\tilde{g}$, for $D\,,k$ constant. Suppose that $E\cdot $ is an automorphism of $TM$. Define a new metric $g$ on $M$ by $g^{*}\tilde{g}=E\cdot .$ If $E^{\flat}:=\tilde{g}(E)$, then $$\begin{aligned}
2g^{*}({\nabla}_{\tilde{g}^{*}\gamma}\alpha
-\tilde{\nabla}_{\tilde{g}^{*}\gamma}\alpha ,\beta )&=
\tilde{g}\left( (D+k)\alpha -2\tilde{\nabla}_{\tilde{g}^{*}\alpha
}(E^{\flat}),\gamma\cdot\beta\right) -\tilde{\nabla}(\cdot
)(E^{\flat},\alpha ,\gamma ,\beta )\\
&+\tilde{\nabla} (\cdot )(\gamma ,\beta ,E^{\flat},\alpha
)-\tilde{\nabla} (\cdot
)(\alpha ,\beta ,E^{\flat},\gamma )\\
&+\tilde{\nabla}(\cdot )(\beta ,E^{\flat},\alpha ,\gamma )
+\gamma\left( N_{E\cdot}(\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta
)\cdot E^{-1}\right)\end{aligned}$$ for every $\alpha ,\beta ,\gamma\in{\mathcal E}^{1}(M)$. In particular, $$\label{ec}
{\nabla}_{\tilde{g}^{*}\gamma}\alpha
-\tilde{\nabla}_{\tilde{g}^{*}\gamma}\alpha = \frac{1}{2}\left(
(D+k)\alpha -2\tilde{\nabla}_{\tilde{g}^{*}(\alpha
)}(E^{\flat})\right)\cdot (E^{\flat})^{-1}\cdot\gamma$$ if and only if relation (\[s\]) is satisfied.
The multiplication $\lq\lq~\cdot$" on $TM$ together with the metric $\tilde{g}$ induce a multiplication on $T^{*}M.$ For $Y\in{\mathcal X}(M)$, let $Y^{\flat}:=\tilde{g}(Y).$ From the proof of Proposition \[au\] (with $g$ and $\tilde{g}$ interchanged), we obtain $$\begin{aligned}
2{g}^{*}({\nabla}_{Y}\alpha -\tilde{\nabla}_{Y}\alpha ,\beta )&=
d(\beta\cdot E^{\flat})(Y,\tilde{g}^{*}\alpha )-d\beta
(Y,\tilde{g}^{*}(E^{\flat}\cdot\alpha ))\\
&+\tilde{g}(E\cdot [\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta ]
-[E\cdot\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta ],Y)\\
&+\tilde{g}\left( N_{E\cdot}(\tilde{g}^{*}\alpha
,\tilde{g}^{*}\beta ),E^{-1}\cdot Y\right) ,\end{aligned}$$ since in our case $A^{*}(\alpha )=\alpha\cdot E^{\flat}$. Let $$E_{1}:=d(\beta\cdot E^{\flat})(Y,\tilde{g}^{*}\alpha )-d\beta
\left( Y,\tilde{g}^{*}(E^{\flat}\cdot\alpha )\right)$$ and $$E_{2}:=\tilde{g}\left( E\cdot [\tilde{g}^{*}\alpha
,\tilde{g}^{*}\beta ] -[E\cdot\tilde{g}^{*}\alpha
,\tilde{g}^{*}\beta ],Y\right) .$$ Then $$\begin{aligned}
E_{1}&=\tilde{\nabla}_{Y}(\beta\cdot
E^{\flat})(\tilde{g}^{*}\alpha
)-\tilde{\nabla}_{\tilde{g}^{*}\alpha}(\beta\cdot E^{\flat})(Y)
-\tilde{\nabla}_{Y}(\beta )\left(\tilde{g}^{*}(E^{\flat}\cdot
\alpha )\right)
+\tilde{\nabla}_{\tilde{g}^{*}(E^{\flat}\cdot\alpha )}(\beta
)(Y)\\
&=\tilde{g}^{*}\left(\beta
,\tilde{\nabla}_{Y}(E^{\flat})\cdot\alpha -
\tilde{\nabla}_{\tilde{g}^{*}\alpha}(E^{\flat})\cdot
Y^{\flat}\right) +\tilde{\nabla}_{\tilde{g}^{*} (\alpha )\cdot
E}(\beta )(Y)- \tilde{\nabla}_{\tilde{g}^{*}\alpha}(\beta )(Y\cdot
E)\\
&+\tilde{\nabla}(\cdot )(Y^{\flat},\beta ,E^{\flat},\alpha )-
\tilde{\nabla}(\cdot )(\alpha ,\beta ,E^{\flat},Y^{\flat}).\\\end{aligned}$$
On the other hand, since $\tilde{\nabla}$ is torsion free and $\tilde{g}$ is $\lq\lq~\cdot$"-invariant, $$\begin{aligned}
E_{2}&=\tilde{g}\left( [\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta
],E\cdot
Y\right) -\tilde{g}\left( [E\cdot\tilde{g}^{*}\alpha ,\tilde{g}^{*}\beta ],Y\right)\\
&=\tilde{\nabla}_{\tilde{g}^{*}\alpha}({\beta})(E\cdot
Y)-\tilde{\nabla}_{\tilde{g}^{*}\beta}(\alpha )(E\cdot Y)\\
&-\tilde{\nabla}_{E\cdot\tilde{g}^{*}\alpha}(\beta
)(Y)+\tilde{g}(\tilde{\nabla}_{\tilde{g}^{*}\beta}(E\cdot\tilde{g}^{*}\alpha ),Y)\\
&=\tilde{\nabla}_{\tilde{g}^{*}\alpha }(\beta )(E\cdot
Y)-\tilde{\nabla}_{E\cdot\tilde{g}^{*}\alpha}(\beta )(Y)\\
&+\left(\tilde{\nabla}_{\tilde{g}^{*}\beta}(E^{\flat})\cdot\alpha
\right) (Y)+\tilde{\nabla}(\cdot )(\beta
,E^{\flat},\alpha ,Y^{\flat})\\
&=\tilde{\nabla}_{\tilde{g}^{*}\alpha }(\beta )(E\cdot
Y)-\tilde{\nabla}_{E\cdot\tilde{g}^{*}\alpha}(\beta )(Y)\\
&+\tilde{\nabla}_{\tilde{g}^{*}\beta}(E^{\flat})\left( Y\cdot
\tilde{g}^{*}\alpha\right)+\tilde{\nabla} (\cdot )(\beta
,E^{\flat},\alpha
,Y^{\flat})\\
&=\tilde{\nabla}_{\tilde{g}^{*}\alpha}(\beta )(E\cdot
Y)-\tilde{\nabla}_{E\cdot\tilde{g}^{*}\alpha}(\beta )(Y)\\
&+d(E^{\flat})\left(\tilde{g}^{*}\beta ,Y\cdot \tilde{g}^{*}\alpha
\right)+\tilde{\nabla}_{\tilde{g}^{*}(\alpha )\cdot
Y}(E^{\flat})\left(\tilde{g}^{*}\beta \right)\\
&+\tilde{\nabla}(\cdot )(\beta ,E^{\flat},\alpha ,Y^{\flat}).\end{aligned}$$ We deduce that $$\begin{aligned}
E_{1}+E_{2}&=\tilde{g}^{*}\left(\beta
,\tilde{\nabla}_{Y}(E^{\flat})\cdot\alpha
-\tilde{\nabla}_{\tilde{g}^{*}\alpha}(E^{\flat})\cdot Y^{\flat}
-i_{\tilde{g}^{*}(\alpha )\cdot Y}d(E^{\flat})
+\tilde{\nabla}_{\tilde{g}^{*}(\alpha )\cdot
Y}(E^{\flat})\right)\\
&+\tilde{\nabla}(\cdot )(Y^{\flat},\beta ,E^{\flat},\alpha
)-\tilde{\nabla}(\cdot )(\alpha ,\beta ,E^{\flat},Y^{\flat})
+\tilde{\nabla}(\cdot )(\beta ,E^{\flat},\alpha ,Y^{\flat}).\end{aligned}$$
Let $\tilde{g}^{*}\alpha =X$ and $\tilde{g}^{*}\beta =Z.$ Using $L_{E}(\tilde{g})=D\tilde{g}$, $E_{1}+E_{2}$ becomes $$\begin{aligned}
\tilde{g}(\tilde{\nabla}_{Y}(E)\cdot
X,Z)&-\tilde{g}(\tilde{\nabla}_{X}E,Y\cdot
Z)-\tilde{g}(\tilde{\nabla}_{X\cdot Y}E,Z)
+\tilde{g}(\tilde{\nabla}_{Z}E,X\cdot
Y)+\tilde{g}(\tilde{\nabla}_{X\cdot Y}E,Z)\\
&+\tilde{\nabla}(\cdot )(Y,Z,E,X)-\tilde{\nabla}(\cdot
)(X,Z,E,Y)+\tilde{\nabla}(\cdot )(Z,E,X,Y)\\
=D\tilde{g}(X\cdot Y, Z)&+\tilde{g}(\tilde{\nabla}_{Y}(E)\cdot
X,Z)-\tilde{g}(\tilde{\nabla}_{X}(E)\cdot
Y,Z)-\tilde{g}(\tilde{\nabla}_{X\cdot Y}E,Z)\\
&+\tilde{\nabla}(\cdot )(Z,E,X,Y)+\tilde{\nabla}(\cdot
)(Y,Z,E,X)-\tilde{\nabla}(\cdot )(X,Z,E,Y).\\\end{aligned}$$
It follows that $$\begin{aligned}
2{g}^{*}\left({\nabla}_{Y}X^{\flat}
-\tilde{\nabla}_{Y}X^{\flat},Z^{\flat}\right)&=\tilde{g}\left(
(DY+\tilde{\nabla}_{Y}E)\cdot X-\tilde{\nabla}_{X}(E)\cdot Y
-\tilde{\nabla}_{X\cdot Y}E,Z\right)\\
&+\tilde{\nabla} (\cdot )(Y,Z,E,X)-\tilde{\nabla} (\cdot
)(X,Z,E,Y)+\tilde{\nabla} (\cdot
)(Z,E,X,Y)\\
&+\tilde{g}\left( N_{E\cdot}(X,Z),E^{-1}\cdot Y\right)\\
&=\tilde{g}\left( (DY+\tilde{\nabla}_{Y}E)\cdot
X+\tilde{\nabla}_{E}(X\cdot
Y)-\tilde{\nabla}_{E}(X)\cdot Y,Z\right)\\
&+\tilde{g}\left(-\tilde{\nabla}_{E}(Y)\cdot
X-\tilde{\nabla}_{X}(E)\cdot Y-
\tilde{\nabla}_{X\cdot Y}(E),Z\right)\\
&+\tilde{\nabla} (\cdot )(Y,Z,E,X)-\tilde{\nabla} (\cdot
)(X,Z,E,Y)+\tilde{\nabla} (\cdot )(Z,E,X,Y)\\
&-\tilde{\nabla}(\cdot )(E,X,Y,Z) +
\tilde{g}\left( N_{E\cdot}(X,Z),E^{-1}\cdot Y\right) ,\\\end{aligned}$$ where in the second equality we have added the (identically zero) expression $$\tilde{\nabla}_{E}(X\cdot Y)-\tilde{\nabla}_{E}(X)\cdot
Y-X\cdot\tilde{\nabla}_{E}(Y)-\tilde{\nabla}_{E}(\cdot )(X,Y).$$ Using now $L_{E}(\cdot )=k\cdot$ we obtain $$\begin{aligned}
2{g}^{*}\left({\nabla}_{Y}X^{\flat}
-\tilde{\nabla}_{Y}X^{\flat},Z^{\flat}\right)&=\tilde{g}\left(
(DY+\tilde{\nabla}_{Y}E)\cdot X+L_{E}(X)\cdot
Y+X\cdot L_{E}(Y)+kX\cdot Y,Z\right)\\
&-\tilde{g}\left(\tilde{\nabla}_{E}(X)\cdot
Y+\tilde{\nabla}_{E}(Y)\cdot
X+\tilde{\nabla}_{X}(E)\cdot Y,Z\right)\\
&+\tilde{\nabla} (\cdot )(Y,Z,E,X)-\tilde{\nabla} (\cdot
)(X,Z,E,Y)+\tilde{\nabla} (\cdot )(Z,E,X,Y)\\
&-\tilde{\nabla}(\cdot )(E,X,Y,Z) +\tilde{g}\left(
N_{E\cdot}(X,Z),E^{-1}\cdot Y\right)\\
&=\tilde{g}\left( (D+k)X\cdot Y-2\tilde{\nabla}_{X}(E)\cdot
Y,Z\right) +\tilde{\nabla}(\cdot )(Y,Z,E,X)\\
&-\tilde{\nabla}(\cdot )(X,Z,E,Y)+\tilde{\nabla}(\cdot
)(Z,E,X,Y)-\tilde{\nabla}(\cdot )(E,X,Y,Z)\\
&+\tilde{g}\left( N_{E\cdot}(X,Z),E^{-1}\cdot Y\right) .\\\end{aligned}$$ The first statement of the Proposition is proved. We easily notice that $$\begin{aligned}
-\tilde{\nabla}(\cdot )(E,X,Y,Z)&+\tilde{\nabla}(\cdot )
(Y,Z,E,X)-\tilde{\nabla}(\cdot )(X,Z,E,Y)+\tilde{\nabla}(\cdot
)(Z,E,X,Y)\\
+&\tilde{g}\left( N_{E\cdot}(X,Z),E^{-1}\cdot Y\right) =0\end{aligned}$$ for every $X,Y,Z\in{\mathcal X}(M)$ is equivalent to relation (\[s\]) (exchange $X$ and $Z$ in the above equality, use the symmetry of $\tilde{\nabla}(\cdot )$ in the last three arguments and Lemma \[Fac\]).
*Proof of the theorem:* Lemma \[Fac\] implies that the metrics $g$ and $\tilde{g}$ are almost compatible. The compatibility condition (\[conditiecomp\]) is trivially satisfied using relation (\[ec\]) and the $\lq\lq~\cdot$"-invariance of $\tilde{g}.$ Since $g^{*}\tilde{g}=E\cdot$ and $E$ is the Euler vector field, $L_{E}(g^{*})\tilde{g}+Dg^{*}\tilde{g}=kg^{*}\tilde{g}$, or $L_{E}(g^{*})=(k-D)g^{*}.$ Relation (\[ec\]) can be written in the form $g\tilde{g}^{*}(\gamma )\circ\alpha =T(\alpha )\cdot
(E^{\flat})^{-1}\cdot\gamma$, where $T$ is precisely the operator associated to the pair $(g,\tilde{g})$ as in Definition \[wp\]. For $\gamma :=\tilde{g}(E)$ we obtain $g(E)\circ \alpha =T(\alpha
).$ It follows that $(g,\tilde{g})$ is a weak quasi-homogeneous pencil of bi-degree $(1+k-D,D).$ The first statement is proved.
Suppose now that $k=1$ and $\tilde{\nabla}(e)=0.$ In particular, $L_{e}(\tilde{g})=0$. Since $\tilde{\nabla}$ is torsion free, $d\tilde{g}(e)=0$ and at least locally there is a smooth function $f$ such that $\tilde{g}(e)=df.$ Since $g^{*}\tilde{g}=E\cdot$, $E=g^{*}\tilde{g}(e)=\mathrm{grad}_{g}(f).$ Also, $[e,E]=e$ because $k=1$. In order to prove that $(g,\tilde{g})$ is a quasi-homogeneous pencil (of degree $2-D$), we still need to show that $L_{e}(g^{*})=\tilde{g}^{*}.$ For this, consider the equality (which follows from $g^{*}\tilde{g}=E\cdot$) $$g^{*}(\alpha ,\beta )=\tilde{g}^{*}(\alpha ,\beta\cdot E^{\flat})$$ and take its Lie derivative in the direction of $e$. From $[e,E]=e$ and $L_{e}(\tilde{g})=0$ we get $$L_{e}(g^{*})(\alpha ,\beta )=\tilde{g}^{*}\left(\alpha
,L_{e}(\cdot )(\beta ,E^{\flat})\right) +\tilde{g}^{*}(\alpha
,\beta ).$$ On the other hand, from $\tilde{\nabla}(e)=0$ and relation (\[s\]), we easily see that $$\begin{aligned}
L_{e}(\cdot )(X,E)&=-[E\cdot X,e]+[X,e]\cdot E-X\\
&=\tilde{\nabla}_{e}(E\cdot X)+[X,e]\cdot E-X\\
&=\tilde{\nabla}_{e}(E)\cdot X+\tilde{\nabla}_{E}(\cdot
)(e,X)+E\cdot\tilde{\nabla}_{e}(X)\\
&+[X,e]\cdot E-X=0,\\\end{aligned}$$ which implies that $L_{e}(\cdot )(\beta
,E^{\flat})=\tilde{g}\left( L_{e}(\cdot )(\tilde{g}^{*}\beta
,E)\right) =0$, because $L_{e}(\tilde{g})=0$. It follows that $L_{e}(g^{*})=\tilde{g}^{*}$ and the Theorem is proved.
From quasi-homogeneous pencils of metrics to weak $\F$-manifolds
---------------------------------------------------------------
\[cvasihom\] Let $(g,\tilde{g})$ be a weak quasi-homogeneous pencil as in Definition \[wp\]. Define a new multiplication $u\cdot v:=u\circ
T^{-1}(v)$ on $T^{*}M$ and denote also by $\lq\lq~\cdot$" the induced multiplication on $TM$, using the metric $\tilde{g}.$ The following statements hold:
1. $(M,\cdot ,\tilde{g},E)$ is a weak $\F$-manifold with Euler vector field $E$ and identity $e:=\tilde{g}^{*}g(E).$ Moreover, $g^{*}\tilde{g}=E\cdot .$
2. If $(g,\tilde{g})$ is a quasi-homogeneous pencil then $e$ is $\tilde{\nabla}$-flat and $L_{E}(\cdot )=\cdot$ on $TM.$
We divide the proof into several steps. We begin with the following Lemma:
\[usoara\] Let $(g,\tilde{g})$ be a weak quasi-homogeneous pencil. The following facts hold:
1. The multiplication $\lq\lq~\cdot$" on $T^{*}M$ is associative, commutative and has unity $g(E).$
2. For every $\alpha ,\beta\in{\mathcal E}^{1}(M)$, $g^{*}(\alpha
,\beta )=(\alpha\cdot\beta )(E).$
We prove the first claim: the commutativity $T(u)\cdot
T(v)=T(v)\cdot T(u)$ is equivalent to $T(u)\circ v=T(v)\circ u$, or to $(g(E)\circ u)\circ v=(g(E)\circ v)\circ u$, which follows from relation (\[rightsym\]), the metrics $g$ and $\tilde{g}$ being compatible. Using the commutativity of $\lq\lq~\cdot$", the associativity $(u\cdot v)\cdot w=u\cdot (v\cdot w)$ is equivalent to $(v\cdot u)\cdot w=(v\cdot w)\cdot u$, which is again a consequence of relation (\[rightsym\]) and the definition of $\lq\lq~\cdot$". For $v:=T^{-1}(u)$, the relation $T(v)=g(E)\circ v$ becomes $g(E)\cdot u=u$, for every $u\in
T^{*}M.$ This means precisely that the multiplication $\lq\lq~\cdot$" on $T^{*}M$ has unity $g(E).$
The second claim is an application of the definitions and of relation (\[interpret1\]): $$\begin{aligned}
g^{*}(T(\beta ),\alpha)-(\alpha\circ\beta )(E)
&=g^{*}\left( g(E)\circ\beta ,\alpha\right)-(\alpha\circ\beta )(E)\\
&=g^{*}\left( g(E),\alpha\circ\beta \right)-(\alpha\circ\beta )(E)\\
&=0.\end{aligned}$$
*Proof of the Theorem:* The multiplication $\lq\lq~\cdot$" being associative and commutative on $T^{*}M$, the induced multiplication $\lq\lq~\cdot$" on $TM$ has the same properties and has identity $e:=\tilde{g}^{*}g(E).$ Since $(g,\tilde{g})$ is a weak quasi-homogeneous pencil, $g$ and $\tilde{g}$ are in particular almost compatible, and relation (\[interpret2\]) with $\beta$ replaced by $T^{-1}(\beta )$ implies the $\lq\lq~\cdot$"-invariance of $\tilde{g}.$ An immediate consequence of the $\lq\lq~\cdot$"-invariance of $\tilde{g}$ is the relation $g^{*}\tilde{g}=E\cdot$: indeed, from the second part of Lemma \[usoara\] we notice that $g^{*}(\alpha
,\beta )=\tilde{g}^{*}(\alpha\cdot\beta
,E^{\flat})=\tilde{g}^{*}(\alpha\cdot E^{\flat},\beta )$, which implies that $g^{*}\alpha =\tilde{g}^{*}(\alpha\cdot E^{\flat})$, or, for $\alpha :=\tilde{g}(X)$, $g^{*}\tilde{g}(X)=E\cdot X.$ In order to prove that $E$ satisfies the conditions of an Euler vector field, we notice first that $L_{E}(\tilde{\nabla})=L_{E}(\nabla )=0$ (since $L_{E}(g)=(1-d)g$, $L_{E}(\tilde{g})=D\tilde{g}$ and $d\,,D$ are constant), which implies in particular that $L_{E}(T)=0$. This, together with $L_{E}(g^{*})=(d-1)g^{*}$, implies that $L_{E}(\cdot )=(d-1)\cdot$ on $T^{*}M$ or, using $L_{E}(\tilde{g})=D\tilde{g}$, that $L_{E}(\cdot )=(d+D-1)\cdot$ on $TM.$ We can now apply Proposition \[Fman\] to prove the weak $\F$-manifold condition (\[s\]): the very definitions of $\lq\lq~\cdot$" and $\lq\lq~\circ$" and relation (\[cov\]) imply that relation (\[ec\]) is satisfied. Thus $(M,\cdot ,\tilde{g},E)$ is a weak $\F$-manifold and the first claim of the Theorem is proved.
The second claim of the Theorem is trivial: if $D=2-d$, then $L_{E}(\cdot )=\cdot$ on $TM$ and moreover, the very definition of quasi-homogeneous pencils implies that $e$ is $\tilde{\nabla}$-flat.
Curvature properties of weak $\F$-manifolds
===========================================
In this Section we will prove the first two vertical 1-1 correspondences in Table 1 of the introduction. In particular, we show how Dubrovin’s correspondence between flat quasi-homogeneous pencils and Frobenius manifolds fits into our theory.
We begin with the following simple Lemma on conformal Killing vector fields on pseudo-Riemannian manifolds.
\[confkilling\] Let $(M,\tilde{g})$ be a pseudo-Riemannian manifold and let $E\in{\mathcal X}(M)$ be a vector field which satisfies $L_{E}(\tilde{g})=D\tilde{g}$, with $D$ constant. Then $$\tilde{g}(\tilde{R}_{Z,X}(E),Y)=\tilde{g}\left(\tilde{\nabla}_{X}(\tilde{\nabla}
E)_{Y},Z\right)$$ for every $X,Y,Z\in{\mathcal X}(M)$.
We take the covariant derivative with respect to $Z$ of the equality $$\tilde{g}(\tilde{\nabla}_{X}E,Y)+\tilde{g}(\tilde{\nabla}_{Y}E,X)=D\tilde{g}(X,Y),$$ we use $\tilde{\nabla}(\tilde{g})=0$ and then we take cyclic permutations of $X$, $Y$, $Z$. We obtain three relations. Subtracting the second and the third relations from the first one and using the symmetries of pseudo-Riemannian curvature tensors, we easily obtain the result.
Thus if ${\tilde R}=0$, then $E$ can be at most linear in the flat coordinates.
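In flat coordinates this is explicit (a routine consequence of the Lemma; the coordinate notation is ours):

```latex
% If \tilde{R} = 0, choose flat coordinates (x^1,...,x^n), so \tilde{\nabla}_i = \partial_i.
% The Lemma gives \tilde{\nabla}(\tilde{\nabla}E) = 0, i.e. \partial_i\partial_j E^k = 0, hence
$$E^{k}=a^{k}_{\ j}x^{j}+b^{k},\qquad a^{k}_{\ j},\ b^{k}\ \text{constant},$$
% with L_E(\tilde{g}) = D\tilde{g} forcing, after lowering an index with \tilde{g},
$$a_{ij}+a_{ji}=D\,\tilde{g}_{ij}.$$
```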
\[flat\] Let $(M,\cdot ,\tilde{g},E)$ be a weak $\F$-manifold as in Definition \[wF\]. Then for every $\alpha\in {\mathcal
E}^{1}(M)$ and $X,Y\in{\mathcal X}(M)$, the following relation holds: $$\begin{aligned}
R_{E\cdot X,E\cdot Y}(\alpha )-\tilde{R}_{E\cdot X,E\cdot
Y}(\alpha )&=\tilde{\nabla}_{E\cdot X}(\cdot )(T(\alpha )\cdot
(E^{\flat})^{-1}, E^{\flat}\cdot Y^{\flat})
-\tilde{\nabla}_{E\cdot Y}(\cdot )( T(\alpha )\cdot
(E^{\flat})^{-1},
E^{\flat}\cdot X^{\flat})\\
&+\tilde{\nabla}_{E\cdot X}(T)(\alpha )\cdot
Y^{\flat}-\tilde{\nabla}_{E\cdot Y}(T)(\alpha )\cdot X^{\flat}.\end{aligned}$$ In particular, $M$ is an $\F$-manifold if and only if $$R_{E\cdot X,E\cdot Y}(\alpha )-\tilde{R}_{E\cdot X,E\cdot
Y}(\alpha ) =\tilde{\nabla}_{E\cdot X}(T)(\alpha )\cdot
Y^{\flat}-\tilde{\nabla}_{E\cdot Y}(T)(\alpha )\cdot X^{\flat},$$ for every $\alpha\in {\mathcal E}^{1}(M)$ and $X,Y\in{\mathcal
X}(M)$.
Let $Q (\alpha ):=T(\alpha )\cdot (E^{\flat})^{-1}.$ Using relation (\[ec\]), we easily see that $$\begin{aligned}
R_{X,Y}(\alpha )-\tilde{R}_{X,Y}(\alpha )&= [\tilde{\nabla}_{X}(Q
(\alpha ))-Q(\tilde{\nabla}_{X}\alpha
)-Q( Q(\alpha )\cdot X^{\flat}) ]\cdot Y^{\flat}\\
&-[\tilde{\nabla}_{Y}(Q( \alpha ))-Q(\tilde{\nabla}_{Y}\alpha
)-Q( Q(\alpha )\cdot Y^{\flat}) ]\cdot X^{\flat}\\
&+\tilde{\nabla}_{X}(\cdot )(Q(\alpha
),Y^{\flat})-\tilde{\nabla}_{Y}(\cdot )(Q(\alpha ),X^{\flat})\\\end{aligned}$$ which becomes, after replacing $\alpha$ with $T^{-1}(\alpha )$, $X$, $Y$ with $E\cdot X$, $E\cdot Y$ respectively, the following relation: $$\begin{aligned}
R_{E\cdot X,E\cdot Y}(T^{-1}\alpha )-\tilde{R}_{E\cdot X,E\cdot
Y}(T^{-1}\alpha )&=\tilde{\nabla}_{E\cdot X}(\cdot )(\alpha\cdot
(E^{\flat})^{-1},E^{\flat}\cdot Y^{\flat}) \\
&-\tilde{\nabla}_{E\cdot Y}(\cdot )(\alpha\cdot
(E^{\flat})^{-1},E^{\flat}\cdot X^{\flat})\\
&+[\tilde{\nabla}_{E\cdot X}(\alpha\cdot (E^{\flat})^{-1})\cdot
E^{\flat}+\tilde{\nabla}_{\tilde{g}^{*}(\alpha )\cdot
X}(E^{\flat})]\cdot Y^{\flat}\\
&-[\tilde{\nabla}_{E\cdot Y}(\alpha\cdot (E^{\flat})^{-1})\cdot
E^{\flat}+\tilde{\nabla}_{\tilde{g}^{*}(\alpha )\cdot
Y}(E^{\flat})]\cdot X^{\flat}\\
&-Q(\tilde{\nabla}_{E\cdot X}(T^{-1}\alpha ))\cdot E^{\flat}\cdot
Y^{\flat}+Q(\tilde{\nabla}_{E\cdot Y}(T^{-1}\alpha ))\cdot
E^{\flat}\cdot X^{\flat}.\end{aligned}$$ Define now ${\mathcal A}(\alpha ,X):=\tilde{\nabla}_{E\cdot
X}(\alpha\cdot (E^{\flat})^{-1})\cdot E^{\flat}+
\tilde{\nabla}_{\tilde{g}^{*}(\alpha ) \cdot X}(E^{\flat}).$ Using the fact that $M$ is a weak $\F$-manifold, we easily get $$\begin{aligned}
{\mathcal A}(\alpha ,X)&=\tilde{\nabla}_{E\cdot X}(\alpha
)-\tilde{\nabla}_{E\cdot X}(\cdot )(E^{\flat},\alpha\cdot
(E^{\flat})^{-1})-\alpha\cdot
(E^{\flat})^{-1}\cdot\tilde{\nabla}_{E\cdot
X}(E^{\flat})+\tilde{\nabla}_{\tilde{g}^{*}(\alpha )\cdot X}(E^{\flat})\\
&=\tilde{\nabla}_{E\cdot X}(\alpha )-\tilde{\nabla}_{E}(\cdot
)(E^{\flat}\cdot X^{\flat},\alpha\cdot
(E^{\flat})^{-1})-\alpha\cdot
(E^{\flat})^{-1}\cdot\tilde{\nabla}_{E\cdot X}(E^{\flat})+\tilde{\nabla}_{\tilde{g}^{*}(\alpha )\cdot X}(E^{\flat})\\
&=\tilde{\nabla}_{E\cdot X}(\alpha
)-\tilde{\nabla}_{E}(\alpha\cdot
X^{\flat})+\tilde{\nabla}_{E}(E^{\flat}\cdot
X^{\flat})\cdot\alpha\cdot (E^{\flat})^{-1}+E^{\flat}\cdot
X^{\flat}\cdot \tilde{\nabla}_{E}(\alpha\cdot (E^{\flat})^{-1})\\
&-\alpha\cdot (E^{\flat})^{-1}\cdot \tilde{\nabla}_{E\cdot
X}(E^{\flat})+\tilde{\nabla}_{\tilde{g}^{*}(\alpha )\cdot
X}(E^{\flat})\\
&=\tilde{\nabla}_{E\cdot X}(\alpha )-\tilde{g}\left(
L_{E}(\tilde{g}^{*}(\alpha )\cdot X)\right) +\tilde{g}\left(
L_{E}(E\cdot X)\right)\cdot \alpha\cdot
(E^{\flat})^{-1}+E^{\flat}\cdot
X^{\flat}\cdot\tilde{\nabla}_{E}(\alpha\cdot
(E^{\flat})^{-1})\\
&=\tilde{\nabla}_{E\cdot X}(\alpha )+X^{\flat}\cdot h(\alpha ),\\\end{aligned}$$ where in the last equality we have used $L_{E}(\cdot )=k\cdot$ on $TM$, and $h(\alpha )$ is an expression which depends only on $\alpha .$ We thus obtain $$\begin{aligned}
R_{E\cdot X,E\cdot Y}(T^{-1}\alpha )-\tilde{R}_{E\cdot X,E\cdot
Y}(T^{-1}\alpha )=&\tilde{\nabla}_{E\cdot X}(\cdot )(\alpha\cdot
(E^{\flat})^{-1},E^{\flat}\cdot Y^{\flat}) -
\tilde{\nabla}_{E\cdot Y}(\cdot )(\alpha\cdot (E^{\flat})^{-1},E^{\flat}\cdot X^{\flat})\\
&+[\tilde{\nabla}_{E\cdot X}(\alpha )-Q(\tilde{\nabla}_{E\cdot X}(T^{-1}\alpha ))\cdot E^{\flat}]\cdot Y^{\flat}\\
&-[\tilde{\nabla}_{E\cdot Y}(\alpha )-Q(\tilde{\nabla}_{E\cdot Y}(T^{-1}\alpha ))\cdot E^{\flat}]\cdot X^{\flat}\\\end{aligned}$$ which easily implies the Theorem.
(see [@dubrovin]) Let $(g,\tilde{g})$ be a quasi-homogeneous pencil on $M$. Suppose that $\tilde{R}=0.$ Then the corresponding weak $\F$-manifold is a Frobenius manifold if and only if $R=0.$
This is just a consequence of Theorem \[flat\] and Lemma \[confkilling\]: since $\tilde{R}=0$, $\tilde{\nabla}(T)=0$ and $$R_{E\cdot X,E\cdot Y}(\alpha )=\tilde{\nabla}_{E\cdot X}(\cdot
)(T(\alpha )\cdot (E^{\flat})^{-1}, E^{\flat}\cdot Y^{\flat})
-\tilde{\nabla}_{E\cdot Y}(\cdot )( T(\alpha )\cdot
(E^{\flat})^{-1}, E^{\flat}\cdot X^{\flat})$$ for every $X,Y\in{\mathcal X}(M)$ and $\alpha\in{\mathcal
E}^{1}(M).$ It follows that $R=0$ is equivalent to the total symmetry of ${\tilde{\nabla}}(\cdot)\,.$ The other conditions of a Frobenius manifold are clearly satisfied.
Compatible metrics and submanifolds
===================================
Suppose now that the metrics $g$ and $\tilde{g}$ are compatible. Let $h$, $\tilde{h}$ be the metrics induced by $g$, $\tilde{g}$ on a submanifold $N$ of $M$. We assume that $h$, $\tilde{h}$, $h_{\lambda}$ are non-degenerate (although a theory of bi-Hamiltonian structures with degenerate metrics may be formulated [@G; @ian3]). Let $A:=\tilde{g}^{*}g$ and $B:=\tilde{h}^{*}h.$\
#### **Notations:**
We shall use the following conventions:
1. $TN^{\perp g}$ ($TN^{\perp\tilde{g}}$ respectively) will denote the orthogonal complement of $TN$ in $TM$, with respect to the metric $g$ (respectively $\tilde{g}$).
2. For a sub-bundle $V$ of $TM$, $V^{0}$ will denote the annihilator of $V$ in $T^{*}M.$
3. With respect to the orthogonal decomposition $TM=TN\oplus TN^{\perp\tilde{g}}$ any tangent vector $X\in TM$ will be written as $X=X^{t}+X^{n}.$
The following Lemma can be easily proved and justifies Definition \[dist\].
For every $X\in TN$, $B(X)=A(X)^{t}$. In particular, the following statements are equivalent:
1. For every $X\in TN$, $A(X)=B(X).$
2. $A(TN)\subset TN.$
3. The orthogonal complement of $TN$ with respect to the metrics $g$ and $\tilde{g}$ coincide.
\[dist\] The submanifold $N$ of $M$ is called distinguished if $A(TN)\subset TN$.
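A simple illustration (our example, under the assumption that $A$ is diagonalized by coordinates; it is not part of the text): coordinate submanifolds adapted to the eigen-directions of $A$ are distinguished.

```latex
% Suppose A = \tilde{g}^{*}g is diagonal in coordinates (x^1,...,x^n):
% A(\partial_i) = \lambda_i \partial_i. For the coordinate submanifold
$$N=\{x^{m+1}=c^{m+1},\dots ,x^{n}=c^{n}\},\qquad
TN=\mathrm{span}(\partial_{1},\dots ,\partial_{m}),$$
% every \partial_i \in TN is an eigenvector of A, so A(TN) \subset TN
% and N is distinguished in the sense of Definition [dist].
```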
From now on we restrict to the case when the submanifold $N$ is distinguished, and we denote by $TN^{\perp}$ the orthogonal complement of $TN$ in $TM$ with respect to the metric $g$ or $\tilde{g}$ (these coincide). For $\alpha\in{\mathcal E}^{1}(N)$, we will denote by $\bar{\alpha}\in{\mathcal E}^{1}(M)$ its extension to $TM$, which is zero on $TN^{\perp}.$\
It can be easily verified that the restrictions to $N$ of the metrics $g_{\lambda}$ generated by $(g,\tilde{g})$ coincide with the metrics $h_{\lambda}$ generated by $(h,\tilde{h})$. Since $A=B$ on $TN$, the metrics $h$ and $\tilde{h}$ are almost compatible. A natural problem which arises is to determine when they are compatible. For this, let $D$ (respectively $\tilde{D}$) be the Levi-Civita connections of $h$ and $\tilde{h}$. For every ${\alpha}\in{\mathcal E}^{1}(N)$ and $X\in{\mathcal X}(N)$ we decompose $\nabla_{X}\bar{\alpha}$ as $$\label{secondf}
\nabla_{X}\bar{\alpha} =\nabla_{X}(\bar{\alpha}
)^{t}+S_{X}(\bar{\alpha})$$ according to the decomposition $$T^{*}M=(TN^{\perp})^{0}+(TN)^{0}$$ of $T^{*}M.$ Note that, via the identifications $T^{*}N\cong
(TN^{\perp})^{0}$ and $(TN^{\perp})^{*}\cong (TN)^{0}$, $D_{X}({\alpha})=\nabla_{X}(\bar{\alpha} )^{t}$ and that the map $S:TN\times T^{*}N\rightarrow (TN)^{0}$, defined by $S_{X}({\alpha})=S_{X}(\bar{\alpha})$, is the second fundamental form of the submanifold $N$ of $(M,g).$ Similar facts hold for $\tilde{D}$ and the second fundamental form $\tilde{S}$ of the submanifold $N$ of $(M,\tilde{g}).$
The metrics $h$ and $\tilde{h}$ are compatible if and only if for every $\alpha ,\beta\in (TN^{\perp})^{0}$ and $X,Y\in TN$, the relation $$\label{submanifold}
\tilde{g}^{*}(\tilde{S}_{X}\alpha -S_{X}\alpha ,
\tilde{S}_{Y}\beta -S_{Y}\beta )=
\tilde{g}^{*}(\tilde{S}_{Y}\alpha -S_{Y}\alpha ,
\tilde{S}_{X}\beta -S_{X}\beta ),$$ holds.
This is just a consequence of relation (\[conditiecomp\]) and of the decomposition (\[secondf\]).
#### **Remark:**
The compatibility condition on the metrics $h$ and $\tilde{h}$ can also be written in terms of the multiplication $\circ$ from Definition \[multipl\]. Denote by $$\circ^{t}:T^{*}M\times T^{*}M\rightarrow (TN^{\perp})^{0}$$ and $$\circ^{n}:T^{*}M\times T^{*}M\rightarrow (TN)^{0}$$ the maps induced by $\circ$ and the orthogonal projections $T^{*}M\rightarrow (TN^{\perp})^{0}$, $T^{*}M\rightarrow
(TN)^{0}$. Let $\circ_{N}$ be the multiplication associated to the pair $(h,\tilde{h})$, as in Definition \[multipl\]. Then, for every $\alpha ,\beta\in{\mathcal E}^{1}(N)$, $$\alpha\circ_{N}\beta =D_{h^{*}\alpha}(\beta
)-\tilde{D}_{h^{*}\alpha}(\beta )
=\nabla_{g^{*}\bar{\alpha}}(\bar{\beta})^{t}-\tilde{\nabla}_{g^{*}\bar{\alpha}}(\bar{\beta})^{t}\\
=\bar{\alpha}\circ^{t}\bar{\beta}.$$ It follows that $h$ and $\tilde{h}$ are compatible if and only if for every $\bar{\alpha},\bar{\beta},\bar{\gamma}\in (TN^{\perp})^{0}$, $$(\bar{\alpha}\circ^{t}\bar{\beta})\circ^{t}\bar{\gamma}=
(\bar{\alpha}\circ^{t}\bar{\gamma})\circ^{t}\bar{\beta}.$$
Submanifolds of weak $\F$-manifolds
===================================
In this Section we consider a weak $\F$-manifold $(M,\cdot
,\tilde{g},E)$ with $L_{E}(\tilde{g})=D\tilde{g}$ and $L_{E}(\cdot
)=k\cdot .$ Let $N$ be a submanifold of $M$ which satisfies the following two properties:
1. For every $X$, $Y$ in $TN$, $X\cdot Y$ belongs to $TN.$
2. The Euler vector field $E$ satisfies $$\label{euler}
E\cdot TN\subset TN.$$
We note that any natural submanifold [@ian] of an $\F$-manifold satisfies these conditions, but in our case the vector field $E$ is not necessarily tangent to $N$ along $N.$ We shall denote by $TN^{\perp}$ the orthogonal complement of $TN$ in $TM$, with respect to the metric $\tilde{g}$ and by $P:TM\rightarrow TN^{\perp}$ the orthogonal projection.
\[verysimple\] For every $X\in TN$ and $Y\in TM$, $X\cdot P(Y)=P(X\cdot Y).$
Since $\tilde{g}$ is “$\cdot$”-invariant, the operator $X\cdot$ of $TM$ is self-adjoint (with respect to the metric $\tilde{g}$). Since it preserves $TN$, it will preserve $TN^{\perp}$ as well. The conclusion follows.
Recall now that on a weak $\F$-manifold we have considered a second metric $g$, defined by $g^{*}\tilde{g}=E\cdot .$
The metrics $g$ and $\tilde{g}$ induce compatible metrics on $N$.
The almost compatibility is obvious from relation (\[euler\]). Let $P^{*}:T^{*}M\rightarrow (TN)^{0}$ be the orthogonal projection with respect to the metric $\tilde{g}^{*}.$ Relation (\[ec\]) together with Lemma \[verysimple\] imply that, for every $\alpha ,\beta\in (TN^{\perp})^{0}$, $S_{X}\alpha
-\tilde{S}_{X}\alpha =\frac{1}{2}P^{*}Q(\alpha )\cdot X^{\flat}$, where, we recall, $Q(\alpha )=\frac{1}{2}\left( (D+k)\alpha
-2\tilde{\nabla}_{\tilde{g}^{*}\alpha}(E^{\flat})\right)\cdot
(E^{\flat})^{-1}.$ It is now obvious that relation (\[submanifold\]) is satisfied.
Appendix: The semi-simple case {#appendix-the-semi-simple-case .unnumbered}
==============================
Recall that we have associated to the pair $(g,\tilde{g})$ an endomorphism $N_{A}$ of $TM$ (see Theorem \[ac\]).
The pair $(g,\tilde{g})$ is semi-simple if the eigenvalues of the tensor $N_{A}$ are distinct at every point.
\[semisimple\] Any semi-simple pair of almost compatible metrics is compatible.
If $(g,\tilde{g})$ is a semi-simple pair, we can find coordinates $(x_{1},\cdots ,x_{n})$ on $M$ such that the tensor $A:=\tilde{g}^{*}g$ is diagonal: $\tilde{g}^{*}dx_{i}=\lambda_{i}g^{*}dx_{i}$, for $i=\overline{1,n}$ and moreover, both $g$ and $\tilde{g}$ are diagonal: $g^{*}(dx_{i},dx_{j})=\delta_{ij}{g}^{ii}; \quad
\tilde{g}^{*}(dx_{i},dx_{j})=\delta_{ij}\tilde{g}^{ii}$, for $i,j=\overline{1,n}$, for some smooth functions $\lambda_{i}\,,g^{ii}$ and $\tilde{g}^{ii}$. The almost compatibility condition implies that the functions $\lambda_{i}$ depend only on the coordinate $x_{i}$ (see [@mok]). Using the formula for the Christoffel symbols in a chart and the fact that $\lambda_{i}$ depend only on the coordinate $x_{i}$, we easily see that $\tilde{\Gamma}^{j}_{ik}-\Gamma^{j}_{ik}=0$ for $i\neq k$ and every $j$. Then $$\tilde{\nabla}_{\frac{\partial~}{\partial x_{i}}}dx_{j}-
\nabla_{\frac{\partial~}{\partial x_{i}}}dx_{j}
=-\left(\tilde{\Gamma}_{ii}^{j}-\Gamma_{ii}^{j}\right) dx_{i}$$ which implies $$\begin{aligned}
\tilde{g}^{*}\left(\tilde{\nabla}_{\frac{\partial~}{\partial
x_{i}}}dx_{j}-\nabla_{\frac{\partial~}{\partial x_{i}}}dx_{j},
\tilde{\nabla}_{\frac{\partial~}{\partial x_{k}}
}dx_{n}-\nabla_{\frac{\partial~}{\partial x_{k}}}dx_{n}\right) &=
(\tilde{\Gamma}_{ii}^{j}-\Gamma_{ii}^{j})
(\tilde{\Gamma}_{kk}^{n}-\Gamma_{kk}^{n})\tilde{g}^{*}(dx_{i},dx_{k})\\
&=\delta_{ik}\tilde{g}^{ii}
(\tilde{\Gamma}_{ii}^{j}-\Gamma_{ii}^{j})
(\tilde{\Gamma}_{kk}^{n}-\Gamma_{kk}^{n})\,.\end{aligned}$$ This expression is obviously symmetric in $i$ and $k.$
#### **Acknowledgements**

Financial support was provided by the EPSRC (grant GR/R05093).
[99]{}
Dubrovin, B. and Novikov, S.P., [*Hydrodynamics of weakly deformed soliton lattices. Differential geometry and Hamiltonian theory,*]{} Russian Math. Surveys [**44**]{} (1989), no. 6, 35–124.
Dubrovin, B., [*Flat pencils of metrics and Frobenius manifolds*]{} in [*Integrable systems and algebraic geometry*]{} (Kobe/Kyoto, 1997), 47–72, World Sci. Publishing, River Edge, NJ, 1998.
Dubrovin, B., [*On almost duality for Frobenius manifolds*]{}, math.DG/0307374.
Ferapontov, E.V., [*Compatible Poisson brackets of hydrodynamic type*]{}, J. Phys. [**A34**]{} (2001), 2377–2388.
Magri, F., [*A simple model of the integrable Hamiltonian equation,*]{} J. Math. Phys. [**19**]{} (1978), 1156–1162.
Manin, Yu., [*F-manifolds with flat structure and Dubrovin’s duality,*]{} math.DG/0402451.
Mokhov, O.I., [*Compatible flat metrics*]{}, J. Appl. Math. [**2**]{} (2002), no. 7, 337–370.
Strachan, I.A.B., [*Frobenius submanifolds*]{}, Journal of Geometry and Physics [**38**]{} (2001), 285–307.

Strachan, I.A.B., [*Frobenius manifolds: natural submanifolds and induced bi-hamiltonian structures*]{}, Differential Geometry and its Applications [**20**]{} (2004), 67–99.

Strachan, I.A.B., [*Degenerate Frobenius manifolds and the bi-Hamiltonian structure of rational Lax equations*]{}, J. Math. Phys. [**40**]{} (1999), 5058–5079.

Vaisman, I., [*Sur quelques formules du calcul de Ricci global*]{}, Comment. Math. Helv. [**41**]{} (1966/67), 73–87.
[Authors’ addresses:]{}
-------------------- ------------------------------------------------------------------------
Liana David: (Permanent address): Institute of Mathematics of the Romanian Academy,
Calea Grivitei nr 21, Bucharest, Romania;
e-mail: lili@mail.dnttm.ro
(Present address): Department of Mathematics, University of Glasgow,
Glasgow G12 8QW;
e-mail: l.david@maths.gla.ac.uk
Ian A.B. Strachan: Department of Mathematics, University of Glasgow,
Glasgow G12 8QW;
e-mail: i.strachan@maths.gla.ac.uk
-------------------- ------------------------------------------------------------------------
---
abstract: 'A method to approximate continuous multi-dimensional probability density functions (PDFs) using their projections and correlations is described. The method is particularly useful for event classification when estimates of systematic uncertainties are required and for the application of an unbinned maximum likelihood analysis when an analytic model is not available. A simple goodness of fit test of the approximation can be used, and simulated event samples that follow the approximate PDFs can be efficiently generated. The source code for a FORTRAN-77 implementation of this method is available.'
address: |
Ottawa-Carleton Institute for Physics\
Department of Physics, Carleton University\
Ottawa, Canada K1S 5B6
author:
- 'Dean Karlen[^1]'
date: 'May 7, 1998'
title: |
Using projections and correlations to\
approximate probability distributions
---
Introduction
============
Visualization of multi-dimensional distributions is often performed by examining single variable distributions (that is, one-dimensional projections) and linear correlation coefficients amongst the variables. This can be adequate when the sample size is small, the distribution consists of essentially uncorrelated variables, or when the correlations between the variables are approximately linear. This paper describes a method to approximate multi-dimensional distributions in this manner and its applications in data analysis.
The method described in this paper, the Projection and Correlation Approximation (PCA), is particularly useful in analyses which make use of either simulated or control event samples. In particle physics, for example, such samples are used to develop algorithms that efficiently select events of one type while preferentially rejecting events of other types. The algorithm can be as simple as a set of criteria on quantities directly measured in the experiment or as complex as an application of an artificial neural network [@REF:ann] on a large number of observables. The more complex algorithm may result in higher efficiency and purity, but its systematic uncertainties can be difficult to estimate. The PCA method can be used to define a sophisticated selection algorithm with good efficiency and purity, in a way that systematic uncertainties can be reliably estimated.
Another application of the PCA method is in parameter estimation from a data set using a maximum likelihood technique. If the information available is in the form of simulated event samples, it can be difficult to apply an unbinned maximum likelihood method, because it requires a functional representation of the multidimensional probability density function (PDF). The PCA method can be used to approximate the PDFs required for the maximum likelihood method. A simple goodness of fit test is available to determine if the approximation is valid.
To verify the statistical uncertainty of an analysis, it can be useful to create a large ensemble of simulated samples, each sample equivalent in size to the data set being analyzed. In cases where this is not practical because of limited computing resources, the approximation developed in the PCA method can be used, as it is in a form that leads to an efficient method for event generation.
In the following sections, the projection and correlation approximation will be described along with its applications. An example data analysis using the PCA method is shown.
Projection and correlation approximation {#SECT:pca}
========================================
Consider an arbitrary probability density function ${\cal P}(\bf x)$ of $n$ variables, $x_i$. The basis for the approximation of this PDF using the PCA approach is the $n$-dimensional Gaussian distribution, centered at the origin, which is described by an $n\times n$ covariance matrix, $V$, by $$G({\bf y}) = (2\pi)^{-n/2}\,\vert V\vert^{-1/2}
\exp\left({-{{\textstyle{1\over2}}}\, {\bf y}^T\, V^{-1}\, {\bf y}}
\right)
\label{EQ:ngaus}$$ where $\vert V \vert$ is the determinant of $V$. The variables ${\bf x}$ are not, in general, Gaussian distributed so this formula would be a poor approximation of the PDF, if used directly. Instead, the PCA method uses parameter transformations, $y_i(x_i)$, such that the individual distributions for $y_i$ are Gaussian and, as a result, the $n$-dimensional distribution for ${\bf y}$ may be well approximated by Eq. (\[EQ:ngaus\]).
The monotonic function $y(x)$ that transforms a variable $x$, having a distribution function $p(x)$, to the variable $y$, which follows a Gaussian distribution of mean 0 and variance 1, is $$y(x)=\sqrt{2}\,{\rm erf}^{-1}\left(2F(x)-1 \right)
\label{EQ:transform}$$ where erf$^{-1}$ is the inverse error function and $F(x)$ is the cumulative distribution of $x$, $$F(x) = {\int_{x_{\rm min}}^x p(x')\,dx' \over
\int_{x_{\rm min}}^{x_{\rm max}} p(x')\,dx'} \ \ .
\label{EQ:cumul}$$
The resulting $n$-dimensional distribution for [**y**]{} will not, in general, be an $n$-dimensional Gaussian distribution. It is only guaranteed that the projections of this distribution onto each $y_i$ axis are Gaussian. In the PCA approximation, however, the probability density function of [**y**]{} is assumed to be Gaussian. Although not exact, this can represent a good approximation of a multi-dimensional distribution in which the correlation of the variables is relatively simple.
Written in terms of the projections, $p_i(x_i)$, the approximation of ${\cal P}({\bf x})$ using the PCA method is, $$P({\bf x})= \vert V \vert^{-1/2}
\exp\left(-{{\textstyle{1\over2}}}\,{\bf y}^T\,(V^{-1}-I)\,{\bf y}
\right)
\prod_{i=1}^n p_i(x_i)
\label{EQ:pca}$$ where $V$ is the covariance matrix for [**y**]{} and $I$ is the identity matrix. To approximate the projections, $p_i(x_i)$, needed in Eqs.(\[EQ:cumul\]) and (\[EQ:pca\]), binned frequency distributions (histograms) of $x_i$ can be used.
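For concreteness, Eq. (\[EQ:pca\]) reduces to a closed form in two dimensions. The sketch below (illustrative; function name hypothetical) evaluates it given the transformed coordinates $(y_1,y_2)$, their linear correlation $\rho$, and the projection densities already evaluated at $(x_1,x_2)$:

```python
import math

def pca_pdf_2d(y1, y2, rho, p1, p2):
    """Evaluate Eq. (4) for n = 2, where V = [[1, rho], [rho, 1]].

    |V| = 1 - rho^2 and V^{-1} - I = [[rho^2, -rho], [-rho, rho^2]] / |V|,
    so the quadratic form has a simple closed form.
    """
    det = 1.0 - rho * rho
    q = (rho * rho * (y1 * y1 + y2 * y2) - 2.0 * rho * y1 * y2) / det
    return det ** -0.5 * math.exp(-0.5 * q) * p1 * p2
```

For $\rho=0$ the correction factor is unity and the approximation reduces to the product of projections, as stated above for uncorrelated variables.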
The projection and correlation approximation is exact for distributions with uncorrelated variables, in which case $V=I$. It is also exact for a Gaussian distribution modified by monotonic one-dimensional variable transformations for any number of variables; or equivalently, multiplication by a non-negative separable function.
A large variety of distributions can be well approximated by the PCA method. However, there are distributions for which this will not be true. For the PCA method to yield a good approximation in two dimensions, the correlation between the two variables must have the same sign in all regions. If the space can be split into regions, inside of which the correlation has everywhere the same sign, then the PCA method can be used on each region separately. To determine if a distribution is well approximated by the PCA method, a goodness of fit test can be applied, as described in the next section.
The generation of simulated event samples that follow the PCA PDF is straightforward and efficient. Events are generated in $y$ space, according to Eq. (\[EQ:ngaus\]), and then are transformed to the $x$ space. The procedure involves no rejection of trial events, and is therefore fully efficient.
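The generation procedure of this paragraph can be sketched as follows for $n=2$ (names hypothetical, assuming the control samples themselves define the inverse CDFs): draw $\bf y$ according to Eq. (\[EQ:ngaus\]) via a Cholesky factor of $V$, then map each coordinate back through the inverse empirical CDF of the corresponding projection.

```python
import random
import statistics

def generate_events(sample1, sample2, rho, n_events, seed=42):
    """Generate events following the 2-D PCA PDF.

    Draw y from a 2-D Gaussian with unit variances and correlation rho
    (Cholesky factorization of V), then set x_i = F_i^{-1}(Phi(y_i)) using
    the empirical quantiles of the control-sample projections.  No trial
    events are rejected, so the generator is fully efficient.
    """
    xs1, xs2 = sorted(sample1), sorted(sample2)
    nd = statistics.NormalDist()
    rng = random.Random(seed)
    events = []
    for _ in range(n_events):
        g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
        y1 = g1
        y2 = rho * g1 + (1.0 - rho * rho) ** 0.5 * g2  # Cholesky factor of V
        x1 = xs1[min(int(nd.cdf(y1) * len(xs1)), len(xs1) - 1)]
        x2 = xs2[min(int(nd.cdf(y2) * len(xs2)), len(xs2) - 1)]
        events.append((x1, x2))
    return events
```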
Goodness of fit test {#SECT:goodness}
====================
Some applications of the PCA method do not require that the PDFs be particularly well approximated. For example, to estimate the purity and efficiency of event classification, it is only necessary that the simulated or control samples are good representations of the data. Other applications, such as its use in maximum likelihood analyses, require the PDF to be a good approximation, in order that the estimators are unbiased and that the estimated statistical uncertainties are valid. Therefore it may be important to check that the approximate PDF derived with the PCA method is adequate for a given problem.
In general, when approximating a multidimensional distribution from a sample of events, it can be difficult to derive a goodness of fit statistic, like a $\chi^2$ statistic. This is because the required multidimensional binning can reduce the average number of events per bin to a very small number, much less than 1.
When the PCA method is used, however, it is easy to form a statistic to test if a sample of events follows the PDF, without slicing the variable space into thousands of bins. The PCA method already ensures that the projections of the approximate PDF will match that of the event sample. A statistic that is sensitive to the correlation amongst the variables is most easily defined in the space of transformed variables, $y$, where the approximate PDF is an $n$-dimensional Gaussian. For each event the value $X^2$ is calculated, $$X^2 = {\bf y}^T\,V^{-1}\,{\bf y} \ \ ,$$ and if the events follow the PDF, the $X^2$ values will follow a $\chi^2$ distribution with $n$ degrees of freedom, where $n$ is the dimension of the Gaussian. A probability weight, $w$, can therefore be formed, $$w(X^2) = \int_{X^2}^\infty\,\chi^2(t,n)\,dt \ \ ,$$ which will be uniformly distributed between 0 and 1, if the events follow the PDF. The procedure can be thought of in terms of dividing the $n$-dimensional $y$ space into layers centered about the origin (and whose boundaries are at constant probability in $y$ space) and checking that the right number of events appears in each layer. The goodness of fit test for the PCA distribution is therefore reduced to a test that the $w$ distribution is uniform.
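For the two-dimensional case used in the example analysis below, the per-event statistic and weight take a particularly simple form, since the $\chi^2$ tail probability with $n=2$ degrees of freedom is $e^{-X^2/2}$ (a minimal sketch, with a hypothetical function name):

```python
import math

def fit_weight_2d(y1, y2, rho):
    """Per-event goodness-of-fit weight for n = 2.

    X^2 = y^T V^{-1} y with V = [[1, rho], [rho, 1]], and
    w = P(chi^2_2 > X^2) = exp(-X^2 / 2).  If the events follow the PCA PDF,
    the w values are uniformly distributed on (0, 1).
    """
    x2 = (y1 * y1 - 2.0 * rho * y1 * y2 + y2 * y2) / (1.0 - rho * rho)
    return math.exp(-0.5 * x2)
```

A flatness test on the resulting $w$ values (for example a one-dimensional $\chi^2$ comparison to a uniform histogram) then completes the goodness of fit test.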
When the goodness of fit test shows that the event sample is not well described by the projection and correlation approximation, further steps may be necessary before the PCA method can be applied to an analysis. To identify correlations which are poorly described, the goodness of fit test can be repeated for each pair of variables. If the test fails for a pair of variables, it may be possible to improve the approximation by modifying the choice of variables used in the analysis, or by treating different regions of variable space by separate approximations.
Event classification {#SECT:class}
====================
Given two categories of events that follow the PDFs ${\cal P}_1({\bf x})$ and ${\cal P}_2({\bf x})$, the optimal event classification scheme to define a sample enriched in type 1 events, selects events having the largest values for the ratio of probabilities, $R={\cal P}_1({\bf x})/{\cal P}_2({\bf x})$. Using simulated or control samples, the PCA method can be used to define the approximate PDFs $P_1({\bf x})$ and $P_2({\bf x})$, and in order to define a quantity limited to the range $[0,1]$, it is useful to define a likelihood ratio $${\cal L}={ P_1({\bf x}) \over
P_1({\bf x}) + P_2({\bf x}) } \ .
\label{EQ:probrat}$$ With only two categories of events, it is irrelevant if the PDFs $P_1$ and $P_2$ are renormalized to their relative abundances in the data set. The generalization to more than two categories of events requires that the PDFs $P_i$ be renormalized to their abundances. In either case, each event is classified on the basis of whether or not the value of ${\cal L}$ for that event is larger than some critical value.
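As a minimal illustration (names hypothetical), the classification step reduces to:

```python
def likelihood_ratio(P1, P2):
    """Eq. (7): classifier output in [0, 1] from the two approximate PDF values."""
    return P1 / (P1 + P2)

def classify_as_signal(P1, P2, cut=0.5):
    """Assign an event to category 1 when the ratio exceeds the critical value."""
    return likelihood_ratio(P1, P2) > cut
```

The critical value `cut` is tuned on the control samples, for example to optimize the signal to noise as in the example analysis below.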
Systematic errors in the estimated purity and efficiency of event classification can result if the simulated (or control) samples do not follow the true PDFs. To estimate the systematic uncertainties of the selection, the projections and covariance matrices used to define the PCA PDFs can be varied over suitable ranges.
Example application
===================
In this section the PCA method and its applications are demonstrated with simple analyses of simulated event samples. Two samples, one labeled signal and the other background, are generated with, $x_1\in(0,10)$ and $x_2\in(0,1)$, according to the distributions, $$\begin{aligned}
d_s(x_1,x_2)&=&{\displaystyle{{(x_1-a_1)^2+a_2 \over
(a_3(x_1-a_4(1+a_5x_2))^4+a_6)((x_2-a_7)^4+a_8)}}} \cr
& & \cr
d_b(x_1,x_2)&=&{\displaystyle{{1\over(b_1(x_1+x_2)^2+b_2x_2^3+b_3)}}}
\label{EQ:mcdist}\end{aligned}$$ where the vectors of constants are given by [**a**]{}$=(7,2,6,4,0.8,40,0.6,2)$ and [**b**]{}$=(0.1,3,0.1)$. These samples of 4000 events each correspond to simulated or control samples used in the analysis of a data set. In what follows it is assumed that the analytic forms of the parent distributions, Eq. (\[EQ:mcdist\]), are unknown.
The signal and background control samples are shown in Fig. \[FIG:sigcontrol\] and Fig. \[FIG:backcontrol\] respectively. A third sample, considered to be data and shown in Fig. \[FIG:data\], is formed by mixing a further 240 events generated according to $d_s$ and 160 events generated according to $d_b$.
The transformation given in Eq. (\[EQ:transform\]) is applied to the signal control sample, which results in the distribution shown in Fig. \[FIG:sigtrans\]. To define the transformation, the projections shown in Fig. \[FIG:sigcontrol\] are used, 40 bins for each dimension. The projections of the transformed distribution are Gaussian, and the correlation coefficient is found to be 0.40. The goodness of fit test, described in section \[SECT:goodness\], checks the assumption that the transformed distribution is a 2-dimensional Gaussian. The resulting $w(X^2)$ distribution from this test is relatively uniform, as shown in Fig. \[FIG:wdist\].
A separate transformation of the background control sample gives the distribution shown in Fig. \[FIG:backtrans\], which has a correlation coefficient of 0.03. Note that a small linear correlation coefficient does not necessarily imply that the variables are uncorrelated. In this case the 2-dimensional distribution is well described by a 2-dimensional Gaussian, as shown in Fig. \[FIG:wdist\].
Since the PCA method gives a relatively good approximation of the signal and background probability distributions, an efficient event classification scheme can be developed, as described in section \[SECT:class\]. Care needs to be taken, however, so that the estimation of the overall efficiency and purity of the selection is not biased. In this example, the approximate signal PDF is defined by 81 parameters (two projections of 40 bins, and one correlation coefficient) derived from the 4000 events in the signal control sample. These parameters will be sensitive to the statistical fluctuations in the control sample, and thus if the same control sample is used to optimize the selection and estimate the efficiency and purity, the estimates may be biased. To reduce this bias, additional samples are generated with the method described at the end of section \[SECT:pca\]. These samples are used to define the 81 parameters, and the event classification scheme is applied to the original control samples to estimate the purity and efficiency. In this example data analysis, the bias is small. When the original control sample is used to define the 81 parameters, the optimal signal to noise is achieved with an efficiency of $0.880$ and purity of $0.726$. When the PCA generated samples are used instead, the selection efficiency is reduced to $0.873$, for the same purity.
When the classification scheme is applied to the data sample, 261 events are classified as signal events. Given the efficiency and purity quoted above, the number of signal events in the sample is estimated to be $217 \pm 19$.
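The counting estimate quoted above follows from correcting the selected count by the purity and efficiency, $0.726\times261/0.873\approx217$; as a one-line sketch (function name hypothetical):

```python
def signal_estimate(n_selected, efficiency, purity):
    """Counting-method estimate of the number of signal events in the sample:
    purity * n_selected of the selected events are signal, and the selection
    kept a fraction `efficiency` of all signal events."""
    return purity * n_selected / efficiency
```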
The number of signal events in the data sample can be more accurately determined by using a maximum likelihood analysis. The likelihood function is defined by $$L = \prod_{j=1}^{400} (f_s\, P_s({\bf x}_j) + (1-f_s)\,P_b({\bf x}_j))
\label{EQ:likelihood}$$ where the product runs over the 400 data events, $f_s$ is the fraction of events attributed to signal, and $P_s$ and $P_b$ are the PCA approximated PDFs, defined by Eq. (\[EQ:pca\]). The signal fraction, estimated by maximizing the likelihood, is $0.617 \pm 0.040$, a relative uncertainty of 6.4% compared to the 8.5% uncertainty from the counting method. To check that the data sample is well described by the model used to define the likelihood function, Eq. (\[EQ:likelihood\]), the ratio of probabilities, Eq. (\[EQ:probrat\]), is shown in Fig. \[FIG:datatest\], and compared to a mixture of PCA generated signal and background samples.
Fortran Implementation
======================
The source code for a FORTRAN-77 implementation of the methods described in this paper is available from the author. The program was originally developed for use in an analysis of data from OPAL, a particle physics experiment located at CERN, and makes use of the CERNLIB library [@REF:cernlib]. An alternate version is also available, in which the calls to CERNLIB routines are replaced by calls to equivalent routines from NETLIB [@REF:netlib].
References to artificial neural networks are numerous. One source with a focus on applications in High Energy Physics is:\
[http://www.cern.ch/NeuralNets/nnwInHep.html]{}. Information on CERNLIB is available from:\
[http://wwwinfo.cern.ch/asd/index.html]{}. Netlib is a collection of mathematical software, papers, and databases found at\
[http://www.netlib.org]{}.
[^1]: E-mail: karlen@physics.carleton.ca
---
abstract: 'We report high statistics measurements of inclusive charged hadron production in Au+Au and p+p collisions at $\sqrt{s_{NN}}=200$ GeV. A large, approximately constant hadron suppression is observed in central Au+Au collisions for $5<p_T<12$ GeV/c. The collision energy dependence of the yields and the centrality and $p_T$ dependence of the suppression provide stringent constraints on theoretical models of suppression. Models incorporating initial-state gluon saturation or partonic energy loss in dense matter are largely consistent with observations. We observe no evidence of $p_T$-dependent suppression, which may be expected from models incorporating jet attenuation in cold nuclear matter or scattering of fragmentation hadrons.'
author:
- 'J. Adams'
- 'C. Adler'
- 'M.M. Aggarwal'
- 'Z. Ahammed'
- 'J. Amonett'
- 'B.D. Anderson'
- 'M. Anderson'
- 'D. Arkhipkin'
- 'G.S. Averichev'
- 'S.K. Badyal'
- 'J. Balewski'
- 'O. Barannikova'
- 'L.S. Barnby'
- 'J. Baudot'
- 'S. Bekele'
- 'V.V. Belaga'
- 'R. Bellwied'
- 'J. Berger'
- 'B.I. Bezverkhny'
- 'S. Bhardwaj'
- 'P. Bhaskar'
- 'A.K. Bhati'
- 'H. Bichsel'
- 'A. Billmeier'
- 'L.C. Bland'
- 'C.O. Blyth'
- 'B.E. Bonner'
- 'M. Botje'
- 'A. Boucham'
- 'A. Brandin'
- 'A. Bravar'
- 'R.V. Cadman'
- 'X.Z. Cai'
- 'H. Caines'
- 'M. Calderón de la Barca Sánchez'
- 'J. Carroll'
- 'J. Castillo'
- 'M. Castro'
- 'D. Cebra'
- 'P. Chaloupka'
- 'S. Chattopadhyay'
- 'H.F. Chen'
- 'Y. Chen'
- 'S.P. Chernenko'
- 'M. Cherney'
- 'A. Chikanian'
- 'B. Choi'
- 'W. Christie'
- 'J.P. Coffin'
- 'T.M. Cormier'
- 'J.G. Cramer'
- 'H.J. Crawford'
- 'D. Das'
- 'S. Das'
- 'A.A. Derevschikov'
- 'L. Didenko'
- 'T. Dietel'
- 'X. Dong'
- 'J.E. Draper'
- 'K.A. Drees'
- 'F. Du'
- 'A.K. Dubey'
- 'V.B. Dunin'
- 'J.C. Dunlop'
- 'M.R. Dutta Majumdar'
- 'V. Eckardt'
- 'L.G. Efimov'
- 'V. Emelianov'
- 'J. Engelage'
- 'G. Eppley'
- 'B. Erazmus'
- 'P. Fachini'
- 'V. Faine'
- 'J. Faivre'
- 'R. Fatemi'
- 'K. Filimonov'
- 'P. Filip'
- 'E. Finch'
- 'Y. Fisyak'
- 'D. Flierl'
- 'K.J. Foley'
- 'J. Fu'
- 'C.A. Gagliardi'
- 'M.S. Ganti'
- 'N. Gagunashvili'
- 'J. Gans'
- 'L. Gaudichet'
- 'M. Germain'
- 'F. Geurts'
- 'V. Ghazikhanian'
- 'P. Ghosh'
- 'J.E. Gonzalez'
- 'O. Grachov'
- 'V. Grigoriev'
- 'S. Gronstal'
- 'D. Grosnick'
- 'M. Guedon'
- 'S.M. Guertin'
- 'A. Gupta'
- 'E. Gushin'
- 'T.D. Gutierrez'
- 'T.J. Hallman'
- 'D. Hardtke'
- 'J.W. Harris'
- 'M. Heinz'
- 'T.W. Henry'
- 'S. Heppelmann'
- 'T. Herston'
- 'B. Hippolyte'
- 'A. Hirsch'
- 'E. Hjort'
- 'G.W. Hoffmann'
- 'M. Horsley'
- 'H.Z. Huang'
- 'S.L. Huang'
- 'T.J. Humanic'
- 'G. Igo'
- 'A. Ishihara'
- 'P. Jacobs'
- 'W.W. Jacobs'
- 'M. Janik'
- 'I. Johnson'
- 'P.G. Jones'
- 'E.G. Judd'
- 'S. Kabana'
- 'M. Kaneta'
- 'M. Kaplan'
- 'D. Keane'
- 'J. Kiryluk'
- 'A. Kisiel'
- 'J. Klay'
- 'S.R. Klein'
- 'A. Klyachko'
- 'D.D. Koetke'
- 'T. Kollegger'
- 'A.S. Konstantinov'
- 'M. Kopytine'
- 'L. Kotchenda'
- 'A.D. Kovalenko'
- 'M. Kramer'
- 'P. Kravtsov'
- 'K. Krueger'
- 'C. Kuhn'
- 'A.I. Kulikov'
- 'A. Kumar'
- 'G.J. Kunde'
- 'C.L. Kunz'
- 'R.Kh. Kutuev'
- 'A.A. Kuznetsov'
- 'M.A.C. Lamont'
- 'J.M. Landgraf'
- 'S. Lange'
- 'C.P. Lansdell'
- 'B. Lasiuk'
- 'F. Laue'
- 'J. Lauret'
- 'A. Lebedev'
- 'R. Lednický'
- 'V.M. Leontiev'
- 'M.J. LeVine'
- 'C. Li'
- 'Q. Li'
- 'S.J. Lindenbaum'
- 'M.A. Lisa'
- 'F. Liu'
- 'L. Liu'
- 'Z. Liu'
- 'Q.J. Liu'
- 'T. Ljubicic'
- 'W.J. Llope'
- 'H. Long'
- 'R.S. Longacre'
- 'M. Lopez-Noriega'
- 'W.A. Love'
- 'T. Ludlam'
- 'D. Lynn'
- 'J. Ma'
- 'Y.G. Ma'
- 'D. Magestro'
- 'S. Mahajan'
- 'L.K. Mangotra'
- 'D.P. Mahapatra'
- 'R. Majka'
- 'R. Manweiler'
- 'S. Margetis'
- 'C. Markert'
- 'L. Martin'
- 'J. Marx'
- 'H.S. Matis'
- 'Yu.A. Matulenko'
- 'T.S. McShane'
- 'F. Meissner'
- 'Yu. Melnick'
- 'A. Meschanin'
- 'M. Messer'
- 'M.L. Miller'
- 'Z. Milosevich'
- 'N.G. Minaev'
- 'C. Mironov'
- 'D. Mishra'
- 'J. Mitchell'
- 'B. Mohanty'
- 'L. Molnar'
- 'C.F. Moore'
- 'M.J. Mora-Corral'
- 'V. Morozov'
- 'M.M. de Moura'
- 'M.G. Munhoz'
- 'B.K. Nandi'
- 'S.K. Nayak'
- 'T.K. Nayak'
- 'J.M. Nelson'
- 'P. Nevski'
- 'V.A. Nikitin'
- 'L.V. Nogach'
- 'B. Norman'
- 'S.B. Nurushev'
- 'G. Odyniec'
- 'A. Ogawa'
- 'V. Okorokov'
- 'M. Oldenburg'
- 'D. Olson'
- 'G. Paic'
- 'S.U. Pandey'
- 'S.K. Pal'
- 'Y. Panebratsev'
- 'S.Y. Panitkin'
- 'A.I. Pavlinov'
- 'T. Pawlak'
- 'V. Perevoztchikov'
- 'W. Peryt'
- 'V.A. Petrov'
- 'S.C. Phatak'
- 'R. Picha'
- 'M. Planinic'
- 'J. Pluta'
- 'N. Porile'
- 'J. Porter'
- 'A.M. Poskanzer'
- 'M. Potekhin'
- 'E. Potrebenikova'
- 'B.V.K.S. Potukuchi'
- 'D. Prindle'
- 'C. Pruneau'
- 'J. Putschke'
- 'G. Rai'
- 'G. Rakness'
- 'R. Raniwala'
- 'S. Raniwala'
- 'O. Ravel'
- 'R.L. Ray'
- 'S.V. Razin'
- 'D. Reichhold'
- 'J.G. Reid'
- 'G. Renault'
- 'F. Retiere'
- 'A. Ridiger'
- 'H.G. Ritter'
- 'J.B. Roberts'
- 'O.V. Rogachevski'
- 'J.L. Romero'
- 'A. Rose'
- 'C. Roy'
- 'L.J. Ruan'
- 'V. Rykov'
- 'R. Sahoo'
- 'I. Sakrejda'
- 'S. Salur'
- 'J. Sandweiss'
- 'I. Savin'
- 'J. Schambach'
- 'R.P. Scharenberg'
- 'N. Schmitz'
- 'L.S. Schroeder'
- 'K. Schweda'
- 'J. Seger'
- 'D. Seliverstov'
- 'P. Seyboth'
- 'E. Shahaliev'
- 'M. Shao'
- 'M. Sharma'
- 'K.E. Shestermanov'
- 'S.S. Shimanskii'
- 'R.N. Singaraju'
- 'F. Simon'
- 'G. Skoro'
- 'N. Smirnov'
- 'R. Snellings'
- 'G. Sood'
- 'P. Sorensen'
- 'J. Sowinski'
- 'H.M. Spinka'
- 'B. Srivastava'
- 'S. Stanislaus'
- 'R. Stock'
- 'A. Stolpovsky'
- 'M. Strikhanov'
- 'B. Stringfellow'
- 'C. Struck'
- 'A.A.P. Suaide'
- 'E. Sugarbaker'
- 'C. Suire'
- 'M. Šumbera'
- 'B. Surrow'
- 'T.J.M. Symons'
- 'A. Szanto de Toledo'
- 'P. Szarwas'
- 'A. Tai'
- 'J. Takahashi'
- 'A.H. Tang'
- 'D. Thein'
- 'J.H. Thomas'
- 'V. Tikhomirov'
- 'M. Tokarev'
- 'M.B. Tonjes'
- 'T.A. Trainor'
- 'S. Trentalange'
- 'R.E. Tribble'
- 'M.D. Trivedi'
- 'V. Trofimov'
- 'O. Tsai'
- 'T. Ullrich'
- 'D.G. Underwood'
- 'G. Van Buren'
- 'A.M. VanderMolen'
- 'A.N. Vasiliev'
- 'M. Vasiliev'
- 'S.E. Vigdor'
- 'Y.P. Viyogi'
- 'S.A. Voloshin'
- 'W. Waggoner'
- 'F. Wang'
- 'G. Wang'
- 'X.L. Wang'
- 'Z.M. Wang'
- 'H. Ward'
- 'J.W. Watson'
- 'R. Wells'
- 'G.D. Westfall'
- 'C. Whitten Jr. '
- 'H. Wieman'
- 'R. Willson'
- 'S.W. Wissink'
- 'R. Witt'
- 'J. Wood'
- 'J. Wu'
- 'N. Xu'
- 'Z. Xu'
- 'Z.Z. Xu'
- 'A.E. Yakutin'
- 'E. Yamamoto'
- 'J. Yang'
- 'P. Yepes'
- 'V.I. Yurevich'
- 'Y.V. Zanevski'
- 'I. Zborovský'
- 'H. Zhang'
- 'H.Y. Zhang'
- 'W.M. Zhang'
- 'Z.P. Zhang'
- 'P.A. Żo[ł]{}nierczuk'
- 'R. Zoulkarneev'
- 'J. Zoulkarneeva'
- 'A.N. Zubarev'
title: 'Transverse momentum and collision energy dependence of high $p_T$ hadron suppression in Au+Au collisions at ultrarelativistic energies'
---
High energy partons propagating through matter are predicted to lose energy via induced gluon radiation, with the total energy loss strongly dependent on the color charge density of the medium [@EnergyLoss]. This process can provide a sensitive probe of the hot and dense matter generated early in ultrarelativistic nuclear collisions, when a plasma of deconfined quarks and gluons may form. The hard scattering and subsequent fragmentation of partons generates jets of correlated hadrons. In nuclear collisions, jets may be studied via observables such as high transverse momentum (high $p_T$) hadronic inclusive spectra [@WangGyulassyLeadinHadron] and correlations. Several striking high $p_T$ phenomena have been observed at the Relativistic Heavy Ion Collider (RHIC) [@STARHighptSpectra; @PHENIXhighpt; @PHOBOSNpartScaling; @STARHighptFlow; @STARHighptCorrelations], including strong suppression of inclusive hadron production [@STARHighptSpectra; @PHENIXhighpt; @PHOBOSNpartScaling]. These phenomena are consistent with large partonic energy loss in high energy density matter [@EnergyLoss; @Suppression; @VitevGyulassy; @SalgadoWiedemann; @HiranoNara], though other mechanisms have been proposed, including gluon saturation in the initial nuclear wavefunction [@KharzeevLevinMcLerran], attenuation of jet formation in cold nuclear matter [@TomasikPartonFormationTime], and scattering of fragmentation hadrons [@GallmeisterEtAl]. Additional measurements are required to discriminate among these pictures and to isolate effects due to final state partonic energy loss.
We report high statistics measurements by the STAR Collaboration of the inclusive charged hadron yield (defined as the summed yields of primary $\pi^\pm$, K$^\pm$, p and $\bar{\rm p}$) in Au+Au collisions and non-singly diffractive (NSD) p+p collisions at nucleon-nucleon center of mass energy $\sqrt{s_{NN}}=200$ GeV. The Au+Au data extend considerably the $p_T$ range of earlier charged hadron suppression studies, and the p+p data are the first such measurement at this energy. Comparisons are made to several theoretical models. The high precision and broad kinematic coverage of the data significantly constrain the possible mechanisms of hadron suppression. In addition, the energy dependence of the yields may be sensitive to gluon shadowing at low Bjorken $x$ in heavy nuclei.
We compare the data to two calculations based on hard parton scattering evaluated via perturbative QCD (pQCD-I [@XNWangPrivateComm] and pQCD-II [@VitevGyulassy]) and to a calculation extending the saturation model to high momentum transfer [@KharzeevLevinMcLerran]. Both pQCD models for Au+Au collisions incorporate nuclear shadowing, the Cronin effect [@CroninEffect], and partonic energy loss, but with different formulations. pQCD-I results excluding one or more nuclear effects are also shown. Neither pQCD calculation includes non-perturbative effects that generate particle species-dependent differences for $p_T\lesssim5$ GeV/c [@VitevGyulassyBaryonEnhancement; @XNWangPrivateComm].
Charged particle trajectories were measured in the Time Projection Chamber (TPC) [@TPC]. The magnetic field was 0.5 T, resulting in a factor of three improvement in $p_T$ resolution at high $p_T$ relative to [@STARHighptSpectra; @STARHighptFlow]. After event selection cuts, the Au+Au dataset comprised 1.7 million minimum bias events ($97\pm3\%$ of the geometric cross section $\sigma_{\mathrm{geom}}$) and 1.5 million central events (10% of $\sigma_{\mathrm{geom}}$). Centrality selection and analysis of spectra follow Ref. [@STARHighptSpectra]. Background at high $p_T$ is dominated by weak decay products, with correction factors calculated using STAR measurements of $\Lambda(+\bar\Lambda)$ and $K^0_S$ for $p_T<6$ GeV/c [@STARLambdaK0sToBePublished] and assuming constant yield ratios $\Lambda(+\bar\Lambda)/(\mathrm{p}+\bar{\mathrm{p}})$ and $K^0_S/(\pi^++\pi^-)$ for $p_T>6$ GeV/c. The $\Lambda(+\bar\Lambda)$ yield was scaled by a factor 1.4 to account for $\Sigma$ decays. Table \[TableCorr\] summarizes the correction factors at high $p_T$.
                 Tracking          Background        $p_T$ resolution
-------------- ----------------- ----------------- -------------------------
p+p 1.18 $\pm$ 0.07 0.90 $\pm$ 0.08 0.89 $^{+0.05}_{-0.05}$
Au+Au 60-80% 1.11 $\pm$ 0.06 0.95 $\pm$ 0.05 0.97 $^{+0.03}_{-0.05}$
Au+Au 0-5% 1.25 $\pm$ 0.06 0.94 $\pm$ 0.06 0.95 $^{+0.05}_{-0.05}$
: Multiplicative correction factors applied to the measured yields at $p_T=10$ GeV/c for p+p and Au+Au data. Factors vary by approximately 5% within $4<p_T<12$ GeV/c and have similar uncertainties. \[TableCorr\]
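As a minimal illustration (not STAR analysis code), the factors in Table \[TableCorr\] combine multiplicatively; the numbers below are the central values of the 0-5% Au+Au row, with uncertainties omitted:

```python
import numpy as np

# Multiplicative corrections at p_T = 10 GeV/c for 0-5% central Au+Au
# (central values from Table [TableCorr]; uncertainties omitted in this sketch).
corrections = {"tracking": 1.25, "background": 0.94, "resolution": 0.95}

def corrected_yield(raw_yield, factors):
    """Apply all multiplicative correction factors to a measured yield."""
    return raw_yield * np.prod(list(factors.values()))

total = np.prod(list(corrections.values()))
print(round(total, 5))  # net correction ~ 1.11625
```

The uncertainties of the individual factors would be propagated in quadrature into the systematic error of the spectrum.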
After event selection cuts, the p+p dataset comprised 5 million mainly NSD events, triggered on the coincidence of two Beam-Beam counters (BBCs). The BBCs are annular scintillator detectors situated $\pm$3.5 m from the interaction region, covering pseudorapidity 3.3$<|\eta|<$5.0. A van der Meer scan [@VernierScan] measured the BBC trigger cross section to be 26.1$\pm$0.2(stat)$\pm$1.8(sys) mb. The BBC trigger was simulated using PYTHIA [@PYTHIA] and HERWIG [@HERWIG] events passed through a GEANT detector model. The PYTHIA trigger cross section is 27 mb, consistent with measurement, of which 0.7 mb results from singly diffractive events. The PYTHIA and HERWIG simulations show that the trigger accepts 87$\pm$8% of all NSD events containing a TPC track, with negligible track $p_T$-dependence. Non-interaction backgrounds contributed 3$\pm$2% of the trigger rate. The high p+p interaction rate generated significant pileup in the TPC. Valid tracks matched an in-time hit in the Central Trigger Barrel (CTB [@TPC]) surrounding the TPC and projected to a distance of closest approach $DCA<1$ cm to the average beam trajectory. To avoid event selection multiplicity bias, an approximate event vertex position along the beam ($z_{\mathrm{vert}}$) was calculated by averaging over all valid tracks. Accepted events were required to have $|z_{\mathrm{vert}}|<75$ cm, corresponding to 69$\pm$4% of all events. The track fit did not include the event vertex. The CTB track-matching efficiency is 94$\pm$2% and combinatorial background is 2$\pm$2%. Other significant p+p tracking backgrounds result from weak decays and antinucleon annihilation in detector material, with corrections calculated using HIJING [@Hijing] and preliminary STAR measurements. Correction factors at high $p_T$ are given in Table \[TableCorr\]. For p+p collisions relative to peripheral Au+Au, exclusion of the event vertex from the fit results in poorer $p_T$ resolution, while the CTB matching requirement results in lower tracking efficiency.
The p+p inclusive spectrum was also analysed for $p_T<3.5$ GeV/c by an independent method in which a primary vertex is found and incorporated into the track fit, with consistent results.
The p+p NSD differential cross section is the product of the measured per-event yield and the BBC NSD trigger cross section, and has a normalization uncertainty of $\pm$14%. The charged hadron invariant cross section has been measured in $\bar{\mathrm{p}}$+p collisions at $\sqrt{s}=200$ GeV [@UA1]. The p+p cross section reported here is smaller by a factor of $0.79\pm0.18$, approximately independent of $p_T$, where the uncertainty includes the two spectrum normalizations and the correction for different acceptances [@STARHighptSpectra]. The difference is due in large part to the differing NSD cross section, which is $35\pm1$ mb in [@UA1] but is measured here to be $30.0\pm3.5$ mb.
Figure \[FigOne\] shows inclusive invariant $p_T$ distributions of charged hadrons, $(h^++h^-)/2$, within $|\eta|<0.5$ for Au+Au and p+p collisions at $\sqrt{s_{NN}}=200$ GeV. The Au+Au spectra are shown for percentiles of $\sigma_{\mathrm{geom}}$, with 0-5% indicating the most central (head-on) collisions. Error bars are the quadrature sum of the statistical and systematic uncertainties and are dominated by the latter except at the highest $p_T$.
Figure \[FigTwo\] shows $R_{200/130}$, the ratio of charged hadron yields at $\sqrt{s_{NN}}=200$ and 130 GeV [@STARHighptSpectra], for centrality selected Au+Au collisions. Error bars are the quadrature sum of the statistical and systematic uncertainties, dominated for $p_T>4$ GeV/c by statistics at 130 GeV. In the absence of nuclear effects, the hard process inclusive yield in nuclear collisions is expected to scale as $\langle N_{\mathrm{bin}}\rangle$, the average number of binary collisions for the respective centrality selection. $R_{200/130}$ has not been scaled by the ratio $\langle N_{\mathrm{bin}}\rangle(200)/\langle N_{\mathrm{bin}}\rangle(130)$, which Glauber model calculations [@STARHighptSpectra; @Glauber] give as $\sim1.02$ for all centralities. Figure \[FigTwo\] also shows the saturation model calculation and pQCD-I calculations for p+p and centrality-selected Au+Au collisions (shadowing-only and full). Both models approximately reproduce the $p_T$-dependence of the ratio for Au+Au for $p_T>2$ GeV/c, with pQCD-I slightly better for more peripheral collisions. The various pQCD-I calculations shown illustrate that in this model the reduction in $R_{200/130}$ for Au+Au relative to p+p is predominantly due to nuclear shadowing [@XNWangPrivateComm]. This sensitivity arises because the shadowing is $x$-dependent and at fixed $p_T$, different $\sqrt{s}$ corresponds to different $x_T=2p_T/\sqrt{s}$. The quantitative agreement of pQCD-I with the data improves for more peripheral collisions, suggesting that the prescription for the centrality dependence of shadowing in [@XNWangPrivateComm] may not be optimal. Alternatively, introduction of $\sqrt{s}$-dependent energy loss to the model in [@XNWangPrivateComm] may also improve the agreement.
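The $x_T$ sensitivity noted above is easy to make concrete; a small sketch with illustrative values:

```python
def x_t(p_t, sqrt_s):
    """Parton momentum fraction probed at midrapidity, x_T = 2 p_T / sqrt(s)."""
    return 2.0 * p_t / sqrt_s

# At fixed p_T = 4 GeV/c, the two RHIC energies probe different x_T,
# so x-dependent shadowing alone already modifies the 200/130 yield ratio.
print(x_t(4.0, 200.0))  # 0.04
print(x_t(4.0, 130.0))  # ~0.062
```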
Nuclear effects on the inclusive spectra are measured by comparison to a nucleon-nucleon (NN) reference via the nuclear modification factor: $$\label{RAA}
R_{AA}=\frac{d^2N^{AA}/d{p_T}d\eta}{T_{AA}\,{d}^2\sigma^{NN}/d{p_T}d{\eta}}\ ,$$ where $T_{AA}=\langle N_{\mathrm{bin}}\rangle/\sigma^{NN}_{\mathrm{inel}}$ from a Glauber calculation accounts for the nuclear collision geometry [@STARHighptSpectra; @Glauber] and we adopt $\sigma^{NN}_{\mathrm{inel}}=42$ mb. ${d}^2\sigma^{NN}/d{p_T}d{\eta}$ refers to inelastic collisions, whereas we have measured the p+p NSD differential cross section. However, singly diffractive interactions contribute predominantly at low $p_T$ [@ZeusDiffraction]. A multiplicative correction based on PYTHIA, applied to ${d}^2\sigma^{NN}/d{p_T}d{\eta}$ in Eq. \[RAA\], is 1.05 at $p_T=0.4$ GeV/c and unity above 1.2 GeV/c.
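As a sketch (illustrative numbers, not STAR analysis code), the nuclear modification factor defined above is a simple ratio once the overlap integral $T_{AA}$ is known:

```python
import numpy as np

def nuclear_modification_factor(d2n_aa, d2sigma_nn, n_bin, sigma_inel=42.0):
    """R_AA = (d^2 N^AA / dpT deta) / (T_AA * d^2 sigma^NN / dpT deta),
    with T_AA = <N_bin> / sigma_inel from a Glauber calculation.

    d2n_aa     : per-event Au+Au yield per (pT, eta) bin
    d2sigma_nn : NN reference cross section per bin, in mb
    n_bin      : mean number of binary NN collisions for the centrality class
    sigma_inel : inelastic NN cross section in mb (42 mb adopted in the text)
    """
    t_aa = n_bin / sigma_inel  # mb^-1
    return np.asarray(d2n_aa) / (t_aa * np.asarray(d2sigma_nn))

# Illustrative values only: <N_bin> ~ 1000 for 0-5% central Au+Au
print(nuclear_modification_factor([1.0e-5], [2.0e-6], n_bin=1000.0))
```

$R_{AA}=1$ then corresponds to binary-collision scaling of the hard yield; values below unity signal suppression.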
Figure \[FigThree\] shows $R_{AA}$ at $\sqrt{s_{NN}}=200$ GeV for centrality-selected Au+Au spectra relative to the measured p+p spectrum. Horizontal dashed lines show Glauber model expectations [@STARHighptSpectra; @Glauber] for scaling of the yield with $\langle N_{\mathrm{bin}}\rangle$ or mean number of participants $\langle N_{\mathrm{part}}\rangle$, with grey bands showing their respective uncertainties summed in quadrature with the p+p normalization uncertainty. The error bars represent the quadrature sum of the Au+Au and remaining p+p spectrum uncertainties. For $p_T<6$ GeV/c, $R_{AA}$ is similar to that observed at $\sqrt{s_{NN}}=130$ GeV [@STARHighptSpectra]. Hadron production for $6<p_T<10$ GeV/c is suppressed by a factor of 4-5 in central Au+Au relative to p+p collisions.
Figure \[FigThree\] also shows the full pQCD-I calculation and the influence of each nuclear effect. The energy loss for central collisions is a fit parameter, with the $p_T$ and centrality dependence of the suppression constrained by theory. The Cronin enhancement and shadowing alone cannot account for the suppression, which is reproduced only if partonic energy loss in dense matter is included. The full calculation generally agrees with data for $p_T>5$ GeV/c if the initial parton density in central collisions is adjusted to be $\sim$15 times that of cold nuclear matter [@dEdxColdMatter]. pQCD-II exhibits similar agreement for central collisions. In Ref. [@VitevGyulassy], the pQCD-II calculation was used to predict a $p_T$-independent suppression factor in this range from the interplay between shadowing, the Cronin effect, and partonic energy loss.
Figure \[FigFour\] shows $R_{CP}$, the $\langle N_{\mathrm{bin}}\rangle$-normalized ratio of central and peripheral Au+Au spectra. $R_{CP}$ extends to higher $p_T$ than $R_{AA}$, with smaller point-to-point uncertainties. The error bars show the quadrature sum of statistical and uncorrelated systematic uncertainties. Statistical errors dominate the uncertainties for $p_T>8$ GeV/c. Lines and grey bands are as in Fig. \[FigThree\]. $R_{CP}$ for $p_T<6$ GeV/c is similar to measurements at $\sqrt{s_{NN}}=130$ GeV [@STARHighptSpectra], but is now seen to be approximately constant for $5<p_T<12$ GeV/c. It is consistent with $\langle N_{\mathrm{part}}\rangle$ scaling at $p_T\sim4$ GeV/c as reported in [@PHOBOSNpartScaling], but is below $\langle N_{\mathrm{part}}\rangle$ scaling at higher $p_T$.
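The central-to-peripheral ratio is again a one-line construction once the per-event yields and the Glauber $\langle N_{\mathrm{bin}}\rangle$ values are in hand; a sketch with made-up numbers:

```python
import numpy as np

def r_cp(yield_central, nbin_central, yield_peripheral, nbin_peripheral):
    """N_bin-normalized ratio of central to peripheral per-event yields."""
    central = np.asarray(yield_central) / nbin_central
    peripheral = np.asarray(yield_peripheral) / nbin_peripheral
    return central / peripheral

# Illustrative: with ~1000 (central) vs ~20 (peripheral) binary collisions,
# binary scaling would give R_CP = 1; values well below 1 signal suppression.
print(r_cp([5.0e-6], 1000.0, [5.0e-7], 20.0))  # [0.2]
```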
The $p_T$-dependence of the suppression in Figure \[FigFour\] is well reproduced for $p_T>5$ GeV/c by the full pQCD-I and pQCD-II calculations in both panels and the saturation calculation in the upper but not the lower panel. The magnitude of suppression is fitted to the central collision data in the pQCD models but is predicted in the saturation calculation. Attenuation of initial jet formation due to multiple nucleon interactions [@TomasikPartonFormationTime] generates an increase in partonic $R_{CP}$ for central collisions of a factor $\sim2$ in $5<E_T<12$ GeV. A similar $p_T$-dependence would be expected for high-$p_T$ hadrons, in contrast to observations. Suppression in the final state due to in-medium scattering of fragmentation hadrons should also result in a rising $R_{CP}$ with increasing $p_T$ due to the dependence of hadron formation time on the total jet energy [@GallmeisterEtAl], though detailed comparison of this model to data requires further theoretical development.
In summary, STAR has measured inclusive charged hadron yields from Au+Au and p+p collisions at $\sqrt{s_{NN}}=200$ GeV, at higher precision and over a much broader $p_T$ range than previous measurements. Large, approximately constant hadron suppression is observed in central nuclear collisions at high $p_T$. The systematic behaviour of the suppression at high $p_T$ is well described both by pQCD calculations incorporating final-state partonic energy loss in dense matter and by a model of initial-state gluon saturation, though the latter model provides a poorer description of peripheral collision data. The isolation of initial-state effects on high-$p_T$ hadron production may be achieved through the study of d+Au collisions at RHIC, allowing a quantitative measurement of final-state effects from the data presented here.
We thank C. Greiner, M. Gyulassy, D. Kharzeev, C. Salgado, B. Tomasik, I. Vitev and X.N. Wang for valuable discussions. We thank the RHIC Operations Group and RCF at BNL, and the NERSC Center at LBNL for their support. This work was supported in part by the HENP Divisions of the Office of Science of the U.S. DOE; the U.S. NSF; the BMBF of Germany; IN2P3, RA, RPL, and EMN of France; EPSRC of the United Kingdom; FAPESP of Brazil; the Russian Ministry of Science and Technology; the Ministry of Education and the NNSFC of China; SFOM of the Czech Republic, DAE, DST, and CSIR of the Government of India; the Swiss NSF.
[99]{}
R. Baier, D. Schiff and B.G. Zakharov, Ann. Rev. Nucl. Part. Sci. [**50**]{}, 37 (2000); M. Gyulassy, I. Vitev, X.N. Wang, B. Zhang, nucl-th/0302077.
X.N. Wang and M. Gyulassy, Phys. Rev. Lett. [**68**]{}, 1480 (1992).
C. Adler *et al.*, Phys. Rev. Lett. [**89**]{}, 202301 (2002).
K. Adcox *et al.*, Phys. Rev. Lett. [**88**]{}, 022301 (2002); Phys. Lett. [**B561**]{}, 82 (2003); S.S. Adler *et al.*, Phys. Rev. Lett. [**91**]{}, 072301 (2003).
B.B. Back *et al.*, nucl-ex/0302015.
C. Adler *et al.*, Phys. Rev. Lett. [**90**]{}, 032301 (2003).
C. Adler *et al.*, Phys. Rev. Lett. [**90**]{}, 082302 (2003).
M. Gyulassy and X.N. Wang, Nucl. Phys. [**B420**]{}, 583 (1994); X.N. Wang, Phys. Rev. [**C58**]{}, 2321 (1998).
I. Vitev and M. Gyulassy, Phys. Rev. Lett. [**89**]{}, 252301 (2002).
C.A. Salgado and U.A. Wiedemann, Phys. Rev. Lett. [**89**]{}, 092303 (2002).
T. Hirano and Y. Nara, nucl-th/0301042.
D. Kharzeev, E. Levin and L. McLerran, Phys. Lett. [**B561**]{}, 93 (2003); D. Kharzeev, private communication.
R. Lietava, J. Pisut, N. Pisutova, B. Tomasik, Eur. Phys. J. [**C28**]{}, 119 (2003).
K. Gallmeister, C. Greiner and Z. Xu, Phys. Rev. [**C67**]{}, 044905 (2003). Note that Eq. (2) of this reference implies that $\langle{L}/\lambda\rangle$ in Fig. 9 decreases substantially in $5<\pT<12$ GeV/c.
X.N. Wang, nucl-th/0305010; private communication. Calculations use model parameters $\mu_0=2.0$ GeV and $\epsilon_0=2.04$ GeV/fm.
D. Antreasyan *et al.*, Phys. Rev. [**D19**]{}, 764 (1979); P.B. Straub *et al.*, Phys. Rev. Lett. [**68**]{}, 452 (1992).
I. Vitev and M. Gyulassy, Phys. Rev. [ **C65**]{}, 041902 (2002); R.J. Fries, B. Muller, C. Nonaka and S.A. Bass, Phys. Rev. Lett. [**90**]{} 202303 (2003).
M. Anderson *et al.*, Nucl. Instr. Meth. [**A499**]{}, 659 (2003).
J. Adams *et al.*, nucl-ex/0306007.
A. Drees and Z. Xu, Proceedings of the Particle Accelerator Conference 2001, Chicago, Il, p. 3120.
T. Sjoestrand *et al.*, Comp. Phys. Comm. [**135**]{}, 238 (2001).
G. Corcella *et al.*, JHEP [**0101**]{}, 010 (2001).
X.N. Wang and M. Gyulassy, Phys. Rev. [**D44**]{}, 3501 (1991).
C. Albajar *et al.*, Nucl. Phys. [**B335**]{}, 261 (1990).
Correction for a recently found error in the Glauber calculation reported in [@STARHighptSpectra] yields $\sim$2-6% larger $\langle N_{\mathrm{bin}}\rangle$ values, with the largest increase for the most peripheral and central bins. Negligible change results for $\langle N_{\mathrm{part}}\rangle$. We thank K. Reygers for valuable discussions.
M. Derrick *et al.*, Z. Phys. [**C67**]{}, 227 (1995); R.E. Ansorge *et al.*, Z. Phys. [**C33**]{}, 175 (1986).
E. Wang and X.N. Wang, Phys. Rev. Lett. [**89**]{}, 162301 (2002); F. Arleo, Phys. Lett. [ **B532**]{}, 231 (2002).
---
abstract: 'Using a matrix product state algorithm with infinite boundary conditions, we compute high-resolution dynamic spin and quadrupolar structure factors to explore the low-energy excitations of isotropic bilinear-biquadratic spin-1 chains. Haldane mapped the spin-1 Heisenberg antiferromagnet to a continuum field theory, the non-linear sigma model (NL$\sigma$M). We find that the NL$\sigma$M fails to capture the influence of the biquadratic term and provides only an unsatisfactory description of the Haldane phase physics. But several features in the Haldane phase can be explained by non-interacting multi-magnon states. The physics at the Uimin-Lai-Sutherland (ULS) point is characterized by multi-soliton continua. Moving into the extended critical phase, we find that these excitation continua contract, which we explain using a field-theoretic description. New excitations emerge at higher energies and, in the vicinity of the purely biquadratic point, they show simple cosine dispersions. Using block fidelities, we identify them as elementary one-particle excitations and relate them to the integrable Temperley-Lieb chain.'
author:
- Moritz Binder
- Thomas Barthel
date: '[May 07, 2020]{}'
title: '[The low-energy physics of isotropic spin-1 chains in the critical and Haldane phases]{}'
---
Introduction
============
The most general model for a spin-1 chain with isotropic nearest-neighbor interactions is given by the bilinear-biquadratic Hamiltonian $$\label{eq:H_blbq}
{\hat{H}}_\theta = \sum_i\left[\cos\theta ({\hat{{\boldsymbol}{S}}}_i \cdot {\hat{{\boldsymbol}{S}}}_{i+1}) + \sin\theta ({\hat{{\boldsymbol}{S}}}_i \cdot {\hat{{\boldsymbol}{S}}}_{i+1})^2\right],$$ where the angle $\theta\in[-3\pi/4,5\pi/4)$ parametrizes the ratio of the two couplings. It describes quasi one-dimensional quantum magnets like CsNiCl$_3$ [@buyers_1986; @tun_1990; @zaliznyak_2001], Ni(C$_2$H$_8$N$_2$)$_2$NO$_2$ClO$_4$ (NENP) [@ma_1992; @regnault_1994], or LiVGe$_2$O$_6$ [@millet_1999; @lou_2000], and can be realized with cold atoms in optical lattices [@yip_2003; @imambekov_2003; @garcia_ripoll_2004]. Depending on $\theta$, the ground state can be in one of several interesting quantum phases. In addition to a ferromagnetic ($\pi/2 < \theta < 5\pi/4$) and a gapped dimerized phase ($-3\pi/4 < \theta < -\pi/4$) [@barber_1989; @kluemper_1989; @xian_1993; @laeuchli_2006], the model features the gapped Haldane phase ($-\pi/4 < \theta <\pi/4$) [@haldane_1983_pla; @haldane_1983_prl; @affleck_1989] characterized by symmetry-protected topological order [@gu_2009; @pollmann_2012], and an extended critical phase ($\pi/4 \leq \theta < \pi/2$) [@uimin_1970; @lai_1974; @sutherland_1975; @laeuchli_2006; @fath_1991; @fath_1993; @itoi_1997]. While the groundstate phase diagram has been studied extensively, much less is known about the low-energy dynamics.
We use a recently introduced algorithm [@binder_2018] based on the density matrix renormalization group (DMRG) [@white_1992; @white_1993; @schollwoeck_2005] and the time evolution of matrix product states (MPS) [@vidal_2004; @white_2004; @daley_2004] with infinite boundary conditions [@mcculloch_2008; @phien_2012] to compute dynamic structure factors $$\label{eq:dsf_def}
S(k, \omega) = \sum_x e^{-ikx} \int\!{\mathrm{d}}t\,e^{i\omega t} {\langle}\psi|{\hat{A}}_x(t) {\hat{B}}_0(0)|\psi{\rangle},$$ where ${\hat{X}}(t) := e^{i{\hat{H}}t}{\hat{X}}\, e^{-i{\hat{H}}t}$ and $|\psi{\rangle}$ is the ground state. Due to the SU(2) symmetry of the model, there are only two independent structure factors for one-site operators – the *spin* structure factor $S^{zz}(k, \omega)$ where ${\hat{A}}={\hat{B}}={\hat{S}}^z$, and the *quadrupolar* structure factor $S^{QQ}(k, \omega)$ where ${\hat{A}}={\hat{B}}={\hat{Q}}={\mathrm{diag}}(1/3, -2/3, 1/3)$. As the ground states of interest are singlets ($S_{{\mathrm{tot}}}=0$), selection rules imply that these structure factors probe excitations with total spin quantum numbers $S_{{\mathrm{tot}}}=1$ and $2$, respectively. They can also be measured in neutron-scattering or ARPES experiments. Starting from high-resolution dynamic structure factors computed with the MPS algorithm [@binder_2018], we study the relevant excitations of the model to explain the observed features. To this end, we compare the numerical results to Bethe ansatz and field-theoretical treatments. In this paper, we focus on the Haldane phase and the extended critical phase, which have the most interesting physics.
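A minimal sketch of how the structure factor definition above can be evaluated numerically once the real-space, real-time correlators $\langle\psi|{\hat{A}}_x(t){\hat{B}}_0|\psi\rangle$ are available on a uniform grid. Grid sizes and the use of plain FFTs are illustrative; the production computations of Ref. [@binder_2018] involve more careful treatment of the finite time window:

```python
import numpy as np

def dynamic_structure_factor(corr_xt, dx, dt):
    """Discretized S(k,w) = sum_x e^{-ikx} int dt e^{+iwt} C(x,t).

    corr_xt : complex array of shape (n_x, n_t), C(x,t) on a uniform grid
    Returns momenta k, frequencies w, and S(k,w).
    """
    n_x, n_t = corr_xt.shape
    s_kw = np.fft.fft(corr_xt, axis=0)            # sum_x e^{-ikx}
    s_kw = np.fft.ifft(s_kw, axis=1) * n_t * dt   # int dt e^{+iwt}
    k = 2 * np.pi * np.fft.fftfreq(n_x, d=dx)
    w = 2 * np.pi * np.fft.fftfreq(n_t, d=dt)
    return k, w, s_kw
```

A plane-wave correlator $C(x,t)=e^{i(k_0x-\omega_0t)}$ then produces a single peak at $(k_0,\omega_0)$, which is a convenient sanity check of the sign conventions.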
Haldane phase
=============
A natural starting point for the discussion is the Heisenberg antiferromagnet with $\theta=0$, where the biquadratic term vanishes. For this case, Haldane mapped the model to a continuum field theory, the $O(3)$ non-linear sigma model (NL$\sigma$M) [@haldane_1983_pla; @haldane_1983_prl], by restricting to the most relevant low-energy modes at momenta $k=0$ and $\pi$. The mapping becomes exact in the limit of large spin $S\to\infty$. The NL$\sigma$M is integrable and predicts an energy gap to the lowest excited states, which is known as the Haldane gap. This is at the heart of the famous Haldane conjecture, according to which the physics of integer and half-integer antiferromagnetic spin chains is fundamentally different.
Based on the NL$\sigma$M description, one expects that the lowest excited states are given by a triplet of single-magnon states at momentum $k=\pi$. The single-magnon dispersion near $k=\pi$ is predicted to be of the form $$\label{eq:nlsm_dispersion}
\varepsilon_\text{NL$\sigma$M}(k)=\sqrt{\Delta^2 + v^2 (k-\pi)^2}$$ with the energy gap $\Delta$, and the spin-wave velocity $v$. Correspondingly, the onset of a two-magnon continuum at $(k,\omega)=(0,2\Delta)$ and of a three-magnon continuum at $(k,\omega)=(\pi,3\Delta)$ are predicted, and the contributions of these continua to the dynamic structure factors have been computed for the NL$\sigma$M [@affleck_1992; @horton_1999; @essler_2000; @essler_2005].
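The predicted dispersion and continuum onsets are straightforward to evaluate; a sketch using approximate literature values for the Heisenberg point ($\Delta\approx0.41$, $v\approx2.49$ in units of the coupling):

```python
import numpy as np

def eps_nlsm(k, gap, v):
    """Relativistic single-magnon dispersion near k = pi, Eq. [eq:nlsm_dispersion]."""
    return np.sqrt(gap**2 + v**2 * (k - np.pi)**2)

gap, v = 0.41, 2.49  # approximate values for theta = 0
print(eps_nlsm(np.pi, gap, v))  # = gap: dispersion minimum at k = pi
print(2 * gap, 3 * gap)         # two-/three-magnon continuum onsets at k=0 and k=pi
```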
![Comparison of the NL$\sigma$M and fMPS predictions with numerical results. Left: $\theta$-dependence of the excitation gap $\Delta$ (top) and the squared spin-wave velocity $v^2$ (bottom). The NL$\sigma$M values have been scaled to match numerics at $\theta=0$. Right: Dynamic structure factor $S^{zz}(k, \omega)$ at momentum $k=\pi$ versus the NL$\sigma$M three-magnon continuum. For the comparison, the latter has been scaled by matching the single-magnon weights and multiplying a factor four.[]{data-label="fig:nlsm"}](fig001){width="\columnwidth"}
To study the applicability of the NL$\sigma$M for the physics in the Haldane phase, we include the biquadratic term from the Hamiltonian in the mapping to the field theory. Details are provided in Appendix \[sec:mapping\_to\_NLSM\]. In the end, this boils down to evaluating the matrix element of the biquadratic interaction with respect to spin-coherent states. Using the fact that higher-order terms vanish in the continuum limit, we find that the biquadratic term does not change the form of the resulting action. Its effect is a renormalization of the coupling constant $J$ such that $J(\theta) = J(0)(\cos\theta - \sin\theta)$. As long as the biquadratic term is sufficiently small, the identification of the relevant degrees of freedom and the further derivations remain valid. Thus, one would expect the physics to be unchanged for a region around $\theta \approx 0$ with the renormalization leading to a $\theta$ dependence of the gap and the spin-wave velocity with $\Delta,v\propto\cos\theta - \sin\theta$.
Surprisingly, these predictions strongly disagree with our numerical data as shown in Fig. \[fig:nlsm\]. While the NL$\sigma$M predicts a *decreasing* gap when we increase $\theta$, the actual gap *increases*. For the spin-wave velocity, the trend predicted by the NL$\sigma$M seems correct at first sight. However, after crossing the AKLT point $\theta=\arctan(1/3)\approx 0.1024\pi$ [@aklt_1987; @aklt_1988], the minimum of the single-magnon dispersion shifts away from $k=\pi$, resulting in a change in curvature of the dispersion near this antiferromagnetic wavevector (see Fig. \[fig:dsf\_Haldane\]). This is irreconcilable with the NL$\sigma$M prediction and corresponds to a negative $v^2$ in Eq. \[eq:nlsm\_dispersion\]. In the right panel of Fig. \[fig:nlsm\], we compare the dynamic structure factor $S^{zz}(k, \omega)$ with the NL$\sigma$M result for the three-magnon continuum at momentum $k=\pi$ [@horton_1999; @essler_2000] for several values of $\theta$. While they are qualitatively similar, the NL$\sigma$M curves have significantly stronger high-energy tails [@white_2008], and the discrepancies become more pronounced when increasing $\theta$. Both shape and total spectral weight do not agree. Hence, overall the NL$\sigma$M predictions for the relevant quantities in the Haldane phase are unsatisfactory.
![The dynamic spin and quadrupolar structure factors in the Haldane phase. Dashed lines indicate thresholds for two- and three-magnon continua in the non-interacting approximation.[]{data-label="fig:dsf_Haldane"}](fig002){width="\columnwidth"}
While the NL$\sigma$M description fails quantitatively, it correctly predicts the presence of elementary magnon excitations with dispersion minimum at $k=\pi$ for an extended region in the Haldane phase. The stable single-magnon line and corresponding multi-magnon continua are clearly observed in the dynamic structure factors of Fig. \[fig:dsf\_Haldane\]. See Appendix \[sec:technique\] for details on the numerical computations. The exact shape of the excitations strongly depends on $\theta$. Many of the features can be explained by using a non-interacting approximation, where multi-magnon states are obtained by adding lattice momenta and energies $\varepsilon(k_i)$ of single-magnon states. This gives rise to boundaries and thresholds corresponding to jumps in the multi-magnon density of states as indicated in Fig. \[fig:dsf\_Haldane\]. Jumps occur when group velocities ${\mathrm{d}}\varepsilon(k_i)/{\mathrm{d}} k_i$ agree for all magnons. Several of the threshold lines do not extend over the entire Brillouin zone, because the single-magnon states are only well defined down to a momentum $k_c(\theta)$ where $\varepsilon(k)$ enters a multi-magnon continuum, e.g., $k_c(0)\approx 0.23\pi$.
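The non-interacting thresholds described above reduce to simple optimizations over relative momenta; a sketch for the two-magnon lower edge, using a toy single-magnon dispersion (in the actual analysis $\varepsilon(k)$ is extracted from the numerics):

```python
import numpy as np

def two_magnon_lower_edge(eps, k_totals, n_grid=2001):
    """Lower boundary of the non-interacting two-magnon continuum:
    minimize eps(k1) + eps(k - k1) over k1 (momenta mod 2*pi).
    At the minimum the two group velocities d eps / d k coincide."""
    k1 = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    return np.array([np.min(eps(k1) + eps(k - k1))
                     for k in np.atleast_1d(k_totals)])

# Toy periodic dispersion with gap Delta = 0.41 at k = pi (illustrative only):
eps_toy = lambda k: 0.41 + 1.0 + np.cos(k)
print(two_magnon_lower_edge(eps_toy, [0.0]))  # ~ 2*Delta: both magnons at k = pi
```

Jumps in the density of states appear where the minimizing configuration changes discontinuously as the total momentum is varied.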
![image](fig003){width="\textwidth"}
For small $\theta$, almost all features in $S$ correspond to such thresholds. See, for example, the lower boundaries of the two- and three-magnon continua and, for $\theta=0$, the structures at $(k,\omega)\approx (0.6\pi,3)$ and $(k,\omega)\approx (0.1\pi,5)$, which result from an interplay of jumps in the density of two- and three-magnon states. With increasing $\theta\gtrsim 0.1\pi$, the magnons interact more strongly and the non-interacting approximation cannot explain all structures anymore. At the AKLT point for example, a sharp feature in the quadrupolar structure factor $S^{QQ}$ corresponds to an exactly known excited state with $S_{{\mathrm{tot}}}=2$, $k=\pi$, and energy $\omega=12/\sqrt{10}\approx 3.795$ [@moudgalya_2018].
Tsvelik suggested a free Majorana field theory for the vicinity of the integrable Babujian-Takhtajan point $\theta=-\pi/4$ [@takhtajan_1982; @babujian_1982; @babujian_1983]. Surprisingly, we find that structure factors of that theory [@tsvelik_1990; @essler_2000] deviate even more strongly than the NL$\sigma$M results, also near $\theta=-\pi/4$. This should be due to a neglect of current-current interactions. Very recently, another alternative field-theoretic approach to the Haldane phase has been suggested [@kim_2020]. Instead of spin-coherent states, it uses an overcomplete basis of “fluctuating” MPS (fMPS) with bond dimension $D=2$, containing the AKLT groundstate [@aklt_1987; @aklt_1988]. Hence, the resulting Gaussian field theory works best around the AKLT point and reproduces the corresponding single-mode approximation for $\varepsilon(k)$ [@arovas_1988]. Fig. \[fig:nlsm\] shows gaps and spin-wave velocities for the fMPS approach. It matches quite well around the AKLT point, but predicts the gap to close too early, at $\theta\approx 0.18\pi$ instead of at the Uimin-Lai-Sutherland (ULS) point $\theta=\pi/4$, and at $\theta\approx 0.04\pi$ instead of at the transition point $\theta=-\pi/4$ to the dimerized phase.
Uimin-Lai-Sutherland point
==========================
The transition from the Haldane phase to the critical phase occurs at the SU(3)-symmetric ULS point $\theta=\pi/4$. Here, the model can be solved using the nested Bethe ansatz [@uimin_1970; @lai_1974; @sutherland_1975]. The low-energy excitations are two types of soliton-like particles with $\varepsilon_1(k_1)=\left(\frac{2}{3}\right)^{3/2}\pi\,[\cos(\frac{\pi}{3}-k_1)-\cos\frac{\pi}{3}]$ for $k_1\in[0,\frac{2\pi}{3}]$ and $\varepsilon_2(k_2)=\left(\frac{2}{3}\right)^{3/2}\pi\,[\cos\frac{\pi}{3}-\cos(\frac{\pi}{3}+k_2)]$ for $k_2\in[0,\frac{4\pi}{3}]$, respectively. They are always created in pairs [@sutherland_1975; @johannesson_1986]. Note that a computation of dynamical correlation functions based on the Bethe ansatz has not yet been achieved for this model. While recent work [@belliard_2012; @wheeler_2013] has addressed the computation of scalar products of Bethe vectors, a single determinant representation has not yet been found. Hence, in the left panel of Fig. \[fig:dsf\_critical\], we show the numerical result for the dynamic structure factor $S^{zz}(k, \omega)$ and the boundaries of the relevant multi-soliton continua, which agree precisely with the main features. The line $\omega_1(k)$ indicates the lowest energy of a two-soliton excitation for a given total momentum $k$, while the second threshold $\omega_2(k)$ marks the energy above which the two-soliton density of states doubles. The upper boundary of the two-soliton continuum is given by $\omega_u(k)$. In addition, a multi-particle continuum with less spectral weight can be found in the momentum range $k\in[\frac{2\pi}{3},\pi]$. Its lower bound $\omega_4(k)=\varepsilon_1(k-\frac{2\pi}{3})$ corresponds to four-soliton states.
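The continuum boundaries $\omega_1(k)$ and $\omega_u(k)$ follow from minimizing and maximizing the pair energy of the two soliton branches at fixed total momentum; a grid-search sketch using the exact dispersions quoted above:

```python
import numpy as np

A = (2.0 / 3.0) ** 1.5 * np.pi  # overall scale of the ULS dispersions

def eps1(k1):  # branch 1, defined for k1 in [0, 2*pi/3]
    return A * (np.cos(np.pi / 3 - k1) - 0.5)

def eps2(k2):  # branch 2, defined for k2 in [0, 4*pi/3]
    return A * (0.5 - np.cos(np.pi / 3 + k2))

def two_soliton_bounds(k_total, n=4000):
    """Lower/upper edge of the two-soliton continuum at total momentum k."""
    k1 = np.linspace(0.0, 2 * np.pi / 3, n)
    k2 = np.mod(k_total - k1, 2 * np.pi)
    ok = k2 <= 4 * np.pi / 3 + 1e-9   # restrict to the allowed k2 range
    e = eps1(k1[ok]) + eps2(k2[ok])
    return e.min(), e.max()

print(two_soliton_bounds(0.0))  # gapless at k = 0, as required
```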
The critical phase
==================
As we increase $\theta$ starting from $\pi/4$, the soliton continua remain visible in the dynamic structure factor, but contract to lower energies as shown in Fig. \[fig:dsf\_critical\]. In addition, further excitations emerge at higher energies. The contraction of the continua can be explained by a field-theoretical description that is valid in the vicinity of the ULS point. In this region, the Hamiltonian can be mapped to a level-one SU(3) Wess-Zumino-Witten (WZW) model (action ${\mathcal{A}}_{{\mathrm{SU}}(3)_1}$), a conformal field theory with central charge $c=2$ and certain marginal perturbations [@itoi_1997]. As a function of $\theta$, the overall action can be written as $$\label{eq:action_marginal_terms}
{\mathcal{A}}_\theta = \cos\theta \big[{\mathcal{A}}_{{\mathrm{SU}}(3)_1} + g_1(\theta) {\mathcal{A}}_1 + g_2(\theta) {\mathcal{A}}_2 \big].$$ The first marginal term ${\mathcal{A}}_1$ describes an SU(3)-symmetric current interaction, which arises from constraining the dimension of the local Hilbert space and from a Gaussian integration over fluctuations of a mean-field variable [@itoi_1997]. The second marginal term ${\mathcal{A}}_2$ corresponds to the SU(3)-symmetry breaking Hamiltonian term ${\hat{H}}_\theta-{\hat{H}}_{\pi/4}$ with coupling $g_2\propto \tan\theta - 1$, where $g_2=0$ corresponds to the SU(3)-symmetric ULS point.
![Top: RG flow of the marginal terms in the field-theoretical description in the vicinity of the ULS point [@itoi_1997] (left) and its relation to the phase diagram of ${\hat{H}}_\theta$ (right). Bottom: Comparison of the $k=0$ group velocity extracted from the MPS simulations to the field-theoretical prediction. $v_0 = \sqrt{2}\pi/3$ is the exact group velocity at the ULS point.[]{data-label="fig:contraction"}](fig004){width="0.95\columnwidth"}
Fig. \[fig:contraction\] shows trajectories of the renormalization group (RG) flow for the couplings $g_1$ and $g_2$ of the marginal perturbations [@itoi_1997]. Comparison with the exact Bethe ansatz solution at the ULS point shows that the physically relevant trajectories start with $g_1 \leq 0$. In this regime, the term ${\mathcal{A}}_1$ is always marginally irrelevant and leads only to logarithmic finite-size corrections. Depending on the initial value of $g_2$, we have to distinguish two types of trajectories. For $g_2 < 0$ ($\theta < \pi/4$), the term ${\mathcal{A}}_2$ becomes marginally relevant, leading to a Berezinskii-Kosterlitz-Thouless (BKT) transition. Here, the model is asymptotically free with a slow exponential opening of the Haldane gap. For $g_2 \geq 0$ ($\theta \geq \pi/4$), the term ${\mathcal{A}}_2$ is marginally irrelevant and the RG flow approaches the only fixed point $g_1^* = g_2^* = 0$. Hence, the low-energy physics of this regime is described by the same field theory as the ULS point, corresponding to the presence of the extended critical phase. Furthermore, the prefactor $\cos\theta$ in the action explains the contraction of the multi-soliton continua for increasing $\theta$ as observed in Fig. \[fig:dsf\_critical\]. Fig. \[fig:contraction\] shows a comparison of the numerically obtained group velocities in the critical phase to the field-theoretic prediction, finding very good agreement.
Elementary excitations for $\theta \to \pi/2$
========================================
With increasing $\theta$, further higher-energy features emerge. To understand them, let us focus on the limit $\theta\to\pi/2^-$ (right panel in Fig. \[fig:dsf\_critical\]). The low-energy continua have collapsed onto the line $\omega=0$ and we observe that the new dispersive excitations at higher energies can be described by intriguingly simple dispersion relations
\[eq:elementaryBQ\] $$\begin{aligned}
\varepsilon^{\pm}_1(k) &=3 + 2\cos(\pm k - 4\pi/3)\quad \text{and}\\
\varepsilon^{\pm}_2(k) &= 7/3 + 2/3 \cos(\pm k -\pi/3).\end{aligned}$$
Additional structures are constant-energy lines that appear at the minima and maxima of $\varepsilon^{\pm}_{1,2}(k)$, bounding corresponding excitation continua. The states in these continua can be explained as combinations of one of the massive excitations with one of the $\omega=0$ excitations with arbitrary momentum $k$.
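The constant-energy lines follow directly from the band extrema of Eq. \[eq:elementaryBQ\]; a quick numerical check (the $\pm$ branches share the same extrema):

```python
import numpy as np

def eps1(k):
    """First cosine band, Eq. (6a): 3 + 2 cos(k - 4*pi/3)."""
    return 3.0 + 2.0 * np.cos(k - 4 * np.pi / 3)

def eps2(k):
    """Second cosine band, Eq. (6b): 7/3 + (2/3) cos(k - pi/3)."""
    return 7.0 / 3.0 + (2.0 / 3.0) * np.cos(k - np.pi / 3)

k = np.linspace(-np.pi, np.pi, 10001)
# Band edges -> constant-energy lines bounding the continua:
print(eps1(k).min(), eps1(k).max())  # 1 and 5
print(eps2(k).min(), eps2(k).max())  # 5/3 and 3
```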
To characterize the nature of the dispersive features, in particular, to assess whether they are due to elementary one- or two-particle excitations, we compare subsystem density matrices for the perturbed time-evolved state $|\psi(t){\rangle}\propto e^{-i{\hat{H}}t}{\hat{B}}_0|\psi{\rangle}$ and the ground state $|\psi{\rangle}$. Let us define block ${\mathcal{A}}$ as the left part of the spin chain, up to but excluding the central site $x=0$ on which the perturbation is applied, and let us call the remainder of the system ${\mathcal{B}}$. Reduced density matrices for block ${\mathcal{A}}$ are obtained by a partial trace over the degrees of freedom of block ${\mathcal{B}}$, and we define ${\hat{\sigma}}_{\mathcal{A}}(t) := {\operatorname{Tr}}_{\mathcal{B}} |\psi(t){\rangle}{\langle}\psi(t)|$ and ${\hat{\rho}}_{\mathcal{A}} := {\operatorname{Tr}}_{\mathcal{B}} |\psi{\rangle}{\langle}\psi|$. To quantify how similar the perturbed time-evolved states and the ground state are on block ${\mathcal{A}}$, we employ the block fidelity $$\label{eq:subsystem-fidelity}
F_{\mathcal{A}}(t) := \left[ {\operatorname{Tr}}\sqrt{\sqrt{{\hat{\rho}}_{\mathcal{A}}}\, {\hat{\sigma}}_{\mathcal{A}}(t) \sqrt{{\hat{\rho}}_{\mathcal{A}}}} \right]^2. $$
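This block fidelity is the standard Uhlmann fidelity of two density matrices, which can be evaluated from eigendecompositions. A minimal sketch with toy $2\times 2$ diagonal density matrices (stand-ins for the actual MPS reduced density matrices):

```python
import numpy as np

def psd_sqrt(rho):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0, None)          # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def block_fidelity(rho, sigma):
    """Uhlmann fidelity F = [Tr sqrt(sqrt(rho) sigma sqrt(rho))]^2."""
    s = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2

# Toy reduced density matrices of a two-level block
rho = np.diag([0.9, 0.1])
sigma = np.diag([0.5, 0.5])
print(block_fidelity(rho, rho))    # identical states: F = 1
print(block_fidelity(rho, sigma))  # F = (sqrt(0.45) + sqrt(0.05))^2 = 0.8
```

In the simulations, ${\hat{\rho}}_{\mathcal{A}}$ and ${\hat{\sigma}}_{\mathcal{A}}(t)$ are of course obtained from the MPS tensors, but the formula is the same.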
![Evolution of block fidelities for different systems and perturbation operators ${\hat{B}}$ as indicated in brackets. We show the spin-1 Heisenberg chain \[$\theta=0$ in Eq. \] as an example for elementary one-particle excitations and want to characterize excitations at the biquadratic point $\theta=\pi/2^-$. Examples for elementary two-particle excitations include isotropic and anisotropic spin-$1/2$ XXZ chains and the spin-1 chain at the ULS point $\theta=\pi/4$.[]{data-label="fig:block_fidelities"}](fig005){width="0.96\columnwidth"}
For elementary single-particle excitations, we expect half of the weight of $|\psi(t){\rangle}$ to describe a left-moving particle. In this component, the state of the left subsystem is orthogonal to the ground state; hence it does not contribute to $F_{\mathcal{A}}(t)$. The other half describes a particle traveling to the right. On subsystem ${\mathcal{A}}$, this component looks like the ground state. We therefore expect $F_{\mathcal{A}}(t)$ to approach $1/2$ for large times. For elementary two-particle excitations, the wavefunction will contain components describing one particle traveling to the left and one traveling to the right. There can be additional components with both particles traveling in the same direction. Only components where both particles travel to the right will contribute to $F_{\mathcal{A}}(t)$, which should hence approach a value significantly below $1/2$.
Fig. \[fig:block\_fidelities\] shows fidelities $F_{\mathcal{A}}(t)$ for several models. We include isotropic and anisotropic spin-$1/2$ XXZ chains, and the bilinear-biquadratic spin-1 chain at the ULS point $\theta=\pi/4$. For these three examples, we know that the dynamics is dominated by elementary two-particle excitations [@bethe_1931; @cloizeaux_1962; @cloizeaux_1966; @yamada_1969; @bougourzi_1996; @bougourzi_1998; @sutherland_1975; @johannesson_1986]. As expected, $F_{\mathcal{A}}(t)$ converges to a small value significantly below $1/2$. For the spin-1 antiferromagnetic chain, where the dynamics is dominated by the single-magnon excitations, we confirm that the fidelity converges to approximately $1/2$. The small deviation can be attributed to the contribution of multi-magnon excitations with relatively small spectral weight. For the spin-1 chain at $\theta=\pi/2^-$, we find that the block fidelity approaches approximately $1/2$. This is a strong indication that the dispersive features in the dynamic structure factor in the right panel of Fig. \[fig:dsf\_critical\] are elementary one-particle excitations. Further evidence due to equal-time correlators is given in Appendix \[sec:equaltime-correl\].
Temperley-Lieb chain and integrability
======================================
The simple functional form of the dispersions suggests that an exact solution is possible for $\theta=\pi/2^-$. At the purely biquadratic point $\theta=\pi/2$, the Hamiltonian is in fact frustration free and can be expressed as a sum of bond-singlet projectors ${\hat{P}}_{i,i+1}$ such that ${\hat{H}}_{\pi/2} = \sum_i (1 + 3{\hat{P}}_{i,i+1})$. The groundstate space is exponentially large, containing all states without bond singlets. The projectors $\{{\hat{P}}_{i,i+1}\}$ obey a Temperley-Lieb algebra [@temperley_1971; @barber_1989], which implies integrability of the model, and a corresponding generalization of the coordinate Bethe ansatz has been found [@koeberle_1994]. Starting from a ferromagnetic reference state, the ${\hat{H}}_{\pi/2}$ eigenstates can be constructed by creating two types of pseudo-particles and adding so-called impurities. For $\theta=\pi/2^-$, an infinitesimal bilinear term $\sim\sum_i {\hat{{\boldsymbol}{S}}}_i \cdot {\hat{{\boldsymbol}{S}}}_{i+1}$ resolves the groundstate degeneracy. In terms of the Bethe ansatz, the resulting $\theta=\pi/2^-$ ground state is a specific linear combination of $\theta=\pi/2$ ground states containing a complex array of impurities and pseudo-particles. Unfortunately, the Bethe ansatz solution in its current form does not give access to this ground state, so analytically deriving the dispersion relations remains an open problem. We note, however, that the massive excitations must involve at least one bond singlet, which is consistent with the lower bound $\varepsilon_{1,2}^\pm(k)\geq 1$.
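The projector decomposition can be checked directly on a single bond: with the standard spin-1 matrices, $({\hat{{\boldsymbol}{S}}}_1\cdot{\hat{{\boldsymbol}{S}}}_2)^2 = 1 + 3{\hat{P}}_{1,2}$, where ${\hat{P}}_{1,2}$ projects onto the bond singlet. A short verification sketch:

```python
import numpy as np

# Spin-1 operators in the S^z eigenbasis {|1>, |0>, |-1>}
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
sp = np.sqrt(2) * np.diag([1.0, 1.0], 1).astype(complex)
sm = sp.conj().T
sx, sy = (sp + sm) / 2, (sp - sm) / (2j)

# Two-site biquadratic operator (S1 . S2)^2
s1s2 = sum(np.kron(a, a) for a in (sx, sy, sz))
biq = s1s2 @ s1s2

# Bond singlet |s> = (|1,-1> - |0,0> + |-1,1>)/sqrt(3),
# with two-site basis index 3*i1 + i2 and i = 0, 1, 2 for m = 1, 0, -1
s = np.zeros(9, dtype=complex)
s[0 * 3 + 2], s[1 * 3 + 1], s[2 * 3 + 0] = 1, -1, 1
s /= np.sqrt(3)
P = np.outer(s, s.conj())

print(np.allclose(biq, np.eye(9) + 3 * P))  # True
```

On the three total-spin sectors, $\hat{{\boldsymbol}{S}}_1\cdot\hat{{\boldsymbol}{S}}_2$ takes the values $-2, -1, 1$, so its square is $4$ on the singlet and $1$ elsewhere, which is precisely $1+3{\hat{P}}$.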
Conclusion
==========
We have explored the low-energy physics of isotropic spin-1 chains. Using an MPS algorithm [@binder_2018], we were able to compute precise dynamic structure factors, even in the highly entangled critical phase with $c=2$. We have found that the NL$\sigma$M and the Majorana field theory fail to capture the influence of the biquadratic term and provide only a rather unsatisfactory description for the Haldane phase. While an interpretation in terms of non-interacting magnons explains many features for small $\theta$, magnon interactions are quite important around and beyond the AKLT point, and a better field-theoretical understanding would be very valuable. In the critical phase, we have observed and explained the contraction of the two-particle continua away from the Uimin-Lai-Sutherland (ULS) point, in agreement with field-theory arguments. In addition, we have discovered new excitations at higher energies, which we have characterized as elementary one-particle excitations. For $\theta\to\pi/2^-$, the dispersion relations of these excitations approach intriguingly simple forms. We hope that this observation will stimulate further research, possibly extending Bethe ansatz treatments for the integrable Temperley-Lieb chain.
We gratefully acknowledge helpful discussions with Fabian H. L. Essler, Israel Klich, Arthur P. Ramirez, and Matthias Vojta, and support through US Department of Energy grant DE-SC0019449.
------------------------------------------------------------------------
Mapping to the non-linear sigma model {#sec:mapping_to_NLSM}
=====================================
In this appendix, we explicitly show the calculations for the mapping of the bilinear-biquadratic spin-1 model $$\label{eq:H_blbq2}
{\hat{H}}_\theta = \sum_i {\hat{h}}_{\theta}(i, i+1), \quad \text{where} \quad {\hat{h}}_{\theta}(i, i+1) \equiv \cos\theta({\hat{{\boldsymbol}{S}}}_i\cdot{\hat{{\boldsymbol}{S}}}_{i+1}) + \sin\theta({\hat{{\boldsymbol}{S}}}_i\cdot{\hat{{\boldsymbol}{S}}}_{i+1})^2,$$ to the non-linear sigma model (NL$\sigma$M), complementing the discussion in the main text. We use a path-integral description based on spin-coherent states as, e.g., described in Ref. [@fradkin_2013], and show how the derivations need to be modified due to the presence of the biquadratic term.
Path integral with spin-coherent states {#ssec:path_integral_spins}
---------------------------------------
For a single spin-$S$ with ${\hat{S}}^z$ eigenbasis $\{|S;M{\rangle}\}$, we define coherent states $|{{{\boldsymbol}{n}}}{\rangle}$ parametrized by unit vectors ${{{\boldsymbol}{n}}}$. They obey $({\hat{{\boldsymbol}{S}}}\cdot{{{\boldsymbol}{n}}})|{{{\boldsymbol}{n}}}{\rangle}=|{{{\boldsymbol}{n}}}{\rangle}$ and can be obtained by rotating the state with maximum ${\hat{S}}^z$ quantum number by an angle $\chi$, $$\label{eq:spin_coherent_states}
|{{{\boldsymbol}{n}}}{\rangle}:= e^{i\chi \frac{({{{\boldsymbol}{e}_z}}\times {{{\boldsymbol}{n}}})}{\vert{{{\boldsymbol}{e}_z}}\times {{{\boldsymbol}{n}}}\vert}\cdot{\hat{{\boldsymbol}{S}}}} |S;S{\rangle},$$ where ${{{\boldsymbol}{n}}}\cdot{{{\boldsymbol}{e}_z}}=\cos\chi$ and ${{{\boldsymbol}{e}_z}}$ is the unit vector along the $z$-axis. These states can be used to derive a path integral representation for spin systems [@fradkin_2013].
Let us first consider a general spin chain with nearest-neighbor interactions ${\hat{H}}= \sum_i {\hat{h}}(i, i+1)$, where ${\hat{h}}(i, i+1)$ acts on sites $i$ and $i+1$. Starting from the partition function in imaginary time, $Z = {\operatorname{Tr}}e^{-\beta{\hat{H}}}$, one follows the usual procedure of discretizing time, $\beta = N\tau$, and inserting resolutions of the identity in terms of the states for each intermediate time point and each lattice site $i$. This leads to the formal expression $$Z = \lim_{\substack{N\to\infty \\ N\tau = \beta}} \int\!{\mathcal{D}}[\{{{{\boldsymbol}{n}}}_i\}]\, e^{-{\mathcal{S}}[\{{{{\boldsymbol}{n}}}_i\}]}.$$ Here, ${\mathcal{D}}[\{{{{\boldsymbol}{n}}}_i\}]$ is an appropriate measure for the integral over the collection of smooth individual unit-vector paths ${{{\boldsymbol}{n}}}_i(t)$ with periodic boundary conditions ${{{\boldsymbol}{n}}}_i(0) = {{{\boldsymbol}{n}}}_i(\beta)$. For the case of nearest-neighbor interactions, one can show that the Euclidean action ${\mathcal{S}}$ takes the form $$\label{eq:action_spin_chain}
{\mathcal{S}}[\{{{{\boldsymbol}{n}}}_i\}] = -iS\sum_i{\mathcal{S}}_{{\mathrm{WZ}}}[{{{\boldsymbol}{n}}}_i(t)] + \sum_i \int_0^\beta\!{\mathrm{d}}t\,{\langle}{{{\boldsymbol}{n}}}_i(t),{{{\boldsymbol}{n}}}_{i+1}(t)|{\hat{h}}(i, i+1)|{{{\boldsymbol}{n}}}_i(t),{{{\boldsymbol}{n}}}_{i+1}(t){\rangle},$$ where $|{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}{\rangle}\equiv |{{{\boldsymbol}{n}}}_i{\rangle}\otimes |{{{\boldsymbol}{n}}}_{i+1}{\rangle}$ denotes the tensor product of two spins on neighboring sites. The first term is the sum of Wess-Zumino terms for individual spins, where ${\mathcal{S}}_{{\mathrm{WZ}}}[{{{\boldsymbol}{n}}}_i(t)]$ is given by the total area of the cap on the unit sphere bounded by the (closed) trajectory ${{{\boldsymbol}{n}}}_i(t)$.
Evaluating the matrix element
-----------------------------
In order to obtain the action for the spin-1 model as a function of $\theta$, we need to evaluate the matrix element $${\langle}{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}|{\hat{h}}_{\theta}(i, i+1)|{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}{\rangle}= \cos\theta {\langle}{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}|({\hat{{\boldsymbol}{S}}}_i\cdot{\hat{{\boldsymbol}{S}}}_{i+1})|{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}{\rangle}+
\sin\theta {\langle}{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}|({\hat{{\boldsymbol}{S}}}_i\cdot{\hat{{\boldsymbol}{S}}}_{i+1})^2|{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}{\rangle}.$$ Evaluating the bilinear term is straightforward and yields $$\label{eq:matrix_element_bilinear}
{\langle}{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}|({\hat{{\boldsymbol}{S}}}_i\cdot{\hat{{\boldsymbol}{S}}}_{i+1})|{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}{\rangle}= {\langle}{{{\boldsymbol}{n}}}_i|{\hat{{\boldsymbol}{S}}}|{{{\boldsymbol}{n}}}_i{\rangle}\cdot {\langle}{{{\boldsymbol}{n}}}_{i+1}|{\hat{{\boldsymbol}{S}}}|{{{\boldsymbol}{n}}}_{i+1}{\rangle}= {{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1}.$$ For the biquadratic term, let $|{{{\boldsymbol}{n}}}(\chi){\rangle}:= e^{i\chi{\hat{S}}^y}|S=1;M=1{\rangle}$. Then $|{{{\boldsymbol}{n}}}(0){\rangle}= |1;1{\rangle}$, and we consider the matrix element $$\label{eq:fchi}
f(\chi) := {\langle}{{{\boldsymbol}{n}}}(0),{{{\boldsymbol}{n}}}(\chi)|({\hat{{\boldsymbol}{S}}}_1\cdot{\hat{{\boldsymbol}{S}}}_2)^2|{{{\boldsymbol}{n}}}(0),{{{\boldsymbol}{n}}}(\chi){\rangle}.$$ Writing the operator in the form $({\hat{{\boldsymbol}{S}}}_1\cdot{\hat{{\boldsymbol}{S}}}_2)^2 = \big(\frac{1}{2}{\hat{S}}^+_1{\hat{S}}^-_2 + \frac{1}{2}{\hat{S}}^-_1{\hat{S}}^+_2 + {\hat{S}}^z_1{\hat{S}}^z_2\big)^2$, one obtains nine terms from expanding the square, and it is straightforward to see that only the two terms $({\hat{S}}^z_1)^2({\hat{S}}^z_2)^2$ and $\frac{1}{4}({\hat{S}}^+_1{\hat{S}}^-_1)({\hat{S}}^-_2{\hat{S}}^+_2)$ yield non-zero contributions in Eq. . Therefore, $$\begin{split}
f(\chi) &= {\langle}{{{\boldsymbol}{n}}}(0)|({\hat{S}}^z)^2|{{{\boldsymbol}{n}}}(0){\rangle}{\langle}{{{\boldsymbol}{n}}}(\chi)|({\hat{S}}^z)^2|{{{\boldsymbol}{n}}}(\chi){\rangle}+ \frac{1}{4} {\langle}{{{\boldsymbol}{n}}}(0)|{\hat{S}}^+ {\hat{S}}^-|{{{\boldsymbol}{n}}}(0){\rangle}{\langle}{{{\boldsymbol}{n}}}(\chi)|{\hat{S}}^- {\hat{S}}^+|{{{\boldsymbol}{n}}}(\chi){\rangle}\\
&= {\langle}{{{\boldsymbol}{n}}}(\chi)|({\hat{S}}^z)^2|{{{\boldsymbol}{n}}}(\chi){\rangle}+ \frac{1}{2} {\langle}{{{\boldsymbol}{n}}}(\chi)|{\hat{S}}^- {\hat{S}}^+|{{{\boldsymbol}{n}}}(\chi){\rangle}.
\end{split}$$ Note that ${\hat{S}}^-{\hat{S}}^+ = {\hat{{\boldsymbol}{S}}}^2 - ({\hat{S}}^z)^2 - {\hat{S}}^z$, and we can easily read off ${\langle}{{{\boldsymbol}{n}}}(\chi)|{\hat{{\boldsymbol}{S}}}^2|{{{\boldsymbol}{n}}}(\chi){\rangle}=2$ as well as ${\langle}{{{\boldsymbol}{n}}}(\chi)|{\hat{S}}^z|{{{\boldsymbol}{n}}}(\chi){\rangle}= \cos\chi$, because we have $S=1$ and ${\hat{{\boldsymbol}{S}}}$ transforms like a vector under rotations. To evaluate the remaining matrix element ${\langle}{{{\boldsymbol}{n}}}(\chi)|({\hat{S}}^z)^2|{{{\boldsymbol}{n}}}(\chi){\rangle}$, we expand the rotated state $|{{{\boldsymbol}{n}}}(\chi){\rangle}$ in the ${\hat{S}}^z$ eigenbasis $\{|1;M{\rangle}\}$, $$|{{{\boldsymbol}{n}}}(\chi){\rangle}= \sum_{M'=-1}^1 |1;M'{\rangle}{\langle}1;M'| e^{i\chi{\hat{S}}^y}|1;1{\rangle}= \frac{1}{2}(1 + \cos\chi) |1;1{\rangle}+ \frac{1}{\sqrt{2}}\sin\chi |1;0{\rangle}+ \frac{1}{2} (1-\cos\chi) |1;-1{\rangle},$$ where the coefficients are entries of the representation matrix for spin-$1$ rotations (Wigner (small) d-matrix). Hence, $${\langle}{{{\boldsymbol}{n}}}(\chi)|({\hat{S}}^z)^2|{{{\boldsymbol}{n}}}(\chi){\rangle}= \frac{1}{4}(1 + \cos\chi)^2 + \frac{1}{4} (1-\cos\chi)^2 = \frac{1}{2} + \frac{1}{2}\cos^2\chi.$$ Putting everything together, we obtain $f(\chi) = \frac{5}{4} - \frac{1}{2} \cos\chi + \frac{1}{4}\cos^2\chi$. As $({\hat{{\boldsymbol}{S}}}_1\cdot{\hat{{\boldsymbol}{S}}}_2)^2$ transforms as a scalar under rotations, the matrix element depends only on the angle between the two spin-coherent states. 
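The closed form for $f(\chi)$ can be cross-checked numerically with explicit spin-1 matrices (a verification sketch, independent of the analytic derivation above):

```python
import numpy as np

# Spin-1 operators in the S^z eigenbasis {|1>, |0>, |-1>}
sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
sp = np.sqrt(2) * np.diag([1.0, 1.0], 1).astype(complex)
sm = sp.conj().T
sx, sy = (sp + sm) / 2, (sp - sm) / (2j)

def coherent(chi):
    """Rotated highest-weight state exp(i chi S^y) |1;1>."""
    w, v = np.linalg.eigh(sy)
    u = (v * np.exp(1j * chi * w)) @ v.conj().T   # matrix exponential via eigh
    return u[:, 0]                                # action on |1;1> = (1,0,0)^T

# Two-site biquadratic operator (S1 . S2)^2
s1s2 = sum(np.kron(a, a) for a in (sx, sy, sz))
biq = s1s2 @ s1s2

chis = np.linspace(0.0, np.pi, 7)
f_num = np.array([np.real(np.kron(coherent(0.0), coherent(c)).conj()
                          @ biq @ np.kron(coherent(0.0), coherent(c)))
                  for c in chis])
f_ana = 5 / 4 - np.cos(chis) / 2 + np.cos(chis) ** 2 / 4
print(np.max(np.abs(f_num - f_ana)))   # numerically zero
```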
Thus, the calculation generalizes to any two states $|{{{\boldsymbol}{n}}}_1, {{{\boldsymbol}{n}}}_2{\rangle}$, and we can replace $\cos\chi$ by ${{{\boldsymbol}{n}}}_1\cdot{{{\boldsymbol}{n}}}_2$, obtaining $${\langle}{{{\boldsymbol}{n}}}_1,{{{\boldsymbol}{n}}}_2|({\hat{{\boldsymbol}{S}}}_1\cdot{\hat{{\boldsymbol}{S}}}_2)^2|{{{\boldsymbol}{n}}}_1,{{{\boldsymbol}{n}}}_2{\rangle}= \frac{5}{4} - \frac{1}{2} {{{\boldsymbol}{n}}}_1\cdot{{{\boldsymbol}{n}}}_2 + \frac{1}{4} ({{{\boldsymbol}{n}}}_1\cdot{{{\boldsymbol}{n}}}_2)^2.$$ Combining this result with Eq. , we arrive at the matrix element of the Hamiltonian interaction $$\label{eq:matrix_element_nlsm}
{\langle}{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}|{\hat{h}}_{\theta}(i, i+1)|{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}{\rangle}= \frac{5}{4} \sin\theta + \Big(\cos\theta - \frac{1}{2}\sin\theta \Big) ({{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1}) + \frac{1}{4} \sin\theta ({{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1})^2.$$ Note that $$\begin{split}
({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^2 &= {{{\boldsymbol}{n}}}_i^2 + 2{{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1} + {{{\boldsymbol}{n}}}_{i+1}^2 = 2{{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1} + 2, \qquad\text{and}\\
({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^4 &= (2{{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1} + 2)^2 = 4({{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1})^2 + 8 {{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1} + 4,
\end{split}$$ such that $$\begin{split}
{{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1} &= \frac{1}{2}({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^2 + {\mathrm{const}}, \qquad\text{and}\\
({{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1})^2 &= \frac{1}{4} ({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^4 - ({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^2 + {\mathrm{const}}.
\end{split}$$ Inserting this into Eq. yields for the matrix element, up to an irrelevant additive constant, $${\langle}{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}|{\hat{h}}_{\theta}(i, i+1)|{{{\boldsymbol}{n}}}_i,{{{\boldsymbol}{n}}}_{i+1}{\rangle}= \frac{1}{2} (\cos\theta - \sin\theta) ({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^2 + \frac{1}{16} \sin\theta ({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^4 + {\mathrm{const}}.$$ Then, as a function of $\theta$, the action for the bilinear-biquadratic spin-1 chain is given by $$\label{eq:action_blbq}
{\mathcal{S}}_\theta[\{{{{\boldsymbol}{n}}}_i\}] = -i \sum_i{\mathcal{S}}_{{\mathrm{WZ}}}[{{{\boldsymbol}{n}}}_i(t)] + \int_0^\beta\!{\mathrm{d}}t\,\sum_i \Big[\frac{1}{2}(\cos\theta - \sin\theta)({{{\boldsymbol}{n}}}_i(t)+{{{\boldsymbol}{n}}}_{i+1}(t))^2 + \frac{1}{16} \sin\theta ({{{\boldsymbol}{n}}}_i(t)+{{{\boldsymbol}{n}}}_{i+1}(t))^4 \Big].$$ Here, the special case $\theta=0$ corresponds to the spin-1 Heisenberg antiferromagnet, for which the original derivation was done [@haldane_1983_pla; @haldane_1983_prl; @affleck_1989].
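The rewriting of the matrix element in terms of $({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^2$ holds up to an additive constant, which is easy to spot-check numerically (here for one arbitrary value of $\theta$, writing everything as a polynomial in $c = {{{\boldsymbol}{n}}}_i\cdot{{{\boldsymbol}{n}}}_{i+1}$):

```python
import numpy as np

def matel(c, th):
    """Matrix element as a polynomial in c = n_i . n_{i+1}."""
    return 5/4 * np.sin(th) + (np.cos(th) - np.sin(th) / 2) * c \
        + np.sin(th) / 4 * c**2

def rewritten(c, th):
    """Same quantity via (n_i + n_{i+1})^2 = 2c + 2, up to a constant."""
    s2 = 2 * c + 2
    return (np.cos(th) - np.sin(th)) / 2 * s2 + np.sin(th) / 16 * s2**2

th = 0.3                                       # arbitrary theta for the check
const = matel(1.0, th) - rewritten(1.0, th)    # fix the constant at c = 1
c = np.linspace(-1.0, 1.0, 9)
dev = np.max(np.abs(matel(c, th) - rewritten(c, th) - const))
print("max deviation:", dev)                   # numerically zero
```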
Continuum limit and non-linear sigma model mapping
--------------------------------------------------
In the next steps of the derivation, we follow the same approach that was taken for the Heisenberg antiferromagnet [@haldane_1983_pla; @haldane_1983_prl; @affleck_1989; @fradkin_2013]. It is reasonable to expect staggered short-range order for the spin field ${{{\boldsymbol}{n}}}$, and the most relevant low-energy modes should be ferromagnetic and antiferromagnetic fluctuations. Hence, we can choose an ansatz that separates these relevant degrees of freedom, $$\label{eq:field_separation_nlsm}
{{{\boldsymbol}{n}}}_i = (-1)^i \sqrt{1-a^2{{{\boldsymbol}{l}}}_i^2}\, {{{\boldsymbol}{m}}}_i + a{{{\boldsymbol}{l}}}_i,$$ where $a$ is the lattice spacing, and we have the constraints ${{{\boldsymbol}{m}}}_i^2 = 1$ and ${{{\boldsymbol}{m}}}_i \cdot {{{\boldsymbol}{l}}}_i = 0$. Here, ${{{\boldsymbol}{m}}}_i$ and ${{{\boldsymbol}{l}}}_i$ are slowly varying, which allows us to take the continuum limit $a\to 0$. We can write ${{{\boldsymbol}{m}}}_{i+1} \approx {{{\boldsymbol}{m}}}_i + a (\partial_x {{{\boldsymbol}{m}}}_i)$ and similarly for ${{{\boldsymbol}{l}}}_i$. When inserting the ansatz into the action , we only need to keep terms to the lowest order in $a$. For the first term, this yields $$\frac{1}{2}({{{\boldsymbol}{n}}}_{i-1}+{{{\boldsymbol}{n}}}_i)^2 + \frac{1}{2}({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^2 = a^2 \left((\partial_x {{{\boldsymbol}{m}}}_i)^2 + 4 {{{\boldsymbol}{l}}}_i^2 \right) + {\mathcal{O}}(a^3),$$ where we have grouped two neighboring interaction terms together to take advantage of the cancellation of additional terms. Correspondingly, the contributions from the second term $({{{\boldsymbol}{n}}}_i+{{{\boldsymbol}{n}}}_{i+1})^4$ will be of the order ${\mathcal{O}}(a^4)$. Hence, the second term can be ignored in the continuum (low-energy) limit, and the effective action has the same form as in the case of the Heisenberg antiferromagnet ($\theta=0$). The only change due to the biquadratic term is an effective rescaling of the coupling in the form $J(\theta) = \cos\theta - \sin\theta$. Thus, the remaining steps in the derivation for the mapping to the NL$\sigma$M are identical to the case of the Heisenberg antiferromagnet.
After taking the continuum limit for the Wess-Zumino terms as well, one can integrate out the fluctuations in the field ${{{\boldsymbol}{l}}}$, which yields an effective action $$\label{eq:action_afm_chain}
{\mathcal{S}}[{{{\boldsymbol}{m}}}] = \iint\!{\mathrm{d}}x\,{\mathrm{d}}t\,\frac{1}{2g}\left(v(\theta)\,(\partial_x {{{\boldsymbol}{m}}})^2 + \frac{1}{v(\theta)} (\partial_t {{{\boldsymbol}{m}}})^2\right) + i\phi {\mathcal{Q}}[{{{\boldsymbol}{m}}}],$$ where we have introduced the coupling constant $g=2/S$, the spin wave velocity $v(\theta)=2aJ(\theta)S$, and the topological angle $\phi=2\pi S$. The second term contains the topological charge or winding number of the field configuration $${\mathcal{Q}}[{{{\boldsymbol}{m}}}] = \frac{1}{8\pi} \iint\!{\mathrm{d}}x\,{\mathrm{d}}t\,\epsilon_{ij} {{{\boldsymbol}{m}}}\cdot(\partial_i {{{\boldsymbol}{m}}}\times \partial_j {{{\boldsymbol}{m}}}) \in \mathbb{Z}.$$ Note that for integer spin $S$, the imaginary part $\phi {\mathcal{Q}}[{{{\boldsymbol}{m}}}]$ in Eq. is always an integer multiple of $2\pi$, such that it does not affect the physics. In this case, the model is described by the first term, which is the standard $O(3)$ non-linear sigma model (NL$\sigma$M). For half-integer spin, however, the contributions to the path integral of configurations with an odd winding number ${\mathcal{Q}}$ are weighted by a factor $-1$. This leads to fundamentally different physics, which is at the core of Haldane’s conjecture [@haldane_1983_pla; @haldane_1983_prl; @affleck_1989]. While antiferromagnetic chains with integer spin are gapped, those with half-odd-integer spin are gapless.
In conclusion, the low-energy physics of the bilinear-biquadratic spin-1 chain should be described by the NL$\sigma$M, which predicts an excitation gap $\Delta(\theta) \propto J(\theta) e^{-\pi S}$ and a dispersion $\varepsilon(k) = \sqrt{\Delta^2 + v^2 (k-\pi)^2}$ for the single-magnon line near $k=\pi$. In the main text, we are testing the dependence of the gap and the spin wave velocity on the Hamiltonian parameter $\theta$, for which we summarize the NL$\sigma$M predictions $$\Delta(\theta) \propto (\cos\theta - \sin\theta)\qquad\text{and}\qquad v^2(\theta) \propto (\cos\theta - \sin\theta)^2.$$
MPS computation of dynamic structure factors {#sec:technique}
============================================
In this appendix, we briefly summarize the numerical techniques used to compute the dynamic structure factors $$\label{eq:appendix_dsf}
S(k, \omega) = \sum_x e^{-ikx} \int\!{\mathrm{d}}t\,e^{i\omega t} S(x, t) \qquad\text{with}\qquad S(x, t) = {\langle}\psi|e^{i{\hat{H}}t}{\hat{A}}_x e^{-i{\hat{H}}t} {\hat{B}}_0|\psi{\rangle}$$ presented in the main text. Here, $|\psi{\rangle}$ is the ground state and ${\hat{A}}_x$ and ${\hat{B}}_0$ are operators acting on sites $x$ and $0$, respectively, for which we probe the response of the system. In this paper, we compute dynamic *spin* and *quadrupolar* structure factors, for which ${\hat{A}}={\hat{S}}^z$ and ${\hat{A}}={\hat{Q}}={\mathrm{diag}}(1/3, -2/3, 1/3)$, respectively, and ${\hat{B}}={\hat{A}}$ in both cases. We use a real-time scheme to evaluate response functions of the form $$\label{eq:appendix_response}
S(x,t) = e^{iE_0 t}{\langle}\psi|{\hat{A}}_x e^{-i{\hat{H}}t}{\hat{A}}_0|\psi{\rangle}$$ for a range of distances $x$ and times $t$, where $E_0$ denotes the groundstate energy. We proceed as follows.
First, we compute a uniform infinite MPS (iMPS) approximation $|\psi{\rangle}$ of the ground state using the iDMRG algorithm [@white_1992; @white_1993; @mcculloch_2008]. To ensure convergence, we choose an MPS unit cell of two sites for the Haldane phase and three sites for the critical phase [@binder_2018]. Then, we initialize an appropriate spatial range or *window* with copies of the groundstate unit cell. In this range, the iMPS tensors will be allowed to vary as described in Refs. [@phien_2012; @milsted_2013].
To compute the response function , we apply the operator ${\hat{A}}$ at a site $i=0$ in the center of the window to get ${\hat{A}}_0|\psi{\rangle}$. For the simulation of the time evolution $|\psi'(t){\rangle}:= e^{-i{\hat{H}}t}{\hat{A}}_0|\psi{\rangle}$, we use tDMRG [@vidal_2004; @white_2004; @daley_2004] with infinite boundary conditions [@phien_2012; @milsted_2013]. All Hamiltonian terms that are supported outside the finite window are projected onto the reduced Hilbert space of the left or right block, and the MPS tensors outside of the window are kept invariant. This is possible because the perturbation only has a significant effect inside a causal cone, a finite spatial region growing linearly with time [@lieb_1972]. We choose the size of the heterogeneous window large enough to contain the causal cone for all simulation times. Hence, the wavefunction close to the boundary locally looks like the ground state.
Note that, in the $\{{\hat{S}}^z_i\}$ eigenbasis, the coefficients of the wavefunction $|\psi'(-t){\rangle}= e^{i{\hat{H}}t} {\hat{A}}_0|\psi{\rangle}$ are just the complex conjugates of the coefficients of $|\psi'(t){\rangle}$. So $S(x,t)$ can be evaluated very efficiently by computing overlaps of the time-evolved state with its complex conjugate, spatially shifted relative to each other by $x$ sites. See Ref. [@binder_2018] for details on how the corresponding contraction of iMPS tensors is performed. We obtain $$\label{eq:SxtInfinite}
S(x, 2t) = e^{iE_0(2t)}{\langle}\psi'(-t)|\hat{T}_{-x}|\psi'(t){\rangle},$$ where $\hat{T}_{-x}$ denotes the operator shifting by $-x$ sites. This approach has two advantages. First, by evolving $\psi'$ up to time $t$, we obtain response functions up to time $2t$. This is important because entanglement grows during the time evolution, leading to a corresponding increase in computation costs, which limits the accessible time range. Second, one can obtain the response function for all lattice sites with just one time-evolution run [@binder_2018], as compared to conventional finite-size simulations, where a separate time-evolution run is required for each lattice site $x$.
In the simulations, we use windows of size $L=255$ to $448$. For the time evolution, we employ a fourth-order Lie-Trotter-Suzuki decomposition [@trotter_1959; @suzuki_1976; @barthel_2019] of the time-evolution operator with time step $\tau=0.1$. To control the precision and the computation costs, we truncate components of the wavefunction with Schmidt coefficients $\lambda_k<\lambda_{\text{trunc}}$, where we choose the truncation threshold in the range $\lambda^2_{\text{trunc}}\sim 10^{-10} - 10^{-8}$, depending on the Hamiltonian parameter $\theta$. We typically evaluate the response function up to times in the range $t\approx 100$.
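To illustrate the double Fourier transform on something concrete, the following toy sketch builds a synthetic response from a single invented dispersive mode $\varepsilon(k)$ (a stand-in for actual MPS data, with a Gaussian time window against ringing) and recovers the peak of $S(k,\omega)$ at $\omega=\varepsilon(k)$:

```python
import numpy as np

L, T, dt = 64, 400, 0.1
x = np.arange(L) - L // 2                   # lattice sites
t = dt * np.arange(T)                       # simulation times
eps = lambda k: 1.0 + 2.0 * (1.0 + np.cos(k))   # hypothetical dispersion

ks = 2 * np.pi * np.fft.fftfreq(L)          # lattice momenta
# synthetic response: S(x,t) = (1/L) sum_k e^{i k x} e^{-i eps(k) t}
Sxt = np.exp(-1j * np.outer(t, eps(ks))) @ np.exp(1j * np.outer(ks, x)) / L

k0 = np.pi / 2
Skt = Sxt @ np.exp(-1j * k0 * x)            # spatial Fourier transform at k0
win = np.exp(-0.5 * (t / (0.4 * t[-1]))**2) # Gaussian window against ringing
Skw = np.abs(np.fft.ifft(Skt * win))        # ~ int dt e^{i omega t} S(k0, t)
omega = 2 * np.pi * np.fft.fftfreq(T, d=dt)
print(omega[np.argmax(Skw)])                # peak near eps(k0) = 3
```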
![image](fig006){width="90.00000%"}
It is known that, in the Haldane phase, the stable single-magnon excitation contributes significantly to the dynamics. This can be seen as a $\delta$-peak in the dynamic structure factor and as a nondecaying oscillation in the response function. As an example, we consider the Heisenberg antiferromagnet \[$\theta=0$ in Eq. \], and show the response function $S(k,t)=\sum_x e^{-ikx} S(x,t)$ for $k=0.8\pi$ in Fig. \[fig:technique\_haldane\_dsf\]. Taking the Fourier transform $S(k,\omega)=\int\!{\mathrm{d}}t\,e^{i\omega t}S(k,t)$ directly, using a finite window in the time domain, would lead to strong ringing artifacts in the spectrum. To avoid this, we split the response function into two parts, separating the contributions from the single-magnon state and the remainder such that $$\label{eq:technique_haldane_split}
S(k,t) =: a e^{-i\omega_0 t} + \tilde{S}(k,t) \quad\text{and}\quad
S(k,\omega) =: 2\pi a \delta(\omega-\omega_0) + \tilde{S}(k,\omega).$$ Here, $a$ and $\omega_0$ are real nonnegative parameters describing the amplitude and the frequency of the single-magnon peak, which are to be chosen such that the remainder vanishes for large times, $\tilde{S}(k,t)\to0$ for $t\to\infty$. As the contribution $\tilde{S}(k,\omega)$ to the structure factor captures broad multi-magnon continua, its signal becomes localized in the time representation, and $\tilde{S}(k,t)$ typically decays relatively fast. In our MPS simulations, we can hence reach the regime where $S(k,t)$ is dominated by the nondecaying oscillation $ae^{-i\omega_0t}$ and $\tilde{S}(k,t)$ becomes negligible. This allows us to extract the parameters $a$ and $\omega_0$ by simply fitting $ae^{-i\omega_0t}$ to $S(k,t)$ in a suitably chosen time window. Then, we obtain $\tilde{S}(k,t) = S(k,t) - ae^{-i\omega_0t}$, which, for our example, is shown in Fig. \[fig:technique\_haldane\_dsf\]. The contribution of the remainder is small compared to the single-magnon oscillation, and it decays fast as a function of $t$. Hence, we can compute $\tilde{S}(k,\omega)$ by a direct Fourier transform, complemented with linear prediction if necessary [@white_2008; @barthel_2009]. Note that, in order to visualize the $\delta$-peak in the structure-factor plots, we replace it by a very narrow Gaussian peak centered at $\omega_0$ with total spectral weight $2\pi a$, as shown in Fig. \[fig:technique\_haldane\_dsf\].
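The fitting step can be sketched on synthetic data: for a toy $S(k,t)$ built from a nondecaying oscillation plus a rapidly decaying remainder (all numbers hypothetical), a late-time window cleanly recovers $a$ and $\omega_0$:

```python
import numpy as np

dt = 0.1
t = dt * np.arange(2000)
a0, w0 = 0.7, 1.3                      # hypothetical amplitude and frequency
Skt = (a0 * np.exp(-1j * w0 * t)
       + 0.4 * np.exp(-1j * 2.1 * t) * np.exp(-t / 5))   # toy S(k,t)

# Fit a e^{-i w t} in a late-time window where the remainder has died out
tail = slice(1500, 2000)
phase = np.unwrap(np.angle(Skt[tail]))
w_fit = -np.polyfit(t[tail], phase, 1)[0]   # frequency from the phase slope
a_fit = np.abs(Skt[tail]).mean()            # amplitude from the modulus
print(w_fit, a_fit)                         # ~ 1.3 and 0.7

S_tilde = Skt - a_fit * np.exp(-1j * w_fit * t)  # remainder for the direct FT
```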
The whole procedure has to be carried out separately for each momentum $k$ (for the full dynamic structure factors presented in the main text, we use momentum increments of $\Delta k=0.001\pi$). As the fit parameters $a$ and $\omega_0$ are smooth functions of $k$, it is advantageous to carry out the fits in a sweep through the Brillouin zone, and to initialize each fit with the resulting parameters from the previous momentum $k$. This way, one avoids local minima in the parameter optimization. Note that this approach is only applicable in the region where the single-magnon excitation is stable. In practice, we observe that the amplitude $a$ of the single-magnon contribution decreases and then vanishes as $k$ approaches the momentum $k_c$ where the single-magnon line enters the multi-magnon continuum.
Equal-time correlators {#sec:equaltime-correl}
======================
![image](fig007){width="\textwidth"}
In this appendix, we describe an alternative approach to characterizing the nature of the elementary excitations observed for $\theta\to\pi/2^-$ in the spin-1 chain . It is based on equal-time correlators and complements the analysis of block fidelities in the main text. For the computation of response functions, we apply a local operator (here ${\hat{S}}^z_0$) to the ground state $|\psi{\rangle}$, which adds some excitation energy $\varepsilon$ to the system. During the time evolution of the perturbed wave function $|\psi(t){\rangle}:= e^{-i{\hat{H}}t}{\hat{S}}^z_0|\psi{\rangle}$, the energy is distributed in the system in a causal cone. This is quantified by $h_x(t):={\langle}\psi(t)|{\hat{h}}(x,x+1)|\psi(t){\rangle}-h_0$, which is the expectation value of the local bond energy relative to the groundstate energy density $h_0$. The total excitation energy $\varepsilon:= \sum_x h_x(t)$ is a conserved quantity. The equal-time correlator $C_{x_1,x_2}(t):= {\langle}\psi(t)|\hat{h}_{x_1}\hat{h}_{x_2}|\psi(t){\rangle}$ quantifies correlations in the distribution of the excitation energy at fixed times $t$.
The equal-time correlator can be employed to distinguish elementary one-particle and two-particle excitations. Strong correlations $C_{x_1,x_2}(t)$ for $x_1$ and $x_2$ far apart are the signature of two-particle excitations, as components of the wave function contain both a left- and a right-traveling excitation. The absence of such strong correlations is an indicator of dominant one-particle excitations, where wave-function components contain either a particle traveling to the left or to the right.
We test this approach for the spin-1 Heisenberg antiferromagnet \[$\theta=0$ in \] and for the anisotropic spin-$1/2$ XXZ chain $$\label{eq:H_XXZ}
{\hat{H}}= \sum_i\left[ \frac{1}{2}\big({\hat{s}}^+_i {\hat{s}}^-_{i+1} + {\hat{s}}^-_i {\hat{s}}^+_{i+1}\big) + J_z {\hat{s}}^z_i {\hat{s}}^z_{i+1} \right]$$ with anisotropy $J_z=3$, which places the model in the gapped Néel phase. Then, we apply the technique to learn about the nature of excitations in the critical phase of the bilinear-biquadratic spin-1 chain in the limit $\theta\to\pi/2^-$.
The results are shown in Fig. \[fig:equal-time-correlators\]. In all three systems, the excitation energy spreads in a causal cone emanating from the place and time of the perturbation $(x,t)=(0,0)$. The spin-1 Heisenberg antiferromagnet has dominant single-magnon excitations and, correspondingly, correlations in $C_{x_1,x_2}(t)$ are weak except for the region where $x_1\approx x_2$. For the anisotropic spin-1/2 XXZ chain, the elementary excitations are spinons that are always created in pairs, leading to strong non-local correlations. These numerical results confirm the expectations for the two test cases. For the bilinear-biquadratic spin-1 chain with $\theta=\pi/2^-$, we observe no strong correlations for distant sites $x_1$ and $x_2$. Therefore, we conclude that the dynamics is dominated by elementary one-particle excitations. This provides further evidence supporting our result presented in the main text, where we reached the same conclusion through the analysis of subsystem fidelities.
---
address: |
University of Illinois, Department of Physics, 1110 West Green St., Urbana, IL 61801, USA\
E-mail: kpitts@uiuc.edu
author:
- 'K. T. PITTS'
title: |
MIXINGS, LIFETIMES, SPECTROSCOPY AND PRODUCTION\
OF HEAVY FLAVOR AT THE TEVATRON
---
Introduction
============
In the 1980’s and 1990’s, the complementary $b$ physics programs of CLEO, ARGUS, the LEP experiments, SLD, DØ and CDF began to make significant contributions to our understanding of the production and decay of $B$ hadrons. With the successful turn-on of the BaBar and Belle experiments in the last few years, many experimental measurements in the $b$ sector have achieved impressive precision.
A number of questions remain, and it will again take an effort of complementary measurements to make further progress on our understanding of the $b$ system. The Fermilab Tevatron, along with the upgraded CDF and DØ detectors, offers a unique opportunity to study heavy flavor production and decay. In many cases, the measurements that can be performed at the Tevatron are complementary to those performed at the $e^+e^-$ $B$-factories.
In this document, we summarize some of the recent experimental progress in the measurements of $B$ lifetimes, spectroscopy, mixing, and heavy flavor production at the Tevatron. Since the details of many of these analyses are presented in other publications, we will attempt to include some background information in this summary that the reader might not find elsewhere.
Overview: $B$ Physics at the Tevatron
=====================================
In $p\overline{p}$ collisions at $\sqrt{s}=1.96\, \rm TeV$, the $b\overline{b}$ cross section is large, ${\cal O}(50\, \mu\rm b)$, yet it is only about $1/1000^{\rm th}$ of the total inelastic $p\overline{p}$ cross section. At a typical Tevatron instantaneous luminosity of ${\cal L} = 4\times 10^{31}\, \rm cm^{-2}s^{-1}$, we have a $b\overline{b}$ production rate of $\sim\! 2~{\rm kHz}$ compared to an inelastic scattering rate of $\sim\! 2~{\rm MHz}$.
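As a back-of-the-envelope check on these numbers (rate = cross section $\times$ luminosity, with the order-of-magnitude cross sections quoted above):

```python
BARN = 1e-24                  # 1 barn in cm^2
sigma_bb = 50e-6 * BARN       # b-bbar cross section, ~50 microbarn
sigma_inel = 1000 * sigma_bb  # inelastic cross section, ~1000x larger
lumi = 4e31                   # instantaneous luminosity, cm^-2 s^-1

rate_bb = sigma_bb * lumi     # b-bbar production rate in Hz (~2 kHz)
rate_inel = sigma_inel * lumi # inelastic rate in Hz (~2 MHz)
```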
  -------------------------------------------------------------------------------------------------------------
  Name                         $\overline{b}$ hadron                                            $b$ hadron
  --------------------------- ---------------------------------------------------------------- ----------------------------------------
  charged $B$ meson            $B^+ =|\overline{b}u>$                                           $B^- = |b\overline{u}>$
  neutral $B$ meson            $B^0 = |\overline{b}d>$                                          $\overline{B^0} = |b\overline{d}>$
  $B_s$ ($B$-sub-$s$) meson    $B^0_s = |\overline{b}s>$                                        $\overline{B^0_s}=|b\overline{s}>$
  $B_c$ ($B$-sub-$c$) meson    $B^+_c = |\overline{b}c>$                                        $B^-_c = |b\overline{c}>$
  $\Lambda_b$ (Lambda-$b$)     $\overline{\Lambda_b} = |\overline{u}\overline{d}\overline{b}>$  $\Lambda_b = |udb>$
  $\Upsilon$ (Upsilon)         $\Upsilon = |b\overline{b}>$
  -------------------------------------------------------------------------------------------------------------
  : $B$ mesons and baryons. This is an incomplete list, as there are excited states of the mesons and baryons (*e.g.* ${B^*}^0$). Also, a large number of $b$-baryon states (*e.g.* $\Sigma^-_b = | ddb >$) and bottomonium states (*e.g.* $\eta_b$, $\chi_b$) are not listed.[]{data-label="ta:b"}
The $b\overline{b}$ quarks are produced by the strong interaction, which preserves “bottomness”; they are therefore always produced in pairs.[^1] Unlike $e^+e^-$ collisions on the $\Upsilon(4S)$ resonance, the high energy collisions access all angular momentum states, so the $b$ and $\overline{b}$ are produced incoherently. As a consequence, lifetime, mixing and [*CP*]{}-violation measurements can be performed by reconstructing a single $B$ hadron in an event. The produced $b$ quarks can fragment into all possible species of $B$ hadrons, including $B_s$, $B_c$ and $b$-baryons. Table \[ta:b\] lists the most common species of $B$ hadrons, all of which are accessible at the Tevatron.
The transverse momentum ($p_T$) spectrum of the produced $B$ hadrons is a steeply falling distribution, so most $B$ hadrons are produced at very low transverse momentum. For example, a fully reconstructed sample of $B\rightarrow J/\psi K$ decays has an average $B$ meson $p_T$ of about $10\, {\rm GeV}/c$. As a consequence, the tracks from these decays are typically quite soft, often having $p_T < 1\, {\rm GeV}/c$. One of the experimental limitations in reconstructing these modes is the ability to find charged tracks at very low momentum. $B$ hadrons with low transverse momentum do not necessarily have low total momentum: quite frequently, the $B$ mesons have very large longitudinal momentum (the momentum component along the beam axis). Such $B$ hadrons are boosted along the beam axis and consequently fall outside the acceptance of the detector.
To reconstruct the $B$ hadrons that do fall within the detector acceptance, the experiments need excellent tracking that extends down to low transverse momentum, excellent vertex detection to identify the long-lived hadrons containing heavy flavor, and high-rate trigger and data acquisition systems to handle the high rates associated with this physics. In the following section, we outline some of the relevant aspects of the Tevatron detectors.
The CDF and DØ Detectors
=========================
The CDF and DØ detectors are both axially symmetric detectors that cover about $98\% $ of the full $4\pi$ solid angle around the proton-antiproton interaction point. For Tevatron Run II, both experiments have axial solenoidal magnetic fields, central tracking, and silicon microvertex detectors. Additional details about the experiments can be found elsewhere.[@d0; @cdf] The strengths of the detectors are somewhat complementary to one another and are discussed briefly below.
DØ
--
The DØ tracking volume features a $2\, \rm T$ solenoid magnet surrounding a scintillating fiber central tracker that covers the region $|\eta| \le 1.7$, where $\eta$ is the pseudorapidity, $\eta = -\ln(\tan(\theta/2))$, and $\theta$ the polar angle measured from the beamline. The DØ silicon detector has a barrel geometry interspersed with disk detectors that extend the forward tracking to $|\eta| \le 3$. In addition, the DØ muon system covers $|\eta| \le 2$, and the uranium/liquid-argon calorimeter has very good energy resolution for electron, photon and hadronic jet energy measurements.
CDF
---
The CDF detector features a $1.4\, \rm T$ solenoid surrounding a silicon microvertex detector and gas-wire drift chamber. The CDF spectrometer has excellent mass resolution. These properties, combined with muon detectors and calorimeters, allow for excellent muon and electron identification, as well as precise tracking and vertex detection for $B$ physics.
Triggering
----------
Both experiments exploit heavy flavor decays which have leptons in the final state. Identification of dimuon events down to very low momentum is possible, allowing for efficient $J/\psi \rightarrow \mu^+\mu^-$ triggers. As a consequence, both experiments are able to trigger upon the $J/\psi$ decay, and then fully reconstruct decay modes such as $B^0_s\rightarrow J/\psi \phi$, with $\phi\rightarrow K^+K^-$. Triggering on dielectrons to isolate $J/\psi \rightarrow e^+e^-$ decays is also possible, although at low momentum the backgrounds become more problematic.
CDF has implemented a $J/\psi\rightarrow e^+e^-$ trigger requiring each electron to have $p_T>2\, \rm GeV/{\it c}$. Because the triggering and selection cuts required to reduce background are more stringent than they are for the dimuon mode, the yield for the $J/\psi\rightarrow e^+e^-$ mode is about $1/10^{\rm th}$ the yield for the dimuon mode. However, since the selection criteria isolate a dielectron sample at higher momentum than the dimuon mode, the $B$ purity in the $J/\psi\rightarrow e^+e^-$ channel is higher. The analyses shown here use only the dimuon mode, although future analyses will supplement the signal sample with the dielectron mode.
Both experiments also have inclusive lepton triggers designed to accept semileptonic $B\rightarrow \ell \nu_\ell X$ decays. DØ has an inclusive muon trigger with excellent acceptance, allowing them to accumulate very large samples of semileptonic decays. The CDF semileptonic triggers require an additional displaced track associated with the lepton, providing cleaner samples with smaller yields.
New to the CDF detector is the ability to select events based upon track impact parameter. The CDF Silicon Vertex Tracker (SVT) operates as part of the Level 2 trigger system. Tracks identified by the eXtremely Fast Tracker (XFT) are passed to the SVT, which appends silicon hits to the tracks to measure the impact parameter of each track. With the high trigger rate, it is very challenging to extract the data from the silicon detector and perform pattern recognition quickly. The SVT takes on average $25\, \mu\rm s$ per event to extract the data from the silicon, perform clustering, and fit tracks. As shown in Fig. \[fig:svt\], the impact parameter resolution for tracks with $p_T>2\, {\rm GeV}/c$ is $47\, \mu\rm m$, which is a combination of the primary beam spot size ($30\, \mu\rm m$) and the resolution of the device ($35\, \mu\rm m$). The CDF SVT has already shown that it will provide a number of new modes in both bottom and charm physics that were previously not accessible. DØ is currently commissioning a displaced track trigger as well.
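The quoted impact-parameter resolution is consistent with combining the beam-spot size and the device resolution in quadrature (a sketch; the quadrature combination of the two independent contributions is our assumption about what "combination" means here):

```python
import math

beam_spot = 30.0   # transverse beam-spot size, micrometers
device = 35.0      # intrinsic SVT resolution, micrometers

# Independent Gaussian contributions add in quadrature.
total = math.hypot(beam_spot, device)  # ~46 micrometers, close to the quoted 47
```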
Heavy Flavor Yields
===================
As mentioned in the previous section, the $b\overline{b}$ production cross section is very large at the Tevatron. Although triggering and reconstruction are very challenging, large samples can indeed be acquired. Table \[ta:yields\] shows approximate yields for several different modes at the Tevatron.[^2] As of this writing, each experiment has logged approximately ${\cal L}= 200 \, \rm pb^{-1}$ of integrated luminosity. The data sample is expected to more than double during the 2004 running period.
Hadron Decay mode Trigger Yield per pb$^{-1}$
------------- ----------------------- --------------- ---------------------
$D^0$ $K^-\pi^+ $ $d_0$ 6000
$D^+$ $K^-\pi^+\pi^+$ $d_0$ 5000
$B^-$ $D^0\pi^- $ $d_0$ 16
$B^0_s$ $ D^-_s \pi^+ $ $d_0$ 1
$B^0_s $ $ K^+K^- $ $d_0$ 1.5
$J/\psi $ $\mu^+\mu^-$ dimuon 7000
$B^+$ $J/\psi K^+$ dimuon 11
$B_s$ $J/\psi \phi$ dimuon 1
$\Lambda_b$ $J/\psi \Lambda $ dimuon 0.7
$B$ $D\ell\nu $ single lepton 400
$\Lambda_b$ $\Lambda_c \ell \nu $ single lepton 10
  : Tevatron yields for various fully- and partially-reconstructed heavy flavor modes. The column labeled “trigger” lists the primary signature utilized to select the events. The trigger type $d_0$ refers to the impact-parameter selection described in the text. This list is not meant to be exhaustive, only to give the reader a feel for the sample sizes for heavy flavor analyses at the Tevatron. A number of $J/\psi$, semileptonic and all-hadronic decay modes are not included in this table.[]{data-label="ta:yields"}
Prompt Charm Cross Section
==========================
Previously published measurements of the $b$ production cross section at the Tevatron have consistently been significantly higher than the Next-to-Leading Order QCD predictions. Although there has been theoretical activity in this arena and the level of the discrepancy has been reduced, it is not yet clear that the entire scope of the problem is fully understood. Both experiments will again measure the $b$ and $b\overline{b}$ cross sections at the higher Run II center-of-mass energy.
To further shed light on this problem, CDF has recently presented a measurement of the charm production cross section.[@charmxsec] Using the secondary vertex trigger, CDF has been able to reconstruct very large samples of charm decays. Figure \[fig:charm\] shows a fully reconstructed $D^+\rightarrow K^-\pi^+\pi^+$ signal using $5.8\, \rm pb^{-1}$ of data from early in the run. In the full data sample available at the time of this Symposium (${\cal L}\sim \! 200\, \rm pb^{-1}$), CDF has a $D^0$ sample exceeding 2 million events. As this sample grows, competitive searches for [*CP*]{}-violation in the charm sector and $D^0/\overline{D^0}$ mixing are anticipated.
Since the events are accepted based upon daughter tracks with large impact parameter, it is clear that the sample of reconstructed charm decays contains charm from bottom ($p\overline{p}\rightarrow b\overline{b}X$, with $b\rightarrow c \rightarrow D$) in addition to prompt charm production ($p\overline{p}\rightarrow c\overline{c}X$, with $c\rightarrow D$.) To extract the charm meson cross section, it is necessary to extract the fraction of $D$ mesons that are coming from prompt charm production. This is done by measuring the impact parameter of the charm meson. If it arises from direct $c\overline{c}$ production, the charm meson will have a small impact parameter pointing back to the point of production, which was the collision vertex. If the charm meson arises from $b$ decay, it will typically not extrapolate back to the primary vertex.
Using this technique, along with a sample of $K^0_S\rightarrow \pi^+ \pi^-$ decays for calibration, it is determined that 80-90% (depending upon the mode) of the charm mesons arise from direct charm production. The shorter charm lifetime is more than compensated by the copious charm production in the high energy collisions.
The full analysis includes measurements of the differential cross sections for prompt $D^0$, $D^+$, $D^{*+}$ and $D^+_s$ meson production. The integrated cross section results of this study are summarized in Table \[ta:charm\]. Figure \[fig:cxsec\] shows the comparison between data and the NLO calculation for the differential $D^0$ cross section.[@nlo] The trend seen in this figure is the same for the other $D$ species. The prediction seems to follow the measured cross section in shape, but the absolute cross section is low compared to the measured results. This difference in magnitude between the measured and predicted charm meson cross section is similar to the difference in data and theory seen in the $B$ meson cross sections.
As an interesting aside, we can also compare the measured $B$ and $D$ cross sections at the Tevatron.[@bxsec] Looking at the charged mesons, the measured cross sections are: $$\sigma(D^+, p_T\ge 6~ {\rm GeV}/c, |y|\le 1)= 4.3\pm 0.7~\mu{\rm b}$$ $$\sigma(B^+, p_T\ge 6~ {\rm GeV}/c, |y|\le 1)= 3.6\pm 0.6~\mu{\rm b}.$$ For this momentum range, the $D^+$ cross section is only $20\%$ larger than the $B^+$ cross section. At very high transverse momentum (corresponding to high $Q^2$), we would expect the mass difference between the bottom and charm quarks to be a small effect, yielding similar production cross sections. These results show that even at lower $p_T$, the mass effects are not that significant.
  Meson      Momentum range                 Measured cross section ($\mu$b)
---------- ------------------------------ ----------------------------
$D^0$ $p_T>5.5 \, \rm GeV/{\it c}$ $13.3\pm 0.2\pm 1.5 $
$D^{*+}$ $p_T>6.0 \, \rm GeV/{\it c}$ $5.2\pm 0.1\pm 0.8 $
$D^+ $ $p_T>6.0 \, \rm GeV/{\it c}$ $4.3\pm 0.1\pm 0.7 $
$D^+_s$ $p_T>8.0 \, \rm GeV/{\it c}$ $0.75\pm 0.05\pm 0.22 $
: CDF measurement of the direct charm cross section. The results are for $D$ mesons with $|y|<1$.[]{data-label="ta:charm"}
$B$ Lifetimes {#se:lifetimes}
=============
In the spectator model for meson decays, where the light quark does not participate in the decay, all $B$ lifetimes are equal, since the lifetime is exclusively determined by the lifetime of the $b$ quark. In reality, non-spectator effects such as interference modify this expectation. The Heavy Quark Expansion (HQE) predicts the lifetime hierarchy for the $B$ hadrons as:
$$\tau (B^+) > \tau (B^0) \simeq \tau (B_s) > \tau (\Lambda_b) \gg
\tau (B_c),$$
where the $B_c$ meson is expected to have the shortest lifetime because both the $b$ and the $c$ quarks are able to decay by the weak interaction.
The lifetimes of the light mesons, $B^0$ and $B^+$, are measured with a precision that is better than $1\%$. This impressive level of precision is dominated by the measurements of the Belle and BaBar experiments.[@taubabar] The $B_s$ and $b$-baryon lifetimes have been measured by the LEP, SLD and CDF Run I experiments. One interesting puzzle that persists from those measurements is that the $\Lambda_b$ lifetime is significantly lower than expected.[@hfag]
To measure the lifetime of a particle, the experiments utilize their precision silicon tracking to measure the flight distance of the hadron before it decays. At the Tevatron, this is done in the plane transverse to the beamline, and the two-dimensional flight distance is denoted as $L_{xy}$. Since the particle is moving at a high velocity in the lab frame, the decay time measured in the laboratory is dilated relative to the proper-decay time, which is the decay time of the particle in its rest frame. To extract the proper decay time, we must correct for the time dilation factor:
$$ct_{decay} = {L_{xy}\over{(\beta \gamma)_T}},$$
where $t_{decay}$ is the proper decay time in the rest frame of the particle, $c$ is the speed of light, and $\beta\gamma = (v/c)/\sqrt{1-v^2/c^2}$ is the relativistic time-dilation factor. We write $(\beta\gamma)_T=p_T/m_B$, with $p_T$ the transverse momentum of the $B$ hadron and $m_B$ the mass of the $B$ hadron. The quantity $ct_{decay}$ is referred to as the proper decay length.
The uncertainty in the measurement of the proper decay length ($\sigma_{ct}$) has three terms:
$$\sigma_{ct} = ({m_B\over{p_T}})\sigma_{L_{xy}} \otimes
ct({\sigma_{p_T}\over{p_T}}) \otimes ({L_{xy}\over{p_T}})\sigma_{m_B}$$
where the $\otimes$ symbol indicates that the terms combine in quadrature. The final term, which is proportional to the uncertainty on the $B$ hadron mass $(\sigma_{m_B})$, is negligible in all cases. The first term is the uncertainty on the measured decay length. This depends upon the resolution of the detector as well as the topology and momentum of the decay mode. The middle term, proportional to the uncertainty on the transverse momentum of the $B$ hadron $(\sigma_{p_T})$, is effectively the uncertainty in the time dilation correction. For fully reconstructed modes, where all of the $B$ daughter particles are accounted for, this term provides a negligible contribution to the uncertainty on the proper decay time. In the case of partially reconstructed modes, where some fraction of the $B$ daughters are not reconstructed, the uncertainty on the $B$ hadron momentum becomes a significant contributor to the lifetime uncertainty.
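The proper-decay-length relation and its uncertainty can be written directly from the formulas above (a minimal sketch; the numerical inputs at the end are hypothetical, not taken from any measurement):

```python
import math

def proper_decay_length(L_xy, pT, mB):
    """ct = L_xy / (beta*gamma)_T, with (beta*gamma)_T = pT / mB."""
    return L_xy * mB / pT

def sigma_ct(L_xy, pT, mB, sig_Lxy, sig_pT, sig_mB=0.0):
    """Quadrature sum of the three contributions in the text."""
    ct = proper_decay_length(L_xy, pT, mB)
    term_L = (mB / pT) * sig_Lxy   # decay-length resolution
    term_p = ct * (sig_pT / pT)    # time-dilation (momentum) term
    term_m = (L_xy / pT) * sig_mB  # mass term, negligible in practice
    return math.sqrt(term_L**2 + term_p**2 + term_m**2)

# Hypothetical fully reconstructed candidate: L_xy = 0.10 cm, pT = 10 GeV/c,
# mB = 5.37 GeV/c^2.
ct = proper_decay_length(0.10, 10.0, 5.37)  # -> 0.0537 cm = 537 micrometers
```

For a fully reconstructed mode, `sig_pT` is small and the first term dominates; for a partially reconstructed mode, the momentum term grows and eventually limits the precision, as described above.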
Fully reconstructed $J/\psi$ modes, such as $B^0\rightarrow J/\psi K^{*0}$, with $K^{*0}\rightarrow K^- \pi^+$ have the advantage of having small uncertainty in the $p_T$ of the $B$ hadron. The drawback, however, is that the signal yields are small due to the small branching ratio into the color-suppressed $J/\psi$ mode. Figure \[fig:d0taulamB\] shows a measurement of the $B_s$ lifetime from DØ in the mode $B_s\rightarrow J/\psi \phi$, with $J/\psi\rightarrow
\mu^+ \mu^-$ and $\phi\rightarrow K^+K^-$. In fitting for the lifetime, it is necessary to account for backgrounds from prompt sources as well as backgrounds from heavy flavor sources. In the case of the fully reconstructed modes, the lifetime fit can additionally utilize reconstructed mass information to properly weight signal versus background events. In the case of $J/\psi$ modes, the dominant backgrounds come from real $J/\psi$ decays from both prompt $c\overline{c}$ production as well as $B$ decays.
The statistical uncertainties on the DØ and CDF lifetime measurements are not yet competitive with the current world average for the $B_s$ and $\Lambda_b$ lifetimes. With larger data samples over the next 1-2 years, new results from the Tevatron will surpass the current level of precision.
Alternatively, semileptonic decays, such as $B^0_s\rightarrow D^-_s\ell\nu_\ell$, with $D^-_s\rightarrow \phi \pi^-$, provide larger signal yields but suffer from uncertainty in the $B$ hadron $p_T$ due to the unreconstructed neutrino. With large data samples, the semileptonic modes will begin to become systematics-limited due to the partial reconstruction, while the statistics-limited fully reconstructed modes will continue to provide improved sensitivity.
Figure \[fig:tausemi\] shows a measurement of the inclusive $B$ lifetime in the $\mu D^0$ mode. Since this is a partial reconstruction, backgrounds can be more challenging than they are in the fully reconstructed modes. One technique to suppress backgrounds is to demand the proper charge correlation between the muon and the $D^0$ meson. The charge of the charm quark is carried through the $D^0$ decay ($D^0\rightarrow K^-\pi^+$ and $\overline{D^0}\rightarrow K^+\pi^-$), so requiring the charge of the muon to be the same as the charge of the kaon reduces backgrounds. This works even without particle identification, because if the $K$ and $\pi$ masses are assigned incorrectly, the candidates typically do not reconstruct to the $D^0$ mass.
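The right-sign requirement amounts to a one-line selection (a sketch; charges are in units of $e$):

```python
def right_sign(mu_charge, kaon_charge):
    """Right-sign correlation for B -> mu D0 X with D0 -> K- pi+:
    the muon and the kaon from the D0 decay carry the same charge."""
    return mu_charge == kaon_charge

# b -> c mu- nubar followed by D0 -> K- pi+ gives a mu- K- pair: right-sign.
assert right_sign(-1, -1)
# A mu- paired with a K+ is wrong-sign and is rejected as background.
assert not right_sign(-1, +1)
```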
One interesting background to the semileptonic analysis is $c\overline{c}$ production where one charm hadron decays semileptonically and the other fragments into a $D^0$. This background is only a problem if the charm pair is produced at a very small opening angle, which is exactly what occurs when a hard gluon splits into a $c\overline{c}$ pair. This “gluon splitting” contribution produces the “right-sign” charge correlation between the $D$ and muon. Even though the $D^0$ extrapolates back to the primary vertex in this case, the “fake” $\mu D^0$ vertex can look like it arose from a long-lived state. Further studies of $c\overline{c}$ production correlations are needed to fully understand this background source.
B Hadron Masses
===============
CDF has performed precision measurements of the masses of $B$ hadrons using fully reconstructed $B\rightarrow J/\psi X$ modes. High-statistics $J/\psi\rightarrow \mu^+\mu^-$ and $\psi(2S)\rightarrow J/\psi \pi^+\pi^-$ samples are used to calibrate the tracking momentum scale and the material in the tracking volume. The results are tabulated in Table \[ta:bmasses\]. Figure \[fig:mass\] shows the measurement of the $\Lambda_b$ baryon mass. Even with relatively small statistics (less than 100 events in the $B_s$ and $\Lambda_b$ modes), these new results are the world’s best measurements of these masses.
$B$ hadron Decay mode Measured mass (${\rm MeV}/c^2$)
------------- ------------------- ----------------------------------
$B^+$ $J/\psi K^+$ $5279.32 \pm 0.68 \pm 0.94 $
$B^0$ $J/\psi K^{*0}$ $5280.30 \pm 0.92 \pm 0.96 $
$B^0_s$ $J/\psi \phi$ $5365.50 \pm 1.29 \pm 0.94 $
$\Lambda_b$ $J/\psi \Lambda $ $5620.4\ \pm 1.6 \ \pm 1.2\ $
: CDF results on masses of $B$ hadrons. These results come from fully reconstructed $J/\psi$ modes. The first error is statistical, the second systematic.[]{data-label="ta:bmasses"}
Hadronic Branching Ratios
=========================
Two-body Charmless $B$ Decays
-----------------------------
With the new SVT trigger, CDF has begun to measure $B$ decays with non-leptonic final states. One set of modes of particular interest are the charmless two-body modes. Requiring the final state to consist of two charged hadrons, the following modes can be accessed at the Tevatron:
- $B^0\rightarrow \pi^+\pi^-$, $BR\sim 5\times 10^{-6}$
- $B^0\rightarrow K^+\pi^-$, $BR\sim 2\times 10^{-5}$
- $B_s\rightarrow K^+K^-$, $BR\sim 1\times 10^{-5}$
- $B_s\rightarrow K^-\pi^+$, $BR\sim 2\times 10^{-6}$.
The $B^0$ states are accessible at the $e^+e^-$ facilities, but the $B_s$ modes are exclusive to the Tevatron.
The measurement presented here is the first observation of the decay $B_s\rightarrow K^+K^-$. As more data is accumulated, the longer term goal from these modes is to search for direct [*CP*]{}-violation as well as measure the $CKM$ angle $\gamma={\rm Arg}(V^*_{ub})$.[@fleisher]
Figure \[fig:pipi\] shows the reconstructed signal where all tracks are assumed to have the mass of the pion. A clear peak is seen, and the width of the peak ($41\, {\rm MeV}/c^2$) is significantly larger than the intrinsic resolution of the detector. This additional width is due to the $K^+\pi^-$ and $K^+K^-$ final states from $B^0$ and $B_s$ decays. To extract the relative contributions, kinematic information and $dE/dx$ particle identification are used.[^3] The particle identification is calibrated from a large sample of $D^{*+}\rightarrow D^0\pi^+$ decays, with $D^0\rightarrow K^-\pi^+$. The charge of the pion from the $D^*$ uniquely identifies the kaon and pion, providing an excellent calibration sample for the $dE/dx$ system. Although the $K$-$\pi$ separation is only $1.3\sigma$, this is sufficient to extract the two-body $B$ decay contributions.
The results from ${\cal L}= 65\, \rm pb^{-1}$ are shown in Table \[ta:bpipi\]. The $B^0$ modes have been measured by CLEO, BaBar and Belle. This is the first observation of the $B^0_s\rightarrow
K^+K^-$ decay mode. Turning the yields into a ratio of branching ratios, the result is:
$${BR(B^0_s\rightarrow K^+K^-)\over{BR(B^0_d\rightarrow K^+\pi^-)}}
= 2.71 \pm 1.15$$
where the error is the combined statistical and systematic uncertainty. To calculate this ratio, information about the relative production rates of $B^0_s$ and $B^0$ mesons must be included. The uncertainty includes the uncertainty on the relative production fractions.
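Schematically, the conversion from fitted yields to a ratio of branching ratios looks like the following. The yields are those from Table \[ta:bpipi\], but the production fraction and efficiency ratio are illustrative placeholders, not the values used in the analysis, so the result differs from the quoted measurement:

```python
# Fitted yields from the two-body charmless fit.
N_KK = 90.0         # B0_s -> K+ K-
N_Kpi = 148.0       # B0  -> K- pi+

# Illustrative inputs (placeholders, not the analysis values):
fs_over_fd = 0.26   # relative B0_s / B0 production fraction
eff_ratio = 1.0     # relative reconstruction efficiency, taken as ~1 here

# BR(B0_s -> K+K-) / BR(B0 -> K- pi+), correcting the yield ratio for
# how often a B0_s is produced relative to a B0.
br_ratio = (N_KK / N_Kpi) / (fs_over_fd * eff_ratio)  # ~2.3 with these inputs
```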
Mode Fitted yield (events)
----------------------------- ---------------------------------
$B^0\rightarrow K^-\pi^+$ $148\pm 17(stat.)\pm 17(syst.)$
$B^0\rightarrow \pi^+\pi^-$ $39\pm 14(stat.)\pm 17(syst.)$
$B^0_s\rightarrow K^+K^-$ $90\pm 17(stat.)\pm 17(syst.)$
$B^0_s\rightarrow K^-\pi^+$ $3\pm 11(stat.)\pm 17(syst.)$
: CDF results on two-body charmless $B$ decays. The yields reported in this table are extracted by fitting the decay information for the events, including kinematic and particle identification information. Yields shown here are for ${\cal L} \sim \! 65 \, \rm pb^{-1}$.[]{data-label="ta:bpipi"}
$\Lambda_b \rightarrow \Lambda_c \pi^-$ Branching Ratio
-------------------------------------------------------
Using the SVT trigger, CDF has begun to measure $b$-baryon states. Figure \[fig:lblcpi\] shows a clean signal of the decay $\Lambda_b \rightarrow \Lambda_c \pi^-$, with $\Lambda_c \rightarrow pK^-\pi^+$. The reconstructed invariant mass plot has a very interesting structure, with almost no background above the peak and a background that rises steeply toward lower mass. This structure is particular to baryon modes, since the $\Lambda_b$ is the most massive weakly decaying $B$ hadron state. Because the SVT trigger specifically selects long-lived states, most of the backgrounds come from other heavy flavor ($b$ and $c$) decays. Since there are no weakly decaying $B$ hadrons more massive than the $\Lambda_b$, there is very little background above the peak. On the other hand, going to masses below the peak, lighter $B$ mesons begin to contribute. The background in this mode grows at lower masses because there is more phase space for $B^+$, $B^0$, and $B^0_s$ decays to contribute.
To extract the number of signal events, $b\overline{b}$ Monte Carlo templates are used to account for the reflections seen in the signal window. The shapes of these templates are fixed by the simulation, but their normalization is allowed to float. The number of fitted signal events in this analysis is $96 \pm 13(stat.)^{+6}_{-7} (syst.)$. The primary result from this analysis is a measurement of the $\Lambda_b
\rightarrow \Lambda_c \pi^-$ branching ratio relative to the $B^0\rightarrow D^-\pi^+$ mode. We can take that ratio, along with PDG 2002[@pdg] values for measured branching ratios and production fractions, and extract the branching ratio $$BR(\Lambda_b\rightarrow \Lambda_c \pi^-) =
(6.5\pm 1.1 \pm 0.9 \pm 2.3)\times 10^{-3},$$ where the errors listed are statistical, systematic, and the uncertainty arising from the $B^0\rightarrow D^-\pi^+$ branching ratio.
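As a quick check, the three quoted uncertainties can be combined in quadrature (treating them as independent) to obtain a single total error; a minimal sketch:

```python
import math

def combine_in_quadrature(*errors):
    """Total uncertainty from independent error sources."""
    return math.sqrt(sum(e * e for e in errors))

# BR(Lambda_b -> Lambda_c pi-) = (6.5 +/- 1.1 +/- 0.9 +/- 2.3) x 10^-3
br, errs = 6.5e-3, (1.1e-3, 0.9e-3, 2.3e-3)
total = combine_in_quadrature(*errs)
print(f"BR = ({br*1e3:.1f} +/- {total*1e3:.1f}) x 10^-3")  # BR = (6.5 +/- 2.7) x 10^-3
```

The combination makes explicit that the normalization uncertainty from the $B^0\rightarrow D^-\pi^+$ branching ratio dominates the total error.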
Observation of the $X(3872)$ State
----------------------------------
At this Lepton-Photon Symposium, the Belle collaboration announced the observation of a neutral state decaying into $J/\psi \pi^+\pi^-$ with a mass of 3872 MeV.[@bellex3872] This state may be the $1^3D_2$ $c\overline{c}$ bound state, although the observed mass is higher than expected for that state. It has also been hypothesized that this is a loosely bound $D\overline{D}^*$ molecular state, since the mass sits right at the $D\overline{D}^*$ threshold.
Belle observes this state in $B$ decays. Their observation further indicates that the state is narrow, and favors large $\pi^+\pi^-$ mass in the decay.
CDF has searched for this state using a sample of ${\cal L}\sim\! 220 \, \rm pb^{-1}$ and sees a clear signal with a mass of $$3871.4 \pm 0.7(stat.) \pm 0.4 (syst.) {\rm MeV}/c^2.$$ Figure \[fig:x3872\] shows the CDF $J/\psi \pi^+\pi^-$ mass spectrum. This plot originates from a parent sample of approximately $2.2$ million $J/\psi \rightarrow \mu^+\mu^-$ decays. The large peak at $3.686\, {\rm GeV}/c^2$ is the $\psi(2S)$.[@cdfx3872]
The plot shows events where the dipion mass was required to be greater than $500 \, {\rm MeV}/c^2$. The significance of the signal without this cut is greater than $10\sigma$. CDF also reports that the state is narrow: the observed width of $4.3\, {\rm MeV}/c^2$ is consistent with detector resolution. Further studies are underway to investigate the $\pi^+\pi^-$ mass distribution and to determine whether the CDF signal comes from prompt production or through $B$ decays. Since all angular momentum states are accessible in high energy $p\overline{p}$ collisions, it is possible that this state is directly produced at the Tevatron, while it cannot be directly produced in $e^+e^-$ collisions.
Mixing
======
In the $K^0$ and $B^0$ systems, particle-antiparticle mixing has been observed and measured with great precision. This mixing is understood to occur because the weak interaction eigenstates are not the same as the strong interaction (or flavor) eigenstates. The weak eigenstates are then linear combinations of the flavor eigenstates, giving rise to an oscillation frequency that is proportional to the mass difference between the heavy and light states.
Recently, the Babar and Belle experiments have made very precise measurements of $B^0\overline{B^0}$ mixing and have significantly improved the world average. The mixing parameter is typically reported in terms of the heavy/light mass difference. For the $B^0$ system, the world average is:[@hfag] $$\Delta m_d = 0.502 \pm 0.006 \, \rm ps^{-1}.$$ From this, we see that a beam of pure $B^0$ mesons would evolve into a beam of pure $\overline{B^0}$ mesons in a time $t_{mix}$ given by $\Delta m_d t_{mix} = \pi$, which corresponds to $t_{mix}\sim\! 4.1$ lifetimes: the oscillation is rather slow.
Mixing proceeds via a second-order weak transition as shown in Fig. \[fig:mix\]. The box diagram includes the $V_{td}$ matrix element for $B^0/\overline{B^0}$ mixing, which is replaced by $V_{ts}$ for $B^0_s/
\overline{B^0_s}$ mixing. Experimentally, we know that $V_{ts}$ is larger than $V_{td}$: $$Re(V_{ts})\simeq 0.040 > Re(V_{td})\simeq 0.007,$$ so we expect the $B^0_s$ system to oscillate with a much higher frequency than the $B^0$ system. Indeed this is the case: the $B^0_s$ system oscillates so quickly that the oscillations have not yet been resolved. The current combined world limit is:[@hfag] $$\Delta m_s > 14.4 \, {\rm ps^{-1}}~@ 95\%~{\rm CL},$$ which means a beam of $B^0_s$ mesons would fully become a beam of $\overline{B^0_s}$ mesons in about $1/7^{\rm th}$ of one lifetime! For the next several years, the Tevatron will be the exclusive laboratory for $B_s$ meson studies, including the search for $B_s$ mixing.
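These mixing times are easy to verify with a few lines of arithmetic; the lifetimes below are assumed, approximate PDG-era values:

```python
import math

tau_B0, tau_Bs = 1.54, 1.46   # B0 and Bs lifetimes in ps (assumed values)
dm_d = 0.502                  # ps^-1, world-average B0 mass difference
dm_s = 14.4                   # ps^-1, 95% CL lower limit for the Bs system

def t_mix_lifetimes(dm, tau):
    # Time for a pure B beam to become a pure Bbar beam (dm * t = pi),
    # expressed in units of the lifetime.
    return math.pi / (dm * tau)

print(t_mix_lifetimes(dm_d, tau_B0))   # ~4.1 lifetimes: a slow oscillation
print(t_mix_lifetimes(dm_s, tau_Bs))   # ~0.15 lifetimes: very fast
```

The contrast between the two numbers is the whole experimental challenge: by the time a typical $B^0_s$ decays, it has already oscillated several times.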
Measuring Mixing
----------------
To measure $B_s$ mixing, four ingredients are needed.
- [**Flavor at the time of production.**]{} It is necessary to know whether the meson was produced as a $B^0_s$ or a $\overline{B^0_s}$.
- [**Flavor at the time of decay.**]{} It is also necessary to know whether the meson was a $B^0_s$ or $\overline{B^0_s}$ when it decayed. This, combined with the flavor at time of production, tells us whether the $B_s$ had decayed as mixed or unmixed.[^4]
- [**Proper decay time.**]{} It is necessary to know the proper decay time for the $B_s$, since we are attempting to measure the probability to mix as a function of decay time. The $B_s$ system mixes too quickly to resolve using time-integrated techniques.
- [**Large $B_s$ samples.**]{} We must map out the probability to mix as a function of decay time for at least part of the decay time spectrum. Because each of the previous three items has shortcomings requiring more statistics, this analysis requires large samples of $B_s$ decays.
In the following subsections, we will discuss each of these pieces necessary to measure $B_s$ mixing.
Flavor Tagging
--------------
The first two items in our list of requirements have to do with determining the flavor of the $B_s$ meson at the time of production and at the time of decay, referred to as initial-state and final-state flavor tagging.
For initial-state flavor tagging, we infer the flavor of the $B_s$ meson at the time of production from other information in the event. Here we can take advantage of what we know about $b\overline{b}$ production. By measuring the flavor of the other $B$ hadron in the event, we can infer the flavor of the $B_s$ at the time of production. This technique is imprecise, and also suffers from the fact that quite often ($\sim\! 75\%$) the other $B$ hadron is outside the acceptance of the detector.
Another technique is to look at fragmentation tracks near the $B_s$ meson. For a $\overline{b}$ quark to become a $B^0_s$ meson, it must grab an $s$ quark from the vacuum. When the $s$ is popped from the vacuum, an $\overline{s}$ is popped with it, which could potentially turn into a $K^+$ meson. We can then use the charge of the kaon to infer the flavor of the $B$ hadron. Again, this is an inexact technique, since other fragmentation tracks can confuse this correlation, and the charge information can be lost to neutral particles such as $K^0_S$. Figure \[fig:bstar\] shows an example of same-side tagging. In this case, a fully reconstructed $B^+\rightarrow J/\psi K^+$ sample is used, where the flavor of the $B$ hadron is known. The plot then shows the mass difference between the $\pi$-$B^+$ system and the $B^+$. A clear opposite-sign excess is seen over the entire range of the plot, which is attributed to the fragmentation correlation. In addition, a clear peak is seen near $0.4\, {\rm GeV}/c^2$ which is attributed to the $B^{**}$ state.
The efficacy of the initial-state flavor tagging is classified by the tagging power: $\epsilon D^2$, where $\epsilon$ is the fraction of times the algorithm was able to arrive at a tagging decision and $D$ is the dilution, which is a measure of the probability that the tag is correct. The dilution is written as: $D=(N_R-N_W)/(N_R+N_W)$, where $N_R$($N_W$) are the number of right (wrong) tags. The dilution is related to the mistag fraction, $w$ as $D=1-2w$. Tagging power is proportional to $D^2$ because incorrectly measuring the sign removes the event from the “correct” charge bin and puts the event into the “wrong” charge bin.
For Babar and Belle, $\epsilon D^2\simeq 27\%$, whereas for the Tevatron $\epsilon D^2 \simeq 5\%$. The large difference arises because of the nature and cleanliness of the $e^+ e^-$ environment. As an example, if an experiment has 1000 signal events with $\epsilon D^2 = 5\%$, then the statistical power of that sample is equivalent to a sample of 50 events where the tag is known absolutely.
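The relations among mistag fraction, dilution and tagging power can be encoded in a small illustrative sketch (the split of $\epsilon D^2 = 5\%$ into $\epsilon = 20\%$ and $D = 0.5$ is an assumption made only for this example):

```python
def dilution(n_right, n_wrong):
    # D = (N_R - N_W) / (N_R + N_W); equivalently D = 1 - 2w for mistag rate w
    return (n_right - n_wrong) / (n_right + n_wrong)

def effective_events(n, eps, d):
    # Statistical power of n tagged events: the equivalent number of events
    # with a perfectly known tag is n * eps * D^2
    return n * eps * d * d

w = 0.25                   # a 25% mistag rate...
d = 1 - 2 * w              # ...gives dilution D = 0.5
print(dilution(75, 25))    # 0.5, consistent with D = 1 - 2w

# 1000 events with eps*D^2 = 5% carry the weight of ~50 perfectly tagged events
print(round(effective_events(1000, eps=0.20, d=0.5)))  # 50
```

The quadratic penalty in $D$ is why even a modest mistag rate is so costly: halving the dilution quarters the effective sample size.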
For final-state flavor tagging, there are three primary classes of $B_s$ decays:
- $B^0_s\rightarrow D^-_s \pi^+$, with $D^-_s\rightarrow \phi \pi^-$,
- $B^0_s\rightarrow D^-_s \ell^+ \nu_\ell$, with $D^-_s\rightarrow \phi \pi^-$,
- $B^0_s\rightarrow J/\psi \phi$, with $J/\psi \rightarrow \mu^+\mu^-$.
In the first two cases, the flavor of the $B_s$ is immediately evident from the charge of the decay products; these are referred to as “self-tagging” final states. In the third case, there is no way to know whether the meson decayed as a $B^0_s$ or $\overline{B^0_s}$, so charge-symmetric modes are of no use for the mixing analysis.[^5]
Proper Decay Time Resolution
----------------------------
We know the $B_s$ oscillates very quickly, therefore we need a proper time resolution that is small compared to the oscillation period. As discussed in detail in Sec. \[se:lifetimes\], the two primary components contributing to the proper time resolution are the vertex ($L_{xy}$) resolution and the time dilation correction ($(\beta \gamma)_T= p_T/m_B$).
For the semileptonic samples, the time dilation correction factor limits the proper time resolution. For fully reconstructed samples (no missing neutrino) the time dilation correction is a negligible effect and only the $L_{xy}$ resolution contributes to the uncertainty on the proper decay time measurement.
If the true value of $\Delta m_s$ is close to the current limit, $\Delta m_s \sim\! 14{-}18 \, \rm ps^{-1}$, then both fully reconstructed and semileptonic samples will contribute to the measurement of $\Delta m_s$. However, if the true value of $\Delta m_s$ is $20\, \rm ps^{-1}$ or higher, then the proper time resolution becomes the limiting factor in resolving the oscillations. At $\Delta m_s$ values this high, only the fully reconstructed samples are useful, and in fact the vertex resolution becomes the limiting factor.
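The effect of decay time smearing can be illustrated with the standard damping factor $e^{-(\Delta m_s \sigma_t)^2/2}$, which multiplies the observable oscillation amplitude; the resolutions used below are assumed, illustrative values rather than measured CDF/DØ numbers:

```python
import math

def amplitude_damping(dm_s, sigma_t):
    # Smearing the proper decay time with Gaussian resolution sigma_t (ps)
    # dilutes the oscillation amplitude by exp(-(dm_s * sigma_t)^2 / 2)
    return math.exp(-0.5 * (dm_s * sigma_t) ** 2)

# Illustrative resolutions: ~0.05 ps for fully reconstructed decays,
# ~0.15 ps for semileptonic decays (missing-neutrino correction dominates)
for dm in (15.0, 20.0, 25.0):
    print(dm, amplitude_damping(dm, 0.05), amplitude_damping(dm, 0.15))
```

At $\Delta m_s = 25\,\rm ps^{-1}$ the semileptonic amplitude is suppressed by roughly a factor of 500 relative to the fully reconstructed one, which is why only the fully reconstructed samples remain useful at high $\Delta m_s$.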
Yields
------
As stated previously, the statistics of the semileptonic samples are higher than those of the fully reconstructed modes. For lower values of $\Delta m_s$ the larger semileptonic event yields somewhat offset the poorer proper time resolution. As a consequence, both CDF and DØ are continuing to acquire semileptonic samples, both for flavor tagging calibration and for the $B_s$ mixing search. A semi-muonic sample enriched in $B_s$ decays is shown in Fig. \[fig:lds\], demonstrating that large semileptonic samples are being acquired.
The fully reconstructed states offer fewer signal events, but the improved proper time resolution compensates for this at the values of $\Delta m_s$ we are considering. CDF has accumulated a sample of fully reconstructed $B_s$ decays as shown in Figure \[fig:bsdspi\]. The plot on the left shows the signal; a clear $B_s$ peak can be seen. The broad peak below the $B_s$ is $B_s\rightarrow D^*_s \pi$, where the photon from the $D^*_s$ decay is not reconstructed. The plot on the right shows the expected contributions from a $b\overline{b}$ Monte Carlo. In the data, the signal and sidebands are fit using the shapes from the Monte Carlo with the normalizations allowed to float. The Monte Carlo clearly provides a very good description of the signal and heavy flavor backgrounds. This is possible because the SVT trigger provides very pure heavy flavor samples. Even though the hadron collider environment is challenging, clean samples of heavy flavor decays can be isolated.
$B_s$ Mixing Status and Prospects
---------------------------------
Both experiments have now commissioned the detectors and accumulated the first portion of Tevatron Run 2 data. Given the current performance of the trigger, reconstruction and flavor tagging, it appears that with a sample of ${\cal L}\sim\! 500 \, \rm pb^{-1}$, the $B_s$ sensitivity will be comparable to the current combined world limit. To observe or exclude a value of $\Delta m_s > 15 \, \rm ps^{-1}$ will require additional data along with further progress on triggering, reconstruction and flavor tagging.
With modest improvements to the current running configuration, we estimate that it will take $2$-$3\, \rm fb^{-1}$ of integrated luminosity to “cover” the region of $\Delta m_s$ that is currently preferred by indirect fits.[@indirect] It is important to recall that the current combined world limit consists of contributions from 13 different measurements, and is the culmination of the LEP, SLD and CDF Run I programs. Thanks to additional luminosity and upgraded detectors, it will be possible to extend this search for $B_s$ mixing, however this is a very challenging measurement that will take time, effort and a significant data sample.
Summary
=======
The results shown in this summary provide a snapshot of the heavy flavor results coming from the Tevatron. The period of commissioning is now complete, and the experiments are slightly more than one year into a multi-year run that will continue to accumulate large data samples. The upgraded detectors are performing well, and many of the upgrades specific to $B$ physics are beginning to pay off. Over the next several years, CDF and DØ will make significant contributions in our understanding of production and decay of heavy flavors.
Acknowledgments {#acknowledgments .unnumbered}
===============
I would like to thank the Lepton-Photon 2003 organizers for the opportunity to speak at this excellent conference. I would also like to thank and acknowledge the collaborators of the Babar, Belle, CDF and DØ experiments. This work is supported by the U.S. Department of Energy Grant DE-FG02-91ER40677.
[99]{}
DØ Collaboration, FERMILAB-Pub-96/357-E, 1996.
CDF Collaboration, FERMILAB-Pub-96/390-E, 1996; Fermilab-Proposal-909, 1998.
CDF Collaboration (D. Acosta [*et al.*]{}), [*Phys. Rev.*]{} [**D65**]{}, 052005, (2002).
CDF Collaboration (D. Acosta [*et al.*]{}), FERMILAB-PUB-03-217-E, hep-ex/0307080, July 2003.
M. Cacciari and P. Nason, hep-ph/0306212, [*JHEP*]{} 0309:006 (2003), June 2003.
BABAR Collaboration (B. Aubert [*et al.*]{}), [*Phys. Rev. Lett.*]{} [**87**]{}, 201803 (2001); BELLE Collaboration (K. Abe [*et al.*]{}), [*Phys. Rev. Lett.*]{} [**88**]{}, 171801 (2002).
Heavy Flavor Averaging Group, http://www.slac.stanford.edu/xorg/hfag/index.html.
R. Fleischer, [*Phys. Lett.*]{} [**B459**]{}, 306-320 (1999).
Particle Data Group, http://pdg.lbl.gov/.
BELLE Collaboration (K. Abe [*et al.*]{}), hep-ex/0308029, August 2003; S.-K. Choi [*et al.*]{}, hep-ex/0309032, September 2003, submitted to PRL.
CDF Collaboration, http://www-cdf.fnal.gov/physics/new/bottom/030224.blessed-x3872/
M. Ciuchini [*et al.*]{}, hep-ph/0307195, April 2003.
Jonathan L. Rosner
: (University of Chicago): What are the prospects for seeing fully reconstructed $B_c$ mesons, e.g. in $J/\psi \pi^\pm$?
Kevin Pitts:
: In Run I with ${\cal L} \simeq 100\, \rm pb^{-1}$, the $B_c$ was observed through the decay $B_c \rightarrow J/\psi \ell \nu_\ell$, with $\ell=e,\mu$ and $J/\psi \rightarrow
\mu^+\mu^-$, providing a tri-lepton final state. In addition, there was a hint of a signal in $B_c \rightarrow J/\psi \pi^\pm$. In both of these modes, the $b$ quark decays to charm, providing the $J/\psi$ in the final state. In Run II, with ${\cal L}\sim \! 220 \, \rm pb^{-1}$ already on tape, the prospects are quite good both for semileptonic and for fully reconstructed decays. In addition, with enough statistics in the hadronic trigger, it is likely that CDF can reconstruct the $B_c$ in modes where the charm quark decays, such as $B^+_c \rightarrow B^0_s \pi^+$. This mode might be especially interesting as a new tagging mode for $B^0_s$ mixing.
Vivek Sharma
: (University of California at San Diego): What is the timescale for a $10\%$ measurement of $\Lambda_b$ lifetime? In particular can you use the various fully reconstructed hadronic $\Lambda_b$ modes given the reflection and impact parameter bias?
Kevin Pitts:
: Performing a simple average of the CDF and DØ measurements of the $\Lambda_b$ lifetime in $\Lambda_b\rightarrow J/\psi \Lambda$, the result is $\tau_{\Lambda_b} = 1.13 \pm 0.18 \, \rm ps$, which is a $16\%$ measurement of the $\Lambda_b$ lifetime. These results do not yet use the entire Run II data samples available. Assuming all errors scale like $1/\sqrt{N}$, the combined result from the two experiments using the full ${\cal L} \sim \! 220 \, \rm pb^{-1}$ on tape as of this conference would be a $10\%$ measurement of the $\Lambda_b$ lifetime. With more data coming in, it seems safe to expect that each experiment will have a measurement in the neighborhood of $10\%$ by the summer of 2004.
As for the hadronic modes, significant progress has been made in understanding both the reflections and the impact parameter bias. The understanding of the reflections has been shown in the context of the $\Lambda_b \rightarrow \Lambda_c \pi$ branching ratio measurement. Understanding the lifetime bias coming from the SVT trigger is a necessity for $B_s$ mixing, and all lifetime measurements will benefit from ongoing progress on that front.
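The $1/\sqrt{N}$ projection in the answer above can be sketched numerically; the factor of $\sim\!2.5$ increase in effective statistics is an assumption chosen to illustrate the scaling, not a quoted luminosity ratio:

```python
import math

def projected_rel_error(rel_err_now, stats_scale):
    # Statistical errors scale like 1/sqrt(N): multiplying the sample size
    # by stats_scale shrinks the relative error by sqrt(stats_scale)
    return rel_err_now / math.sqrt(stats_scale)

rel_err_now = 0.18 / 1.13                      # current ~16% measurement
print(projected_rel_error(rel_err_now, 2.5))   # ~0.10, i.e. a 10% measurement
```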
Vera Luth
: (SLAC): For charm or $B$ decays you can normalize your measured $BR$ to other measurements. How are you planning to obtain absolute $BR$ and production rates for $B_s$, $\Lambda_b$, etc?
Kevin Pitts:
: We can still normalize our branching ratios to other modes, but you point out an additional complication arising from the lack of precision in our measurements of the relative production fractions. For example, in the CDF measurement of the branching ratio for $\Lambda_b \rightarrow \Lambda_c \pi$, the number of signal events in two modes is measured: $N(\Lambda_b \rightarrow \Lambda_c \pi)$ and $N(B^0 \rightarrow D^-\pi^+)$. The efficiencies and acceptances are calculated, and the PDG values for $BR(D^-\rightarrow K^-\pi^+\pi^+)$ and $BR(\Lambda_c \rightarrow pK^- \pi^+)$ are used. Combining this, the measured ratio of corrected yields gives: $${f_{baryon} \times BR(\Lambda_b \rightarrow \Lambda_c \pi^-)\over{
f_d \times BR(B^0\rightarrow D^- \pi^+)}}$$ where $f_{baryon}$ ($f_d$) are the fraction of produced $\Lambda_b$ ($B^0$) hadrons in $p\overline{p}$ collisions.
The extra piece that comes about in taking this ratio is the ratio of production fractions. These are not known very well, and it is important to improve upon our knowledge of the production fractions. This is easier said than done. One way to improve upon our knowledge of $f_s$/$f_d$ is to measure $\overline{\chi}$, the time-integrated $B/\overline{B}$ mixing parameter. Other analyses have looked at lepton+$D_s$, lepton+$D^0/D^\pm$ and lepton+$\Lambda_c$ correlations to attempt to extract the species fractions. Once statistics have improved, the uncertainty in species fractions will be one of the dominant uncertainties in attempting to extract absolute branching ratios.
[^1]: It is possible to produce $b$’s singly through weak decays such as $W^-\rightarrow \overline{c}b$ and $W^-\rightarrow\overline{t}b$. The cross sections for these processes are several orders of magnitude below the cross section for direct $b\overline{b}$ production by the strong interaction.
[^2]: Throughout this note, we will write charge specific decay modes for clarity. All analyses presented here include the charge-conjugate modes.
[^3]: CDF has a time-of-flight system for particle identification with very good $\pi$-$K$ separation for track $p_T < 1.6\, {\rm GeV}/c$. The tracks from these two-body decay modes have $p_T > 2 \, {\rm GeV}/c$ and therefore the time-of-flight system does not provide additional particle identification information for this analysis.
[^4]: A meson that “decayed as unmixed” could have mixed and mixed back, undergoing one or more complete cycles. This necessarily comes out of the time-dependent analysis.
[^5]: The charge-neutral $B_s$ modes are important for other analyses, such as the search for a lifetime difference $\Delta \Gamma_s$ in [*CP*]{}-even and [*CP*]{}-odd decays.
---
author:
- 'Juan Herrero-Garcia,[^1]'
- 'Andre Scaffidi,[^2]'
- 'Martin White,[^3]'
- 'and Anthony G. Williams,[^4]'
bibliography:
- '2DM\_DAMA.bib'
title: 'Reproducing the DAMA/LIBRA phase-2 results with two dark matter components'
---
Introduction {#sec:intro}
============
Dark matter (DM) is one of the greatest mysteries of nature. Until now, evidence for its existence has stemmed from its gravitational interactions. There is one potential exception coming from direct detection experiments: the long-standing annual modulation signal observed by the DAMA/LIBRA collaboration [@Bernabei:2010mq; @Bernabei:2013xsa] (denoted by DAMA in the following). The significance of the annual modulation in the complete data set is $12.9\,\sigma$. DAMA is unable to distinguish, on an event-by-event basis, electron recoils from nuclear recoils in its sodium iodide (NaI) crystals. However, by measuring the annual modulation (rather than the total rate) and by performing different checks (only single hit events, phase close to June 2nd, period of 1 year, amplitude, etc.), the DAMA collaboration rejects the possibility that modulated backgrounds (neutrons, muons, solar neutrinos, radon, etc.) are the cause of the signal. Several independent studies have also been performed in the literature without finding any conclusive explanation [@FernandezMartinez:2012wd; @Klinger:2016tlo; @Belli:2017hqs; @Bernabei:2018drd; @McKinsey:2018xdb]. Until the very latest phase-2 results, under reasonable particle physics and astrophysical assumptions the signal was consistent in both amplitude and phase with that expected from Weakly Interacting Massive Particles (WIMPs). Assuming the Standard Halo Model (SHM) and elastic scattering, the best-fit masses and cross sections are well-known: a light DM with mass $\sim 10$ GeV scattering mainly on sodium (light mass solution), or a heavy DM with mass $\sim 70$ GeV scattering mainly on iodine (heavy mass solution) [@Bottino:2003cz; @Bottino:2003iu; @Gondolo:2005hh; @Fairbairn:2008gz; @Kopp:2009qt; @Schwetz:2011xm; @DelNobile:2015lxa; @Herrero-Garcia:2015kga].
The main issue with a DM interpretation of the DAMA modulation is that no other independent experiment has observed any DM signal, whereas compatibility with DAMA would require such signals to have been seen under currently understood particle physics and astrophysics scenarios. For example, LUX [@Akerib:2016vxi], XENON1T [@Aprile:2017iyp] and PandaX [@Cui:2017nnn] appear to exclude a DM origin of the DAMA observations. There is no currently accepted explanation that reconciles DAMA’s signal with the absence of results in all other experiments, e.g., see Refs. [@Kopp:2009qt; @Savage:2010tg; @Schwetz:2011xm]. Furthermore, in the last few years halo-independent methods have been designed and applied to DAMA, and they appear to exclude a DM origin of the results independently of the assumed velocity distribution [@McCabe:2011sr; @Frandsen:2011ts; @Frandsen:2011gi; @HerreroGarcia:2011aa; @HerreroGarcia:2012fu; @Bozorgnia:2013hsa; @Gelmini:2016pei]. This has motivated a large experimental effort to reproduce the DAMA experiment with NaI crystals in order to independently confirm or reject it [@Amare:2014jta; @Fushimi:2015sew; @Shields:2015wka; @Angloher:2016ooq; @Thompson:2017yvq]. The SABRE experiment [@Shields:2015wka] plans to have a northern and southern hemisphere pair of NaI detectors to search for a seasonal correlation or anti-correlation of any DAMA-like annual modulation signal [@Froborg:2016ova]. Interestingly, the COSINUS experiment [@Angloher:2016ooq] aims to also measure the constant rate by developing a cryogenic detector. As studied in Ref. [@Kahlhoefer:2018knc], a null result in the latter experiment may rule out a DM explanation of DAMA model-independently.
Very recently, the DAMA collaboration released the long-awaited phase-2 results with a lower energy threshold, with three new energy bins below 2 keVee [@DAMAphase2]. As first pointed out in version 3 of Ref. [@Kahlhoefer:2018knc], and studied also in Ref. [@Baum:2018ekm], the consistency of the DM interpretation of DAMA’s signal is under question both for the light and the heavy DM mass solutions mentioned above, even before considering its compatibility with other null-result experiments. The issue is that below 2 keVee the two standard DM solutions behave very differently with decreasing energy: the light DM gives rise to scatterings off iodine, increasing its rate significantly in the case of spin-independent (SI) interactions, while for the heavy DM, which scatters only off iodine, the modulation amplitude decreases, eventually giving rise to a phase flip. This was already pointed out in Ref. [@Kelso:2013gda]. With the new data, we also find in this work that the one-component SI DM explanation of the annual modulation is disfavoured or excluded unless isospin-violating (IV) couplings are invoked. In particular, we find that the light DM solution is excluded at [$5.3 \sigma$]{}, while the heavy one is excluded at [$3.0\sigma$]{}, in agreement with Refs. [@Kahlhoefer:2018knc; @Baum:2018ekm]. In the case of IV couplings, an adequately tuned destructive interference between the DM couplings to neutrons and protons can still correctly reproduce the spectrum by suppressing the interactions with iodine.
The main observation of this short letter is that the modulation amplitude observed by DAMA at low energies can be reproduced in a natural way by a combination of two DM particles, without the need to invoke fine-tuned IV couplings. As we quantify explicitly below, two DM components give an excellent fit to the data for SI isospin-conserving (IC) couplings, equal to, or better than, that of the one-component scenario with IV couplings. From a theoretical perspective, it is also not difficult to envisage that a dark sector, which accounts for $25\%$ of the energy density of the Universe, consists of more than one light stable state, similar to the way in which the visible sector (which accounts for only $5\%$ of the energy budget) has a few stable particles: protons, electrons, the lightest neutrino, helium nuclei, etc. Only a few works have studied the case of multi-component DM in direct detection, focusing on constant event rates [@Profumo:2009tb; @Batell:2009vb; @Adulpravitchai:2011ei; @Dienes:2012cf; @Chialva:2012rq; @Bhattacharya:2016ysw; @Bhattacharya:2017fid; @Herrero-Garcia:2017vrl]. In the following, we adopt a purely phenomenological approach to try to reproduce the observed modulated signal without going into further details regarding the model building, which would affect the interactions and the abundances of the DM components. This will be studied in a future work [@future].
This letter is structured as follows. In Sec. \[sec:annualmod\] we introduce the relevant notation to describe the DM annual modulation signal in direct detection experiments. We discuss the fitting procedure used in Sec. \[sec:implementation\]. In Sec. \[sec:fit1DM\] we show results of the fits of the vanilla one-component scenario with and without IV couplings to the full DAMA data set. We show the main results of this paper for two-component DM in Sec. \[sec:fit2DM\]. We present best-fit values and two-dimensional confidence level contours for two-component DM. This is done for the simplest case with IC couplings and also for the more general case with IV interactions. We give our conclusions and final remarks in Sec. \[sec:conc\].
The dark matter annual modulation signal {#sec:annualmod}
========================================
In this section, we present the relevant expressions for the direct detection of DM comprising two components with masses $m_{1,2}$ (we take $m_1<m_2$ such that component 1 is always lighter than component 2), SI cross-sections with protons $\sigma_{1,2}^p$, and local energy densities $\rho_{1,2}$. We impose the constraint $\rho_1+\rho_2=\rho_{\rm loc}$, where $\rho_{\rm loc}$ is the observed DM mass density, which we fix to $0.4 \, {\rm GeV/cm}^3$. Following the notation of Ref. [@Herrero-Garcia:2017vrl], it is useful to define the following ratios $$\label{eq:rhos}
r_{\rho} \equiv \frac{\rho_2}{\rho_1}\,,\qquad r_{\sigma} \equiv \frac{\sigma_2^p}{\sigma_1^p}\,,$$ such that $\rho_2 = \rho_{\rm loc}\,r_\rho/(1+r_\rho)$. In this work, we focus on the DM annual modulation signal [@Drukier:1986tm; @Freese:1987wu]. For the case of two-component DM, we can write the amplitude of the annual modulation rate in a detector with target nucleus labelled by $(A,Z)$, for SI interactions, as (see for instance Refs. [@HerreroGarcia:2011aa; @Freese:2012xd; @HerreroGarcia:2012fu; @Herrero-Garcia:2015kga]) $$\begin{aligned}
\label{eq:rate_tot}
\mathcal{M}_{A}(E_{R}) &=\mathcal{M}_{A}^{(1)}(E_R) + \mathcal{M}_{A}^{(2)}(E_R) \nonumber\\&= \frac{x_A\,\rho_{\rm loc}\,\sigma_1^p}{2 \,(1+r_\rho)\,\mu_{p1}^2} \,F_{A}^2(E_{R}) \,\left(A_{\rm eff,1}^2\,\frac{\delta\eta(v_{m,A}^{(1)})}{m_1}+\,r_{\rho}\,r_{\sigma}\,A_{\rm eff,2}^2\, \frac{\mu_{p1}^2}{\mu_{p2}^2}\,\frac{\delta\eta(v_{m,A}^{(2)})}{m_2}\right)\,.\end{aligned}$$ Here $F_A(E_{R})$ is the SI nuclear form factor of element $A$, for which we use the Helm parametrisation [@LEWIN199687; @PhysRev.104.1466]. $x_A$ is the mass fraction of the target $A$ in the detector. The total modulation amplitude is given by $\mathcal{M}(E_{R})=\sum_A \mathcal{M}_{A}(E_{R})$. The modulated amplitude for DM consisting of just one component is obtained from Eq. \[eq:rate\_tot\] in the limit $r_\rho=r_\sigma=0$.
In the following we use the label $\alpha=1,2$ for the two DM components. In addition to the local energy densities, the astrophysics enters Eq. \[eq:rate\_tot\] through $$\label{eq:Aeta}
\delta\eta (v_{m,A}^{(\alpha)}) = \frac{1}{2}\left[\eta (v_{m,A}^{(\alpha)},t_{\rm J})- \eta (v_{m,A}^{(\alpha)},t_{\rm J}+0.5) \right] \,,$$ where $t_{\rm J}$ ($t_{\rm J}+0.5$), measured in years, corresponds to June (December) 2nd, which for minimum velocities $v_{m,A}^{(\alpha)}$ above $\sim 200$ km s$^{-1}$ is the time of the year when the velocity of the WIMP flow in the Earth’s frame is maximum (minimum). For $v_{m,A}^{(\alpha)} <200$ km s$^{-1}$, the situation is the opposite: $t_{\rm J}$ ($t_{\rm J}+0.5$) corresponds to minimum (maximum) WIMP flow, such that $\delta\eta (v_{m,A}^{(\alpha)})$ and therefore the modulation amplitude $\mathcal{M}_{A}^{(\alpha)}(E_R)$ become negative. In other words, for small enough $v_{m,A}^{(\alpha)}$, the phase of the modulation flips by 6 months. As we show below, this is precisely what happens for heavy DM components scattering off iodine in DAMA at the lowest energies. Here we defined the halo integral as $$\label{eq:eta}
\eta (v_{m,A}^{(\alpha)},t) =
\int_{v > v_{m,A}^{(\alpha)}} \negthickspace \negthickspace d^3 v \frac{f^{(\alpha)}_{\rm det}(\vec{v},t)}{v} \, \qquad \text{with}\qquad v^{(\alpha)}_{m,A}=\sqrt{ \frac{m_A E_{R}}{2 \mu_{\alpha A}^2}}\,,$$ where $v_{m,A}^{(\alpha)}$ is the minimal velocity of the DM particle $\alpha$ required to produce a recoil of energy $E_R$ in element $A$, and $f_{\rm det}(\vec v, t)$ describes the distribution of DM particle velocities in the detector rest frame, which we assume to be equal for both DM species. It can be written in terms of the galactic velocity distributions by doing a Galilean boost, $f_{\rm det}(\vec{v},t) = f_{\rm gal}(\vec{v} + \vec{v}_e(t))$, where $\vec{v}_e(t)$ is the velocity vector of the Earth in the galaxy rest-frame. We use the SHM, with a Maxwellian velocity distribution $$\begin{aligned}
f_{\rm gal}(\vec{v})=\frac{1}{N_\text{esc}}\left(\frac{3}{2 \pi {\sigma}^2_{H_\alpha}}\right)^{3/2} \exp{\Big(-\frac{3\vec{v}^2}{2{\sigma}_{H_\alpha}^2}\Big)}\,,\end{aligned}$$ with a cut-off at the escape velocity $v_{\rm esc} =550\,\rm km\, s^{-1}$ and $N_\text{esc} = \text{erf}(z)-\frac{2}{\sqrt{\pi}}ze^{-z^2}$, where $z = \sqrt{3/2}\,v_\text{esc}/\sigma_{H_\alpha}$. The velocity dispersion of each DM component, ${\sigma}_{H_\alpha}$, may in principle be different. Indeed, if both components reach equilibrium in the halo, the velocity dispersions are expected to become mass dependent [@Foot:2012cs]. In this case one can write ${\sigma}_{H_\alpha} = \bar{\sigma}_H\sqrt{\bar{m}/m_\alpha}$, where $\bar{m} = \sum n_\alpha m_\alpha/\sum n_\alpha$, with $n_\alpha=\rho_\alpha/m_\alpha$ the number density of the DM particle $\alpha$, and $\bar{\sigma}_{H}\sim 270$ km s$^{-1}$ is the canonical velocity dispersion. Throughout the paper we generally assume equal velocity dispersions for both components, ${\sigma}_{H_1}={\sigma}_{H_2}= \bar{\sigma}_{H}$, but we briefly discuss how our results change in the case in which they are given by the mass-dependent relationship provided above.
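The minimum-velocity formula for $v_{m,A}^{(\alpha)}$, combined with DAMA-like quenching factors ($Q_{\rm Na}=0.3$, $Q_{\rm I}=0.09$, as used later in the text), makes the phase-flip argument easy to check numerically. A minimal sketch, in which target masses are approximated as $A$ times the atomic mass unit:

```python
import math

C_KMS = 2.99792458e5   # speed of light in km/s
AMU_GEV = 0.9315       # GeV per atomic mass unit (approximation)

def v_min_kms(m_dm_gev, a_target, e_ee_kevee, quenching):
    # Minimum DM speed (km/s) to produce a recoil observed at e_ee_kevee:
    # v_min = sqrt(m_A * E_R / (2 mu^2)), with E_R = E_ee / Q
    m_a = a_target * AMU_GEV                  # target nucleus mass in GeV
    e_r = (e_ee_kevee / quenching) * 1e-6     # true recoil energy in GeV
    mu = m_a * m_dm_gev / (m_a + m_dm_gev)    # DM-nucleus reduced mass in GeV
    return math.sqrt(m_a * e_r / (2.0 * mu ** 2)) * C_KMS

# At the new 1 keVee threshold:
print(v_min_kms(10.0, 23, 1.0, 0.30))    # light DM on sodium: ~263 km/s
print(v_min_kms(70.0, 127, 1.0, 0.09))   # heavy DM on iodine: ~175 km/s,
                                         # below ~200 km/s, so the phase flips
```

The heavy-mass solution drops below the $\sim 200$ km s$^{-1}$ boundary in the new low-energy bins, which is the origin of the sign change in $\delta\eta$ discussed above, while the light-mass solution on sodium stays safely above it.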
In order to properly fit a model to the DAMA data we must first take into account the finite energy resolution of the detector. This is done via the convolution of the theoretical modulation amplitude $\mathcal{M}(E_R)$ and the differential response function $\phi(E_R,E_{\text{ee}})$[^5] $$\begin{aligned}
\label{eq::eres}
M(E_\text{ee}) = \int_0^\infty\,dE_R\:\epsilon(E_\text{ee})\,\phi(E_R,E_{\text{ee}})\,\mathcal{M}(E_R)\,,\end{aligned}$$ where $\epsilon(E_{\text{ee}})$ is the detector efficiency and $E_\text{ee}$ is the electron equivalent energy, related to the true recoil energy $E_R$ through the target-dependent quenching factors $E_\text{ee} = Q_A\,E_R$. We use $Q_{\rm Na}= 0.3$ and $Q_{\rm I} = 0.09$ for the quenching factors of sodium and iodine. We also use the differential response function provided in Ref. [@Baum:2018ekm], so that we can compare our results to the one-component case studied there. The modulation amplitude in bin $i$, $M_i$, is obtained by averaging the corrected modulation amplitude of Eq. \[eq::eres\] over each bin.
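The quenching relation $E_\text{ee}=Q_A\,E_R$ fixes, for a given observed energy, the minimal DM speed $v_{m,A}$ defined earlier. A small sketch (illustrative only; nuclear masses in GeV from $A$ times an atomic mass unit, with $Q_{\rm Na}=0.3$ and $Q_{\rm I}=0.09$ as in the text):

```python
import math

C_KMS = 2.998e5  # speed of light [km/s]
AMU = 0.9315     # GeV per atomic mass unit
Q = {"Na": 0.30, "I": 0.09}                   # quenching factors used in the text
M_TARGET = {"Na": 23 * AMU, "I": 127 * AMU}   # approximate nuclear masses [GeV]

def e_recoil(e_ee, target):
    """True nuclear recoil energy [keV] from the electron-equivalent energy [keVee]."""
    return e_ee / Q[target]

def v_min(e_ee, m_dm, target):
    """Minimal DM speed [km/s] producing E_ee in the given target:
    v = sqrt(m_A E_R / 2) / mu, with mu the DM-nucleus reduced mass."""
    m_a = M_TARGET[target]
    mu = m_dm * m_a / (m_dm + m_a)        # reduced mass [GeV]
    e_r = e_recoil(e_ee, target) * 1e-6   # keV -> GeV
    return math.sqrt(m_a * e_r / 2.0) / mu * C_KMS
```

For a fixed observed energy, heavier DM needs a smaller minimal speed, which is the basic reason why light and heavy components populate different parts of the spectrum.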
Fitting the full DAMA/LIBRA data set {#sec:results}
====================================
In the next section, we briefly discuss details of our numerical scan and the data used. Afterwards, we show results of the fits to the one and the two-component DM scenarios in Secs. \[sec:fit1DM\] and \[sec:fit2DM\], respectively.
Analysis methods {#sec:implementation}
----------------
In order to test the compatibility of a particular DM model with a given set of experimental data, one can employ the method of maximum likelihood. For the case of DAMA we parameterise the likelihood function with a binned Gaussian distribution $$\begin{aligned}
\label{Likelihood:Defn}
\mathcal{L}(\vec{\mu}\,|\,\theta) =\prod\limits^N_{i=1}\, \frac{1}{\sqrt{2\pi}\sigma_i}e^{-\frac{\left[\mu_i-M_i (\theta) \right]^2}{2\sigma_i^2} }\,,\end{aligned}$$ where $N$ is the number of bins and $M_i(\theta)$ is the expected modulation amplitude in bin $i$, defined below Eq. \[eq::eres\], which depends on different DM parameters denoted by $\theta=(\theta_1, ..., \theta_n)$, where $n$ is the total number of fitted parameters. $\mu_i$ is the central value of the observed annual modulation amplitude in bin $i$, and $\sigma_i$ its error. Maximising the likelihood in Eq. with respect to the parameters $\theta$ is equivalent to minimising -2 times the log-likelihood as follows: $$\begin{aligned}
\label{chisquare:Defn}
\min_{\theta}\big(-2\ln\mathcal{L}\big) =\min_{\theta} \sum_{i=1}^N\frac{\left[\mu_i-M_i(\theta)\right]^2}{\sigma_i^2} \equiv \min_{\theta} \chi^2(\theta)\,.\end{aligned}$$ This is the familiar *chi-square* test statistic which, as shown by Pearson [@Pearson], in the limit of large $\mu_i$ follows a $\chi^2$ distribution with $N-n$ degrees of freedom (dof). Since the mean of a $\chi^2$ distribution is the number of dof, the *goodness of fit* can be determined by the value of the $\chi^2$ at the minimum, $\chi_{\rm min}^2$, divided by the number of dof.
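The minimisation above can be illustrated with a toy one-parameter model (hypothetical data, model and grid, chosen purely for illustration):

```python
def chi_square(mu, sigma, model):
    """-2 ln L up to an additive constant: sum_i (mu_i - M_i)^2 / sigma_i^2."""
    return sum((m - M) ** 2 / s ** 2 for m, M, s in zip(mu, model, sigma))

def grid_fit(mu, sigma, model_fn, grid):
    """Crude one-dimensional grid minimisation of the chi-square over `grid`."""
    best = min(grid, key=lambda a: chi_square(mu, sigma, model_fn(a)))
    return best, chi_square(mu, sigma, model_fn(best))
```

With data generated from a known parameter value, the grid search recovers that value and $\chi^2_{\rm min}\simeq 0$, as expected for a perfect model.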
In our $\chi^2$ fit we use the whole DAMA annual modulation data set, which combines results from DAMA/NaI and DAMA/LIBRA phases 1 and 2. The total exposure is $2.46\,\text{ton\,y}$. The results are provided in slide 22 of Ref. [@DAMAphase2]. We use the data of Tab. I of Ref. [@Baum:2018ekm], which gives the observed modulation amplitudes $\mu_i$ in $N=10$ bins in the energy range \[1, 20\] keVee.
Parameter Prior Range Prior type
------------ ----------------------------------------- -------------
$m_1$ \[1, 50\] GeV Flat
$m_2$ \[$m_1$, 1000\] GeV Flat
$\sigma_p$ \[$10^{-42}$, $10^{-38}$\] ${\rm cm}^2$ Logarithmic
$r_\rho$ \[0.001, 1000\] Logarithmic
: Parameters and prior ranges used for the two-component dark matter fits. In the four-dimensional fit, the four parameters are used with $\kappa_1=\kappa_2=r_\sigma=1$.[]{data-label="tab:priors"}
We use the open source software [Minuit]{} [@James:1994vla] to compute the best-fit points, which we give in Tabs. \[tab:fit1DAMASI\] and \[tab:fit2DAMA\]. For completeness, we also compute the corresponding p-values for the fits, as well as the corresponding number of equivalent two-sided Gaussian standard deviations $\mathcal{Z}$. We consider a ‘good’ fit to be one that yields a p-value $> 0.05$, which corresponds to $\mathcal{Z} < 1.96$. Of course, this is to some extent an arbitrary choice. For the two-component scenarios, we also employ the [MultiNest]{} implementation of the nested sampling algorithm [@Feroz:2008xx; @Feroz:2007kg; @Feroz:2013hea]. The latter is well suited for dealing with a highly multimodal target function, which is the case for two-component scenarios. We show results with $10^4$ live points and a tolerance of 0.01. For the two-component DM fits we use the priors shown in Tab. \[tab:priors\]. We determine the distribution of the profile likelihood ratio (PLR) $\mathcal{L}/\mathcal{L}_\text{max}$ throughout the parameter space from the obtained samples, and derive the frequentist 1 and 2$\sigma$ C.L. contours using [pippi]{} [@Scott:2012qh].[^6]
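The p-values and $\mathcal{Z}$ values quoted in the tables follow from standard conversions, which can be sketched as follows (for an even number of dof the $\chi^2$ survival function has a closed form; the two-sided $\mathcal{Z}$ comes from the inverse Gaussian CDF):

```python
import math
from statistics import NormalDist

def chi2_pvalue_even(x, k):
    """Survival function P(chi^2_k > x) for even k:
    exp(-x/2) * sum_{i < k/2} (x/2)^i / i!."""
    assert k % 2 == 0
    h = x / 2.0
    term, total = 1.0, 0.0
    for i in range(k // 2):
        if i > 0:
            term *= h / i
        total += term
    return math.exp(-h) * total

def z_two_sided(p):
    """Number of two-sided Gaussian standard deviations equivalent to p-value p."""
    return NormalDist().inv_cdf(1.0 - p / 2.0)
```

These conversions reproduce the entries of the tables, e.g. $\chi^2=23.1$ with 8 dof gives $p\simeq 3.2\times10^{-3}$, and $p=0.05$ gives $\mathcal{Z}\simeq 1.96$.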
One-component dark matter {#sec:fit1DM}
-------------------------
We perform fits of one-component DM (1DM) to the DAMA data. We do this for SI interactions with both IC and IV couplings. The former scenario involves just 2 free parameters, while the latter has 3. We show the results for the light and the heavy DM mass solutions. In Tab. \[tab:fit1DAMASI\] we show the best-fit points obtained, as well as the values of $\chi_{\rm min}^2/{\rm dof}$, the p-values and $\mathcal{Z}$. As can be observed, unless IV couplings are invoked, there is no good fit to DAMA.
1DM $m$ $\sigma_1^p$ $\kappa$ $\chi_{\rm min}^2/{\rm dof}$ p-value $\mathcal{Z}$
---------- ------ -------------- ---------- ------------------------------ --------------------- --------------- -- -- --
IC light 8.40 1.25 - 48.0/8 $1.0\times 10^{-7}$ 5.3
IC heavy 54.0 0.058 - 23.1/8 $3.2\times 10^{-3}$ 3.0
IV light 10.9 75.8 -0.67 12.1/7 $0.10$ 1.7
IV heavy 50.4 30.0 -0.65 13.1/7 $0.07$ 1.8
: Results of the one-component dark matter fit to DAMA data. The DM mass $m$ is in GeV and the scattering cross section with protons $\sigma_1^p$ is in units of $10^{-40}\,{\rm cm}^2$. The different fits are for both the light and the heavy dark matter mass solutions, which correspond to scatterings mainly in Na and I, respectively. We show isospin-conserving (IC) and isospin-violating (IV) couplings. The dashes refer to the parameters that are fixed to 1 and therefore are not varied in the fit. The last two columns provide the p-value and the number of Gaussian-equivalent standard deviations $\mathcal{Z}$ (two-sided).[]{data-label="tab:fit1DAMASI"}
It is quite easy to visualise how a combination of both light and heavy DM could give a spectrum that reproduces the observed signal in the lowest energy bins. This is the main idea of this paper, which we study in the next section.
Two-component dark matter {#sec:fit2DM}
-------------------------
In this section we perform a fit of the annual modulation amplitude generated by two DM particles (2DM) to the DAMA data. We first take IC couplings and fix $r_\sigma=1$, leaving $r_\rho$ free. These models have 4 free parameters. The energy density and the cross section enter identically in the rate, $ \mathcal{M}^{(\alpha)} \propto \rho_\alpha \sigma_\alpha$. Therefore, apart from the overall normalisation, one can fix $r_\sigma$ and consider $r_\rho$ as a free parameter. We have also checked that the other scenario (fixing $r_\rho=1$ and leaving $r_\sigma$ free) gives a similar fit with a cross section that is roughly twice that of the case of $r_\sigma=1$ because $r_\rho \simeq 3$, confirming the expectations. We then do a general fit with free $r_\rho$ and $r_\sigma$ and IV couplings, which involves 7 free parameters.
We show in Tab. \[tab:fit2DAMA\] the results of the fits of two-component DM (2DM) to the annual modulation signal observed by DAMA. We show the best-fit points of the different fits performed, as well as the values of $\chi_{\rm min}^2/{\rm dof}$, the p-values and the Gaussian-equivalent number of standard deviations $\mathcal{Z}$. In the first two rows we show the two IC best-fit solutions, while in the last one we show the more general case of IV couplings. For IC couplings two solutions exist. One can observe that solution B, with the larger mass splitting between the two DM particles, is a better fit than solution A. Although the general IV model (last row) produces a smaller $\chi_{\rm min}^2$ than both IC cases, its larger number of free parameters yields a smaller p-value and hence a worse fit. Therefore, in the following we only focus on the four-parameter IC cases.
2DM $m_1$ $m_2$ $\sigma_1^p$ $r_\rho$ $r_\sigma$ $\kappa_1$ $\kappa_2$ $\chi_{\rm min}^2/{\rm dof}$ p-value $\mathcal{Z}$
------------ ------- ------- -------------- ---------- ------------ ------------ ------------ ------------------------------ --------- --------------- --
IC A 19.8 76.1 0.13 3.04 - - - 10.7/6 $0.10$ 1.7
IC B 8.19 170 2.55 0.07 - - - 9.17/6 $0.16$ 1.4
IV general 17.8 70.7 5.84 32.2 1.15 0.03 -0.52 7.80/3 0.05 2.0
: Same as Tab. \[tab:fit1DAMASI\] for the two-component dark matter fit to DAMA data. The different fits are performed for the isospin-conserving (IC) simplified case (first two rows), as well as in the most general isospin-violation (IV) one (last row). Two solutions, A and B, exist in the case of IC couplings.[]{data-label="tab:fit2DAMA"}
Let us also briefly mention how our results change in the case of mass-dependent velocity dispersions. To investigate their impact, we performed a dedicated [Minuit]{} fit around the parameters given by the IC solution A in Tab. \[tab:fit2DAMA\], in which the velocity dispersions were allowed to be mass-dependent. We obtain $\chi_{\rm min}^2=10.69$, with best-fit values $m_1=19.66$ GeV, $m_2=75.8$ GeV, $\sigma_1^p=0.13\times 10^{-40}\,{\rm cm}^2$ and $r_\rho=3.02$. Therefore, the results are very similar to those of the mass-independent velocity-dispersion case.
![Best-fit spectra of the modulation amplitude for a two-component dark matter model with isospin-conserving couplings (solid black line). Two solutions are possible, A (*left panel*) and B (*right panel*). The DAMA experimental results are shown as red points. We also show the individual contributions in dotted green (component 1) and dashed blue (component 2). Below the phase flip, which occurs at $1.6\,(3.6)$ keVee for solution A (B), the contributions of the two components partially neutralize each other in the combined modulation amplitude.[]{data-label="fit:2DM"}](Figures/4D_fit_L.pdf "fig:") ![Best-fit spectra of the modulation amplitude for a two-component dark matter model with isospin-conserving couplings (solid black line). Two solutions are possible, A (*left panel*) and B (*right panel*). The DAMA experimental results are shown as red points. We also show the individual contributions in dotted green (component 1) and dashed blue (component 2). Below the phase flip, which occurs at $1.6\,(3.6)$ keVee for solution A (B), the contributions of the two components partially neutralize each other in the combined modulation amplitude.[]{data-label="fit:2DM"}](Figures/4D_fit_H.pdf "fig:")
In Fig. \[fit:2DM\] we show with a solid black line the binned modulation amplitude for the best-fit points for two-component DM in the case of IC couplings for solution A (left panel) and solution B (right panel). We also show the individual contributions in dotted green (component 1) and dashed blue (component 2) lines, and the DAMA experimental results as red points. We have checked that solution A corresponds to scattering of both DM components *dominantly* off iodine. The fact that the lighter component scatters dominantly off iodine with a negligible contribution from sodium is due to two factors: first, the smaller quenching factor in iodine compensates its larger mass, translating into smaller $v^{(1)}_{\rm m,I} (E_R)$ and thus into larger $\eta(v^{(1)}_{\rm m,I})$; second, the $A^2$ enhancement factor for iodine due to coherent SI scatterings. For this solution, component 1 dominates for energies roughly below $2$ keVee. On the other hand, for solution B, component 2 scatters off iodine for all shown recoil energies, while component 1 scatters off sodium above roughly $2$ keVee, with a significant contribution from iodine scatterings below that energy, which explains the observed rapid increase in its spectrum at low energies. For this solution, component 1 always has a larger modulation amplitude than component 2. Above $\sim4$ keVee, one observes that the modulation amplitude of the combination of both DM components is roughly equal to that of component 2 (1) for solution A (B). As expected in the SHM, the phase flip for the heavy state (component 2) occurs at an electron equivalent energy $E_{\text{ee}}$ approximately equal to $1.6\,(3.6)$ keVee for solution A (B). For energies below such phase flips, the modulation amplitude of component 2 becomes negative and therefore there is a partial cancellation in the combined modulation amplitude between the individual DM contributions. 
As can be seen, such two-component DM combinations reproduce very well the modulated spectrum observed by DAMA.
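The cancellation mechanism described above can be mimicked with purely illustrative amplitudes (the functional forms and numbers below are arbitrary toys, not the fitted spectra):

```python
import math

def combined_amplitude(e_ee, flip=1.6):
    """Toy modulation amplitudes (arbitrary units) at energy e_ee [keVee]:
    a light component, always positive and falling with energy, plus a heavy
    component whose amplitude changes sign at the phase-flip energy `flip`."""
    m_light = 2.0 * math.exp(-e_ee)   # light component, positive everywhere
    m_heavy = 0.5 * (e_ee - flip)     # heavy component, negative below the flip
    return m_light, m_heavy, m_light + m_heavy
```

Below the phase flip the heavy component subtracts from the light one, depressing the combined amplitude in the lowest bins; above it, both add constructively.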
![Profile likelihood ratio (PLR) density overlaid with 1 and 2$\sigma$ C.L. contours for the two-component isospin-conserving dark matter fit to DAMA data. We show four illustrative two-dimensional slices of the four-dimensional parameter space. The two distinct regions of high PLR density correspond to the two best-fit solutions A and B (see Tab. \[tab:fit2DAMA\]).[]{data-label="regions:4D"}](Figures/4D_4_3.pdf "fig:") ![](Figures/4D_5_2.pdf "fig:")\
![](Figures/4D_3_2.pdf "fig:") ![](Figures/4D_5_4.pdf "fig:")
In Fig. \[regions:4D\] we show the profile likelihood ratio (PLR) density $\mathcal{L}/\mathcal{L}_\text{max}$ overlaid with 1 and 2$\sigma$ C.L. contours. We only show four illustrative slices of the full four-dimensional parameter space. One immediately notices two distinct regions of high PLR density which correspond to the two solutions A and B, which we indicate in the plot. There is a slight preference for solution B (see Tab. \[tab:fit2DAMA\]). One can observe that the 1 and 2$\sigma$ regions are quite extended, with a significant range of parameters being able to reproduce the DAMA results.

In the $m_1$–$m_2$ plane (*top left panel*) one can observe that, for each of the solutions, one of the DM components has a narrow range of masses (for example component 1 for solution B, which always gives the largest contribution at all energies) while the other one (component 2 for solution B) spans a large mass range, with a contribution that is subdominant. There is a sizable range of mass splittings that provide a good fit. In the $\log_{10}(r_\rho)$–$\log_{10}(\sigma_1^p)$ plane (*top right panel*) one can observe that the two solutions are aligned in a band with negative correlation, i.e., the larger the cross section the smaller the ratio of energy densities.[^7] In the $m_1$–$\log_{10}(\sigma_1^p)$ plane (*bottom left panel*) we see that it is solution B (with lighter $m_1$ and heavier $m_2$) that has the larger cross section. In the $\log_{10}(r_\rho)$–$m_2$ plane (*bottom right panel*) the large degeneracy in the case of solution B between $m_2$ and $r_\rho$ is clearly visible. It is due to the fact that, for $m_2 \gg m_I$, the minimum velocity of the second component in iodine, $v^{(2)}_{m,I}$, approaches a constant value, so that its contribution to the amplitude only depends on the ratio $r_\rho/m_2$.
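The saturation of $v^{(2)}_{m,I}$ for $m_2 \gg m_I$ invoked above is immediate to check numerically (a sketch; $m_I \simeq 118$ GeV for iodine, $E_R=20$ keV as an illustrative recoil energy):

```python
import math

def v_min_c(e_r_kev, m_dm, m_target):
    """Minimal DM speed in units of c: sqrt(m_A E_R / (2 mu^2));
    masses in GeV, recoil energy in keV."""
    mu = m_dm * m_target / (m_dm + m_target)   # reduced mass [GeV]
    return math.sqrt(m_target * e_r_kev * 1e-6 / 2.0) / mu
```

As $m_{\rm dm}\to\infty$ the reduced mass tends to $m_A$ and $v_{\rm min}$ plateaus at $\sqrt{E_R/(2m_A)}$, so only the combination $r_\rho/m_2$ remains observable.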
Conclusions {#sec:conc}
===========
The observed annual modulation in the low energy bins below 2 keVee in the Phase-2 run DAMA/LIBRA results is no longer compatible with a one-component dark matter interpretation in the simplest spin-independent isospin-conserving scenario. The reason is easy to understand: a light DM particle that scatters predominantly off sodium produces a spectrum that steeply increases at low recoil energies due to iodine scatterings becoming allowed; conversely, a heavy DM mass produces a spectrum that falls off at the lowest energy bins. The issue is that the observed values in the lowest energy bins happen to lie, roughly speaking, between the expected modulated spectrum of a light and that of a heavy DM state. It then seems natural that a two-component model comprising both a light and heavy component can revive the DM interpretation of the low threshold data. Indeed, below a certain energy value, the modulation amplitude of the heavy DM state becomes negative, so that in the combined modulation amplitude the individual DM contributions counterbalance each other to some extent. Of course, their masses and their relative local energy densities (or similarly, the relative strength of their interactions) play an important role regarding the compatibility of the two-component DM scenario as a solution to the observed spectrum by DAMA.
In this short letter we performed a fit of two-component DM to DAMA data. Contrary to the one-component case, where only IV couplings now provide a good fit, we find very good agreement with the data also for IC couplings. Two different solutions with qualitatively different spectral behaviour are found, shown in Fig. \[fit:2DM\]. In one of them (A), scatterings are predominantly on iodine, with a crossing between the spectra of the two individual DM components at roughly 2 keVee. In the other (B), the lighter component scatters off both sodium and iodine at the lowest energies and the individual spectra do not cross. The results of the fits are summarised in Tab. \[tab:fit2DAMA\] and Fig. \[regions:4D\], and involve reasonable values of the relative energy densities and of the cross section. Although model-dependent assumptions regarding the velocity dispersions or the relic abundance of the different DM components may modify the results, the main conclusion is expected to hold, namely that the DAMA spectrum can be well reproduced by two DM particles.[^8]
Two-component DM therefore looks like a natural solution to the first part of the DAMA puzzle: the compatibility of the spectrum with that expected from DM under standard astrophysical and particle physics assumptions. The second part of the DAMA puzzle, namely the compatibility of the DAMA data with the null results of other experiments, remains unsolved. It is nevertheless interesting to speculate how it might be somewhat alleviated in the scenario considered here.
This work is supported by the Australian Research Council through the Centre of Excellence for Particle Physics at the Terascale CE110001004. MW is supported by the Australian Research Council Future Fellowship FT140100244. MW wishes to thank Lucien Boland and Sean Crosby for their administration of, and ongoing assistance with, the MPI-enabled computing cluster on which this work was performed.
[^1]: <http://orcid.org/0000-0002-3300-0029>
[^2]: <http://orcid.org/0000-0002-1203-6452>
[^3]: <http://orcid.org/0000-0001-5474-4580>
[^4]: <http://orcid.org/0000-0002-1472-1592>
[^5]: Note that the data presented by the DAMA collaboration is already corrected for the detector efficiency, so we conduct our analysis taking $\epsilon(E_\text{ee})=1$.
[^6]: We have checked the consistency of the best-fit points obtained with [Multinest]{} and with [Minuit]{} by adequately choosing the search ranges in the latter. The reason is that [Multinest]{} is well suited to finding different local minima in multi-dimensional parameter spaces, but, unless a very low tolerance and a high efficiency are used, which would make the scans very computationally expensive, the best-fit points may not be as accurate as those given by [Minuit]{}. Therefore in the tables we show those obtained with the latter.
[^7]: Therefore the figures in the $\log_{10}(r_\rho)$–$m_1$ and the $m_2$–$\log_{10}(\sigma_1^p)$ planes can be easily obtained from the ones in Fig. \[regions:4D\] and we do not show them.
[^8]: In this work we focused on the amplitude of the annual modulation. Let us also mention that the phase of the modulation is also sensitive to the DM mass via gravitational focusing, and thus in theory also to the existence of one or two DM components, and to their masses. Although this effect is expected to be small [@Alenazi:2006wu; @Lee:2013wza; @Bozorgnia:2014dqa], it would be interesting to study it in more detail.
---
abstract: 'The asymptotic behaviour of the Néron-Tate height of Heegner points on a rational elliptic curve attached to an arithmetically normalized new cusp form $f$ of weight $2$, level $N$ and trivial character is studied in this paper. By the Gross-Zagier formula, this height is related to the special value at the critical point of the derivative of the Rankin-Selberg convolution of $f$ with a certain weight-one theta series attached to some ideal class of some imaginary quadratic field. Asymptotic formulas for the first moments associated to these Dirichlet series are proved, and experimental results are carefully discussed.'
author:
- 'Guillaume Ricotta - Thomas Vidick'
date: Version of
title: Hauteur asymptotique des points de Heegner
---
[*In honour of Professor Henryk Iwaniec, for the whole of his analytic work and for his title of Doctor Honoris Causa of the Université Bordeaux 1.*]{}
Description of the problem
===============================
The systematic computational study of the Néron-Tate height of Heegner points on various elliptic curves reveals large disparities. If one considers two elliptic curves with the same conductor $N$, then the Heegner points on these two curves are the images, under the modular parametrisation, of the same special points of the modular curve of level $N$, denoted $\text{X}_0(N)$; one therefore expects their heights to be in the same ratio as the degrees of these parametrisations. However, Figure \[37A-1\] shows that the behaviour of the heights is very irregular: even though the curves $37\text{A}$ and $37\text{B}$ (in Cremona's notation ([@cremona])) have the same conductor and the same degree, the points appear slightly larger on $37\text{B}$ than on $37\text{A}$.

We will show that this intuition holds asymptotically: for fixed conductor[^1], the average over a certain subclass of discriminants of the height of the Heegner points is proportional to the degree.

To do so, we will start from the Gross-Zagier formula ([@GZ]) and argue with Dirichlet series and $L$-functions, relying mainly on a result of H. Iwaniec ([@iwaniec]).
![Behaviour of the Heegner points on the curves $37\text{A}$ and $37\text{B}$[]{data-label="37A-1"}](37AB-points.eps){height="10cm"}
[**Acknowledgements**.]{} The authors warmly thank H. Darmon for suggesting this problem to them and for his many pieces of advice and encouragement. This work was carried out during a study internship at McGill University (Montréal) for the second author and a postdoctoral fellowship at the Université de Montréal (Montréal) for the first author. The excellent working conditions offered by these two institutions contributed greatly to the completion of this article. The first author benefited greatly from the generosity and expert mathematical advice of A. Granville during his postdoctoral fellowship.
Preliminaries
=============
Let $E$ be an elliptic curve defined over ${\mathbb{Q}}$. Assume for simplicity that its conductor $N$ is squarefree and consider the set $$\mathcal{D}:=\{d\in\mathbb{Z}_-^*,\ \mu^2(d)=1,\ d\equiv \nu^2 \mod 4N,\ (\nu,4N)=1\}$$ of discriminants. For $d$ in $\mathcal{D}$, let $\mathbb{H}_d$ denote the Hilbert class field of the imaginary quadratic field $\mathbb{K}_d:={\mathbb{Q}}(\sqrt{d})$, and recall that $G_d:=\text{Gal}(\mathbb{H}_d\vert\mathbb{K}_d)$ is isomorphic to the class group of $\mathbb{K}_d$, whose cardinality is the class number $h_d$. Let $X_0(N)$ be the modular curve of level $N$ classifying pairs of elliptic curves $(E_1,E_2)$ related by a cyclic isogeny of degree $N$. An analytic description of this curve over $\mathbb{C}$ is given by the quotient of the completed Poincaré upper half-plane $\mathbb{H}\cup\left(\mathbb{Q}\cup\left\{\infty\right\}\right)$ by the action by Möbius transformations of the congruence group $\varGamma_0(N)$.
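The set $\mathcal{D}$ is straightforward to enumerate; the following sketch (illustrative only, with helper names of our choosing) lists the admissible discriminants for a given conductor:

```python
from math import gcd

def squarefree(n):
    """True if |n| has no square factor."""
    n = abs(n)
    k = 2
    while k * k <= n:
        if n % (k * k) == 0:
            return False
        k += 1
    return True

def heegner_discriminants(N, d_min):
    """Negative squarefree d >= d_min with d = nu^2 (mod 4N) for some nu coprime to 4N."""
    residues = {pow(nu, 2, 4 * N) for nu in range(1, 4 * N) if gcd(nu, 4 * N) == 1}
    return [d for d in range(-1, d_min - 1, -1)
            if squarefree(d) and d % (4 * N) in residues]
```

For instance, for $N=37$ the first admissible discriminants include $d=-3,-7,-11$; note that every $d\in\mathcal{D}$ satisfies $d\equiv 1 \bmod 4$, since squares of odd numbers are $1$ modulo $4$.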
A *Heegner point of level $N$ and discriminant $d$* is an ordered pair $(E_1,E_2)$ of elliptic curves equipped with a cyclic isogeny of degree $N$ such that $E_1$ and $E_2$ both have complex multiplication by the ring of integers $\mathcal{O}_d$ of $\mathbb{K}_d$.
Fix once and for all a square root $s_d$ of $d$ modulo $4N$ and let $\mathfrak{n}_d$ denote the following primitive integral ideal of norm $N$: $$\mathfrak{n}_d := \left(N,\frac{s_d+\sqrt{d}}{2}\right).$$ For fixed $s_d$, the set of Heegner points of level $N$ and discriminant $d$ is in bijection with the class group of $\mathbb{K}_d$ in the following sense: if $[\mathfrak{a}]$ is the element of the class group of $\mathbb{K}_d$ associated with the primitive integral ideal $\mathfrak{a}$ of $\mathcal{O}_d$, then $({\mathbb{C}}/\mathfrak{a},{\mathbb{C}}/\mathfrak{a}\mathfrak{n}_d^{-1})$ is a Heegner point of level $N$ and discriminant $d$. The point of the Poincaré upper half-plane modulo $\varGamma_0(N)$ corresponding to this Heegner point is given by $\frac{-B+\sqrt{d}}{2A}$ modulo $\varGamma_0(N)$, where $(A,B,C)$ is the quadratic form of discriminant $d$ corresponding to $[\mathfrak{a}]$, with $N\mid A$ and $B\equiv s_d\mod 2N$.
A *Heegner point of level $N$ and discriminant $d$ on $E$* is the image under the modular parametrisation $\Phi_{N,E}: X_0(N)\rightarrow E$ of a point of the form $$(E_1,E_2)=({\mathbb{C}}/\mathfrak{a},{\mathbb{C}}/\mathfrak{a}\mathfrak{n}_d^{-1}),$$ where $\mathfrak{a}$ is an ideal of the ring of integers $\mathcal{O}_d$ of $\mathbb{K}_d$.
Write P$_d=\Phi_{N,E}\left(\left(E_1,E_2\right)\right)$; this point depends only on the class of $\mathfrak{a}$ in the class group of $\mathbb{K}_d$ (once $E$ and $d$ are fixed). Write also Tr$_d=$Tr$_{\mathbb{H}_d\vert\mathbb{K}_d}($P$_d)$. The Heegner points are defined over $E(\mathbb{H}_d)$ and are permuted by $G_d$ ([@gross]). If $R$ is a number field, then $\hat{h}_R(P)$ denotes the Néron-Tate height (as defined in [@si86], VIII.9) taken over $R$ of the point $P$ with coordinates in $R$. Recall that if $S$ is a finite extension of $R$, then $\hat{h}_S(P)=[S:R]\hat{h}_R(P)$. The goal of this article is to study the average value, over the $d$ in $\mathcal{D}$, of the following two objects:

- $\hat{h}_{\mathbb{H}_d}($P$_d)$, the Néron-Tate height over $\mathbb{H}_d$ of any one of the $h_d$ Heegner points defined above. Since $\hat{h}_{\mathbb{H}_d}$ is invariant under the action of $G_d$, this height does not depend on the chosen point,

- $\hat{h}_{\mathbb{K}_d}($Tr$_d)$, the Néron-Tate height of the trace over $\mathbb{K}_d$ of any one of the Heegner points P$_d$ defined above.
B.H. Gross and D. Zagier ([@GZ]) related $\hat{h}_{\mathbb{H}_d}($P$_d)$ to the value of the derivative at $1$ of the Dirichlet series obtained by multiplying the Dirichlet $L$-function associated with the real primitive character $\chi_d(m):=\left(\frac{d}{m}\right)$ of conductor $\vert d\vert$ of the field $\mathbb{K}_d$ (the Kronecker character of the field) by the Rankin-Selberg convolution of $L(E\vert\mathbb{Q},s)$ with the zeta function $\sum_{n\geqslant 1}r_d(n)n^{-s}$ of the class of principal ideals of $\mathbb{K}_d$, that is $$\begin{aligned}
\label{lesd}
L_d(E,s):=\left(\sum_{\substack{m\geqslant 1 \\
(m,N)=1}}\frac{\chi_d(m)}{m^{2s-1}}\right)\times\left(\sum_{n\geqslant 1}\frac{a_n r_d(n)}{n^s}\right)\end{aligned}$$ where, for every non-zero natural number $n$, $r_d(n)$ denotes the number of principal ideals of $\mathbb{K}_d$ of norm $n$.
If $E$ is an elliptic curve defined over ${\mathbb{Q}}$ and $d$ is in $\mathcal{D}$, then $$\begin{aligned}
\label{gzpoint}
L^\prime_d(E,1)= \frac{2\Omega_{E,N}}{u^2\sqrt{-d}}\, \hat{h}_{\mathbb{H}_d}(\emph{P}_d),\\
\label{gztrace}
L^\prime(E\vert\mathbb{K}_d,1)=\frac{2\Omega_{E,N}}{u^2\sqrt{-d}}\, \hat{h}_{\mathbb{K}_d}(\emph{Tr}_d)\end{aligned}$$ where $L(E\vert\mathbb{K}_d,s)$ is the $L$-function of $E$ over the field $\mathbb{K}_d$, $2u$ is the number of roots of unity of $\mathbb{K}_d$, and $\Omega_{E,N}=Im(\omega_1\bar{\omega}_2)$ is the complex volume of $E$ (that is, twice the area of a fundamental parallelogram of $E(\mathbb{C})$).
Finally, we recall the obvious relation $$\hat{h}_{\mathbb{K}_d}(\text{Tr}_d)=\hat{h}_{\mathbb{H}_d}(\text{P}_d)+\sum_{\sigma\in G_d\backslash \{Id\}} <\text{P}_d,\text{P}_d^{\sigma}>_{\mathbb{H}_d}$$ between our two objects of study ($<\cdot,\cdot>_{\mathbb{H}_d}$ denotes the Néron-Tate bilinear form over $\mathbb{H}_d$). The third term represents the angle formed by the Heegner points among themselves, and it will be interesting to analyse it (see Section \[5\]).
The traces {#traces}
==========
Setting up
-------------
Here it is necessary to argue according to the rank of the curve $E$. Indeed, let $L(E\vert\mathbb{Q},s):=\sum_{n\geqslant 1}a_n n^{-s}$ be its $L$-series over ${\mathbb{Q}}$, defined a priori on $\Re{(s)}>\frac{3}{2}$. By the work of A. Wiles and R. Taylor (**[@Wi]** and **[@TaWi]**), there exists a primitive cusp form $f$ of level $N$, weight $2$ and trivial character such that $$L(E\vert\mathbb{Q},s)=L(f,s)$$ on $\Re{(s)}>\frac{3}{2}$. Consequently, $L(E\vert\mathbb{Q},s)$ admits a holomorphic continuation to $\mathbb{C}$ and satisfies the functional equation $$\left(\frac{\sqrt{N}}{2\pi}\right)^s\varGamma(s)L(E\vert\mathbb{Q},s)=\omega\left(\frac{\sqrt{N}}{2\pi}\right)^{2-s}\varGamma(2-s)L(E\vert\mathbb{Q},2-s)$$ where $\omega=\pm 1$ is an Atkin–Lehner eigenvalue of $f$. For every discriminant $d$ in $\mathcal{D}$, define the $L$-function of $E$ over $\mathbb{Q}$ twisted by $\chi_d$ on $\Re{(s)}>\frac{3}{2}$ by $$L(E\vert\mathbb{Q}\times\chi_d,s):=\sum_{n\geqslant 1}\frac{a_n\chi_d(n)}{n^s}.$$ Then $L(E\vert\mathbb{Q}\times\chi_d,s)$ admits a holomorphic continuation to $\mathbb{C}$ and satisfies the functional equation $$\left(\frac{\vert d\vert\sqrt{N}}{2\pi}\right)^s\varGamma(s)L(E\vert\mathbb{Q}\times\chi_d,s)=\omega_d\left(\frac{\vert d\vert\sqrt{N}}{2\pi}\right)^{2-s}\varGamma(2-s)L(E\vert\mathbb{Q}\times\chi_d,2-s)$$ where $\omega_d:=\omega\chi_d(-N)=-\omega$ (see [@IwKo]). With these notations, the factorisation $$\label{lesurk}
L(E\vert\mathbb{K}_d,s)=L(E\vert\mathbb{Q},s)L(E\vert\mathbb{Q}\times\chi_d,s)$$ holds (see [@Da]), whence $$L^\prime(E\vert\mathbb{K}_d,1)=L^\prime(E\vert\mathbb{Q},1)L(E\vert\mathbb{Q}\times\chi_d,1)+L(E\vert\mathbb{Q},1)L^\prime(E\vert\mathbb{Q}\times\chi_d,1).$$ Thus the study of the average height of the traces of Heegner points over the discriminants $d$ in $\mathcal{D}$ reduces to the study of the central values of the twisted $L$-functions when $E$ has analytic rank $1$, and to the study of the central derivatives of the twisted $L$-functions when $E$ has analytic rank $0$, since in that case $\omega=+1$.
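The sign computation $\omega_d=\omega\chi_d(-N)=-\omega$ rests on the fact that $\chi_d(-N)=-1$ for every $d$ in $\mathcal{D}$: $d$ is negative, so $\chi_d(-1)=-1$, while $d$ is a square modulo $N$, so $\chi_d(N)=1$. A quick numerical sanity check; this is only a sketch, with a minimal Kronecker-symbol routine (sufficient for odd $d$ and squarefree lower arguments), and the discriminants $-23$, $-79$, $-103$ are sample elements of $\mathcal{D}$ for the illustrative choice $N=26$:

```python
def kronecker(d, m):
    """Kronecker symbol (d/m) for odd d and squarefree m (possibly negative).

    Not a general-purpose implementation -- just enough to evaluate chi_d(-N).
    """
    if m < 0:
        return (-1 if d < 0 else 1) * kronecker(d, -m)
    result = 1
    p = 2
    while m > 1:
        if m % p == 0:
            m //= p
            if p == 2:
                result *= 1 if d % 8 in (1, 7) else -1   # (d/2) rule
            else:
                r = pow(d % p, (p - 1) // 2, p)          # Euler's criterion: (d/p)
                result *= 1 if r == 1 else -1
        else:
            p += 1
    return result

N = 26
# sample d in D: negative, congruent to 1 mod 4, squares of units mod 4N = 104
# (9^2 = -23, 5^2 = -79, 1^2 = -103 modulo 104)
for d in (-23, -79, -103):
    assert d % 4 == 1
    assert kronecker(d, N) == 1    # d is a square modulo N ...
    assert kronecker(d, -N) == -1  # ... yet chi_d(-N) = -1, so omega_d = -omega
print("chi_d(-N) = -1 for all sample d")
```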
Curves of analytic rank $0$
------------------------------
Let us recall the theorem obtained by H. Iwaniec in [@iwaniec]. Before that, we fix once and for all the two notations $$\label{gammaN}
\gamma(4N):=\#\{d\mod 4N, d\equiv \nu^2\mod 4N, (\nu,4N)=1\},$$ and $$\label{cN}
c_N:=\frac{3\gamma(4N)}{\pi^2N}\prod_{\substack{p\in\mathcal{P} \\
p\mid 2N}} \left(1-\frac{1}{p^2}\right)^{-1}.$$
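The quantity $\gamma(4N)$ is simply the number of squares of units in $(\mathbb{Z}/4N\mathbb{Z})^*$, so both notations are elementary to evaluate numerically. A short sketch (the naive prime loop in `c_N` is adequate only for the small conductors considered here):

```python
from math import gcd, pi

def gamma(M):
    """gamma(M) = #{d mod M : d = nu^2 mod M for some nu with (nu, M) = 1}."""
    return len({v * v % M for v in range(M) if gcd(v, M) == 1})

def c_N(N):
    """c_N = 3*gamma(4N)/(pi^2 N) * prod_{p | 2N} (1 - 1/p^2)^{-1}."""
    c = 3 * gamma(4 * N) / (pi ** 2 * N)
    for p in range(2, 2 * N + 1):
        # p prime dividing 2N (trial division is fine at this scale)
        if (2 * N) % p == 0 and all(p % q for q in range(2, p)):
            c /= 1 - 1 / p ** 2
    return c

print(gamma(4 * 1), gamma(4 * 26), round(c_N(26), 6))
```

For instance $\gamma(4)=1$ and $\gamma(104)=6$: modulo $104$ the squares of units form a set of six residues.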
\[iwa\] If $E$ is a rational elliptic curve of squarefree conductor $N$ and analytic rank $0$, and $F$ is a smooth function with compact support in $\mathbb{R}_+$ and strictly positive mean, then $$\sum_{d\in\mathcal{D}} L'(E\vert\mathbb{Q}\times\chi_d,1)F\left(\frac{\vert d\vert}{Y}\right)=\alpha_NY\log{Y}+\beta_NY+\mathcal{O}_\varepsilon\left(N^{\frac{23}{14}+\varepsilon}Y^{\frac{13}{14}+\varepsilon}\right)$$ for all $\varepsilon>0$, where $$\label{alphaN}
\alpha_N:=c_NL(1)\int_0^{+\infty}F(t)\mathrm{d}t\neq 0$$ and $$\beta_N:=c_N\int_0^{+\infty}F(t)\left(L^\prime(1)+L(1)\left(\log{\left(\frac{\sqrt{N}t}{2\pi}\right)}-\gamma\right)\right)\mathrm{d}t$$ with $$L(s):=\frac{L\left(\text{Sym}^2E,2s\right)}{\zeta^{(N)}(4s-2)}\left(\prod_{\substack{p\in\mathcal{P} \\
p\mid N}} \left(1-\frac{a_p}{p^s}\right)^{-1}\left(1-\frac{a_{p^2}}{p^{2s}}\right)\right)\mathcal{P}(s)$$ and $$\label{P}
\mathcal{P}(s):=\prod_{\substack{p\in\mathcal{P} \\
p\nmid 2N}} \left(\frac{1}{1+1/p}+\left(\frac{1}{1+p}\right)\left(\frac{1+p^{2-4s}-(a_p^2-2p)p^{-2s}}{1+p^{1-2s}}\right)\right).$$
In [@iwaniec], the function $L$ is defined by the Dirichlet series $$L(s)=\sum_{\substack{n=k\ell^2 \\
k\mid N^\infty \\
(\ell,N)=1}}\frac{b_n}{n^{s}}$$ with $$b_n:=a_n\prod_{\substack{p\in\mathcal{P} \\
p\mid n \\
p\nmid 2N}} \left(1+\frac{1}{p}\right)^{-1}$$ for every non-zero natural number $n$. This series can be rewritten in the form $$L(s)=\prod_{\substack{p\in\mathcal{P} \\
p\mid N}} \left(1-\frac{a_p}{p^s}\right)^{-1} \prod_{\substack{p\in\mathcal{P} \\
p\nmid 2N}} \left(1+\left(1+\frac{1}{p}\right)^{-1}\left(\sum_{i=1}^{\infty} \frac{a_{p^{2i}}}{p^{2is}}\right)\right) \prod_{\substack{p\in\mathcal{P} \\
p\mid(2,N-1)}} \left(1+\sum_{i=1}^{\infty} \frac{a_{p^{2i}}}{p^{2is}}\right)$$ which leads back to the expression for $L$ given in the theorem, in terms of the symmetric square of $E$ and the Euler product $\mathcal{P}$. Let us point out that the holomorphy of the function $L\left(\text{Sym}^2E,s\right)$ on the whole complex plane was proved by Shimura in [@shimura], and that the Euler product $\mathcal{P}(s)$ converges absolutely on $\Re{(s)}>\frac{3}{4}$ and defines a holomorphic function there.
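The local identity behind this rewriting can be checked numerically. For $p\nmid 2N$, the Hecke relation $a_{p^{k+1}}=a_pa_{p^k}-pa_{p^{k-1}}$ gives $\sum_{i\geqslant 0}a_{p^{2i}}u^i=(1+pu)/(1-(a_p^2-2p)u+p^2u^2)$ with $u=p^{-2s}$, and the local factor $1+(1+1/p)^{-1}\sum_{i\geqslant 1}a_{p^{2i}}p^{-2is}$ then agrees with the product of the local factors of $L(\text{Sym}^2E,2s)/\zeta^{(N)}(4s-2)$ and $\mathcal{P}(s)$ at $p$. A numerical sketch at a single prime (the inputs $p=3$, $a_p=1$, $s=1$ are arbitrary sample values, not taken from a specific curve):

```python
def lhs(p, ap, s, terms=100):
    """1 + (1 + 1/p)^(-1) * sum_{i>=1} a_{p^(2i)} p^(-2is), truncated."""
    a = [1, ap]  # a_{p^0}, a_{p^1}
    for _ in range(2 * terms):
        a.append(ap * a[-1] - p * a[-2])  # Hecke recurrence
    tail = sum(a[2 * i] * p ** (-2.0 * i * s) for i in range(1, terms))
    return 1 + tail / (1 + 1 / p)

def rhs(p, ap, s):
    """Local factor at p of L(Sym^2 E, 2s)/zeta^{(N)}(4s-2) times P_p(s)."""
    u = p ** (-2.0 * s)
    # Sym^2 local factor at a good prime, written via a_p^2 - 2p = alpha^2 + beta^2
    sym2 = 1 / ((1 - (ap * ap - 2 * p) * u + p * p * u * u) * (1 - p * u))
    zeta_p = 1 / (1 - p ** (2.0 - 4 * s))
    P_p = (1 / (1 + 1 / p)
           + (1 / (1 + p))
           * (1 + p ** (2.0 - 4 * s) - (ap * ap - 2 * p) * u)
           / (1 + p ** (1.0 - 2 * s)))
    return sym2 / zeta_p * P_p

print(lhs(3, 1, 1.0), rhs(3, 1, 1.0))  # both evaluate to 17/20 = 0.85
```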
The value of the function $L\left(\text{Sym}^2E,s\right)$ at the edge of the critical strip is related to the degree of the modular parametrisation $\Phi_{N,E}:X_0(N)\rightarrow E$ of $E$ (cf. [@watkins] (1-1)) by the following formula, analogous to Dirichlet's class number formula: $$\frac{L\left(\text{Sym}^2E,2\right)}{\pi\Omega_{E,N}}=\frac{\text{deg}(\Phi_{N,E})}{Nc_E(N)^2}$$ where $c_E(N)$ is the $\varGamma_0(N)$-Manin constant of $E$, which is a uniformly bounded integer ([@Ed]). Here, the fact that $N$ is squarefree is used twice:
- the motivic and analytic $L$-functions of the symmetric square of $E$ coincide, since there are no correction terms at the primes whose square divides the conductor $N$,
- when $E$ is a strong $X_0(N)$-Weil curve and $N$ is odd, the Manin constant equals $\pm 1$ by the work of A. Abbes and E. Ullmo ([@AbUl]), while J. Manin ([@Ma]) conjectured that this constant equals $\pm 1$ for every strong $X_0(N)$-Weil curve[^2].
It appears that a few small typographical errors in $\alpha_N$ (and in fact in $c_N$ and $L$) slipped into [@iwaniec]. The value given here is corrected.
In [@iwaniec], the dependence of the error term on the conductor $N$ of the curve is not made explicit. However, it suffices to go through the various upper bounds again in order to recover it.
\[rang0\] If $E$ is a rational elliptic curve of squarefree conductor $N$ and analytic rank $0$, then $$\begin{gathered}
\label{eqrg0}
\sum_{\substack{d\in\mathcal{D} \\
\vert d\vert\leqslant Y}}\hat{h}_{\mathbb{K}_d}(\emph{Tr}_d)=C_{\emph{Tr}}^{(0)}Y^{\frac{3}{2}}\log{Y}+\frac{C_{\emph{Tr}}^{(0)}}{2}\log{(N)}Y^{\frac{3}{2}} \\
+\frac{L(E\vert\mathbb{Q},1)c_N}{3\Omega_{E,N}}\left(L^\prime(1)-L(1)\left(\frac{2}{3}+\log{(2\pi)}+\gamma\right)\right)Y^{\frac{3}{2}}+\mathcal{O}_\varepsilon\left(\frac{L(E\vert\mathbb{Q},1)}{\Omega_{E,N}}N^{\frac{23}{14}+\varepsilon}Y^{\frac{20}{14}+\varepsilon}\right)\end{gathered}$$ for all $\varepsilon>0$, where $C_{\emph{Tr}}^{(0)}$ is the constant defined by $$C_{\emph{Tr}}^{(0)}:=\frac{2}{\pi}c_N\mathcal{P}(1)L(E\vert\mathbb{Q},1)\frac{L(\text{Sym}^2 E,2)}{\pi\Omega_{E,N}} L_{E}$$ where $L_{E}$ denotes a product of local factors corresponding to the Euler factors of $L(E\vert\mathbb{Q},s)$ and of $L(\text{Sym}^2(E),s)$ at the primes dividing the conductor: $$\label{LE}
L_{E}:=\prod_{\substack{p\in\mathcal{P} \\
p\mid N}} \left(1-\frac{a_p}{p}\right)^{-1}\left(1-\frac{1}{p^2}\right)^{-1}\left(1-\frac{a_{p^2}}{p^2}\right).$$
[**Proof of Corollary \[rang0\].**]{} It suffices to apply the Gross–Zagier formula and to take for $F$ a smooth approximation of the function equal to $\frac{3}{2}\sqrt{t}$ on $[0,1]$ and to $0$ outside this interval. The corollary then follows from .
$\blacksquare$
For fixed conductor $N$, the main term in is $C_{\emph{Tr}}^{(0)}Y^{\frac{3}{2}}\log Y$ with $$C_{\emph{Tr}}^{(0)}=\left(\frac{6}{\pi^3c_E(N)^2}\mathcal{P}(1)\prod_{\substack{p\in \mathcal{P} \\
p \mid 2N}}\left(1-\frac{1}{p^2}\right)^{-1}\right)\times L(E\vert\mathbb{Q},1)\,L_{E}\times\frac{\gamma(4N)}{N^2}\text{deg}(\Phi_{N,E})$$ and is therefore proportional to the degree of the modular parametrisation. On the other hand, if $N=Y^a$ with $0<a<\frac{1}{23}$, then the second term of is of the same order of magnitude as the first, and the main term becomes $$\left(1+\frac{a}{2}\right)C_{\emph{Tr}}^{(0)}Y^{\frac{3}{2}}\log{Y}.$$ This is the main reason why we made the dependence of the error term on the conductor $N$ of the curve explicit. It would also be interesting to study the influence of extremal values of $L(E\vert\mathbb{Q},1)$ on the average of the heights of the traces of Heegner points. Unfortunately, this does not seem to be numerically verifiable, given the computation time required by the relevant algorithms.
The Euler product $\mathcal{P}(s)$ varies little. Indeed, expanding the local factor of $\mathcal{P}(1)$ at the prime $p$, one obtains $$\mathcal{P}_{p}(1)=1-\frac{a_p^2-2p}{p^3}+\mathcal{O}\left(\frac{1}{p^2}\right)$$ whence the existence of an absolute constant $C>0$, not depending on the curve $E$ under consideration, such that $$C\prod_{p\in\mathcal{P}}\left(1-\frac{2}{p^2}\right)<\mathcal{P}(1)<C\prod_{p\in\mathcal{P}}\left(1+\frac{2}{p^2}\right).$$ Finally, the important quantities are the degree of the modular parametrisation, the value of the $L$-function of the curve at the critical point $1$, and the conductor, which enters through $\gamma(4N)/N^2$.
![Average height of the traces for the curves $26\text{A}$ and $26\text{B}$: empirical and theoretical.[]{data-label="26A-traces"}](26-traces.eps){height="10cm"}
For fixed conductor and degree, the small primes dividing the conductor play a decisive role, and the heights of the traces then tend to be larger or smaller according to whether $E$ has *split* or *non-split multiplicative* reduction at these primes. This influence is clearly visible in the case of the two curves of conductor $26$ (of analytic rank $0$), whose modular parametrisations have the same degree $2$, while the traces are almost three times larger on $26\text{B}$ than on $26\text{A}$. Analytically, this is explained by the fact that $26$ is divisible by $2$ and that $26\text{A}$ has split multiplicative reduction at $2$ whereas $26\text{B}$ has non-split multiplicative reduction. This induces a factor $3$ between the products, over the primes $p$ dividing the conductor of these curves, of the Euler factors of their $L$-functions at $p$. Figure \[26A-traces\] illustrates this difference in behaviour. One also sees that the curve representing the sum of the heights of the traces is very close to the theoretical curve, even though it is rather irregular. Recall that $0.0018$ and $0.0050$ are the numerical values of the constant $C_{\text{Tr}}^{(0)}$ appearing in Corollary \[rang0\] for the curves $26\text{A}$ and $26\text{B}$.
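The factor of $3$ can be checked directly on the local factor of $L_E$ at $p=2$: for multiplicative reduction one has $a_p=+1$ in the split case and $a_p=-1$ in the non-split case, with $a_{p^2}=a_p^2$. A minimal sketch (illustrative only; which value belongs to which of $26\text{A}$ and $26\text{B}$ depends on their $a_2$, which we do not recompute here):

```python
from fractions import Fraction

def local_factor(p, ap):
    """Local factor of L_E at a prime p of multiplicative reduction:
    (1 - a_p/p)^{-1} (1 - 1/p^2)^{-1} (1 - a_{p^2}/p^2), with a_{p^2} = a_p^2."""
    p, ap = Fraction(p), Fraction(ap)
    return (1 - ap / p) ** -1 * (1 - 1 / p ** 2) ** -1 * (1 - ap ** 2 / p ** 2)

split, nonsplit = local_factor(2, +1), local_factor(2, -1)
print(split, nonsplit, split / nonsplit)  # 2 2/3 3
```

Exact rational arithmetic via `Fraction` makes the ratio of the two local factors exactly $3$.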
Curves of analytic rank $1$
------------------------------
When one considers a curve of rank $1$, one is led to estimate the average of the values at $1$ of the twisted $L$-functions, and not of their derivatives.
\[iwa1\] If $E$ is a rational elliptic curve of squarefree conductor $N$ and analytic rank $1$, and $F$ is a smooth function with compact support in $\mathbb{R}_+$ and strictly positive mean, then $$\sum_{d\in\mathcal{D}} L(E\vert\mathbb{Q}\times\chi_d,1)F\left(\frac{\vert d\vert}{Y}\right)=\alpha_NY+\mathcal{O}_\varepsilon\left(N^{\frac{23}{14}+\varepsilon}Y^{\frac{13}{14}+\varepsilon}\right)$$ for all $\varepsilon>0$, where $\alpha_N$ is defined in .
[**Sketch of proof of Theorem \[iwa1\].**]{} It is not difficult to adapt the proof of [@iwaniec] to this case. Keeping the notation of that article, it suffices to replace the function $V(X)$ defined in Section 4, page $369$, by the function $$\widetilde{V}(X)=e^{-X}.$$ One then has[^3] $$\mathcal{A}(X,\chi_d)=\frac{1}{2i\pi}\int_{(3/4)} L(s+1,E,\chi_d)\Gamma(s)\left(\frac{X}{2\pi}\right)^s ds.$$ The bound $\mathcal{A}(X,\chi_d)\ll \sqrt{X}$, which follows from Hölder's inequality and an estimate of the $a_n$, still holds, so the successive upper bounds carried out in the proof cause no difficulty. The only notable difference occurs on page 374, in the computation of $\mathcal{B}(X)$. One then has $$\mathrm{res}_{s=0}\, L(s+1)\Gamma(s)\left(\frac{X}{2\pi}\right)^s=L(1)$$ and no term in $\log{X}$ appears.
$\blacksquare$
As in the rank $0$ case, we deduce the following corollary.
\[rang1\] If $E$ is a rational elliptic curve of squarefree conductor $N$ and analytic rank $1$, then $$\sum_{\substack{d\in\mathcal{D} \\
\vert d\vert\leqslant Y}}\hat{h}_{\mathbb{K}_d}(\emph{Tr}_d)=C_{\emph{Tr}}^{(1)}Y^{3/2}+\mathcal{O}_\varepsilon\left(\frac{L(E\vert\mathbb{Q},1)}{\Omega_{E,N}}N^{\frac{23}{14}+\varepsilon}Y^{\frac{20}{14}+\varepsilon}\right)$$ for all $\varepsilon>0$, where $C_{\emph{Tr}}^{(1)}$ is the constant defined by $$C_{\emph{Tr}}^{(1)}:=\frac{2}{\pi}c_N\mathcal{P}(1)L^\prime(E\vert\mathbb{Q},1)\frac{L(\text{Sym}^2E,2s)\big|_{s=1}}{\pi\Omega_{E,N}}L_E=\frac{2}{\pi}c_N\mathcal{P}(1)L^\prime(E\vert\mathbb{Q},1)\frac{L(\text{Sym}^2E,2)}{\pi\Omega_{E,N}}L_E$$ where $c_N$ is defined in , $L_E$ in and $\mathcal{P}$ in .
For fixed conductor $N$, the main term in is $C_{\emph{Tr}}^{(1)}Y^{\frac{3}{2}}$ with $$C_{\emph{Tr}}^{(1)}=\left(\frac{6}{\pi^3c_E(N)^2}\mathcal{P}(1)\prod_{\substack{p\in \mathcal{P} \\
p \mid 2N}}\left(1-\frac{1}{p^2}\right)^{-1}\right)\times L^\prime(E\vert\mathbb{Q},1)\,L_{E}\times\frac{\gamma(4N)}{N^2}\text{deg}(\Phi_{N,E})$$ and is therefore proportional to the degree of the modular parametrisation.
It is intuitively surprising that the average traces of Heegner points on an elliptic curve $E$ are asymptotically larger, by a logarithmic factor, when the rank of the elliptic curve is minimal.
The two elliptic curves of conductor $91$ both have rank $1$, and Figure \[91AB-traces\] displays the heights of the traces Tr$_d$ on these two curves, together with the theoretical curve given by the corollary above. One observes that the curves are much more irregular than in the rank $0$ case (Figure \[26A-traces\]), even though they follow the theoretical curve very closely. Figure \[37AB-traces\] illustrates the different growth rates of the heights of the traces on the curves $37\text{A}$ (rank $1$) and $37\text{B}$ (rank $0$). The behaviour is very irregular. This is partly due to the division by $Y^{3/2}$, which makes the irregularities more apparent than in Figure \[26A-traces\]. Moreover, for the rank $1$ curve $37\text{A}$, the trace frequently vanishes, which “breaks the average”. It is conjectured that the proportion of vanishing of $L(E\vert\mathbb{Q}\times\chi_d,1)$ (which corresponds to the cases of vanishing trace, by the Gross–Zagier formula and the Birch and Swinnerton-Dyer conjecture) tends to $0$ as the discriminant tends to infinity[^4], but this is not visible on the range of discriminants studied here.
![Average height of the traces for the curves $91\text{A}$ and $91\text{B}$.[]{data-label="91AB-traces"}](91AB-traces.eps){height="10cm"}
![Average height of the traces for the curves $37\text{A}$ and $37\text{B}$.[]{data-label="37AB-traces"}](37AB-traces.eps){height="10cm"}
Asymptotic estimate of the height of Heegner points
===========================================================
We prove an asymptotic formula for the heights of the Heegner points P$_d$, similar to the one we gave for the traces. Let $E$ be an elliptic curve defined over $\mathbb{Q}$, of conductor $N$, and let $L(E\vert{\mathbb{Q}},s):=\sum_{n\geqslant 1}a_nn^{-s}$ be its $L$-function (of arbitrary analytic rank). We are interested in the average value of the derivatives at $1$ of the Dirichlet series $L_d(E,s)$ defined by .
\[traceana\] If $E$ is a rational elliptic curve of squarefree conductor $N$ and arbitrary analytic rank, and $F$ is a smooth function with compact support in $\mathbb{R}_+$ and strictly positive mean, then $$\sum_{d\in\mathcal{D}}L^\prime_d(E,1)F\left(\frac{\vert d\vert}{Y}\right)=\widetilde{\alpha_N} Y\log{Y}+\widetilde{\beta_N}Y+\mathsf{Error}+\mathcal{O}_\varepsilon\left(N^{\frac{15}{4}+\varepsilon}Y^{\frac{19}{20}+\varepsilon}\right)$$ where $$\label{conjerror}
\mathsf{Error}=\mathcal{O}_\varepsilon\left(NY\left(\log{(NY)}\right)^{\frac{1}{2}+\varepsilon}\right)$$ for all $\varepsilon>0$, and where $$\label{defbeta}
\widetilde{\alpha_N}:=c_N\widetilde{L}(1)\int_0^{+\infty}F(t)\mathrm{d}t\neq 0$$ and $$\widetilde{\beta_N}:=c_N\int_0^{+\infty}F(t)\left(\widetilde{L}^\prime(1)+\widetilde{L}(1)\left(\log{\left(\frac{Nt}{4\pi^2}\right)}-2\gamma\right)\right)\mathrm{d}t$$ where the constant $c_N$ is defined in , with $$\widetilde{L}(s):=\frac{L(\text{Sym}^2E,2s)}{\zeta^{(N)}(2s)}\widetilde{\mathcal{P}}(s)\times\begin{cases}
\frac{4}{3} & \text{ if $N$ is odd,} \\
1 & \text{ otherwise}
\end{cases}$$ and $$\widetilde{\mathcal{P}}(s):=\prod_{\substack{p\in\mathcal{P} \\
p\nmid 2N}}\left(1+\left(1+\frac{1}{p}\right)^{-1}(p^{4s-2}-1)^{-1}\right).$$
\[rq\] For fixed conductor $N$, it seems that one obtains an asymptotic expansion of the first moment in $Y$ with a single term, and not with two terms as in Theorem \[iwa\] of H. Iwaniec. However, the numerical results described at the end of Section \[5\] and our analytic intuition lead us to conjecture that $$\mathsf{Error}=o_\varepsilon(NY).$$ To prove this, one would in particular need to determine the asymptotic behaviour of averages of the form $$\sum_{\substack{1\leqslant u\leqslant U \\
1\leqslant v\leqslant V}}\sum_{d\in\mathcal{D}}a_{u,v}\chi_d(u)r_d(v)F\left(\frac{\vert d\vert}{Y}\right)$$ for all strictly positive real numbers $U$, $V$ and every sequence of complex numbers $\left(a_{u,v}\right)_{\substack{1\leqslant u\leqslant U \\
1\leqslant v\leqslant V}}$ (see also page ). The authors plan to study this type of average, which is in fact a special case of much more general quantities, in the near future. Moreover, there is no doubt that the dependence on $N$ in $\mathcal{O}_\varepsilon\left(N^{\frac{15}{4}+\varepsilon}Y^{\frac{19}{20}+\varepsilon}\right)$ could be improved with more care, but at most polynomial growth in the conductor suffices for our purposes.
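The representation numbers $r_d(n)$ appearing in these averages count, for $d\equiv 1\bmod 4$, the solutions of $u^2+\vert d\vert v^2=4n$ with $(u,v)\in\left(\mathbb{N}^*\times\mathbb{Z}\right)\cup\left(\{0\}\times\mathbb{N}\right)$, as recalled in the proof below. A brute-force sketch, usable only for small parameters:

```python
from math import isqrt

def r_d(d, n):
    """Count (u, v) in (N* x Z) u ({0} x N) with u^2 + |d| v^2 = 4n,
    for d < 0 and d = 1 mod 4. Brute force over u."""
    assert d < 0 and d % 4 == 1
    D, target, count = -d, 4 * n, 0
    for u in range(isqrt(target) + 1):
        rem = target - u * u
        if rem % D:
            continue
        w = rem // D
        v = isqrt(w)
        if v * v != w:
            continue
        if u == 0:
            count += 1              # (0, v) with v >= 0 counted once
        else:
            count += 2 if v else 1  # (u, +-v) for v != 0, (u, 0) once
    return count

print(r_d(-7, 4))  # 3: n = 4 is a square, so (2*sqrt(n), 0) contributes 1 = r_d(n) - r'_d(n)
```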
[**Proof of Theorem \[traceana\].**]{} B.H. Gross and D. Zagier ([@GZ]) proved that the Dirichlet series $L_d(E,s)$, defined a priori on $\Re{(s)}>\frac{3}{2}$, admits a holomorphic continuation to $\mathbb{C}$ and satisfies the functional equation $$\forall s\in\mathbb{C}, \quad \Lambda_d(E,s)=-\chi_d(N)\Lambda_d(E,2-s)$$ where $\Lambda_d(E,s):=(N\vert d\vert)^s\left((2\pi)^{-s}\Gamma(s)\right)^2L_d(E,s)$ is the completed Dirichlet series. Note that since $d$ is a square modulo $N$, the sign of the functional equation equals $-1$, whence $L_d(E,1)=0$. This will allow us to express $L^\prime_d(E,1)$ in terms of two exponentially fast converging sums, following a by now classical analytic procedure (cf. Theorem 5.3 of [@IwKo] for more details). For $X>0$, set $$V(X):=\frac{1}{2i\pi}\int_{(3/4)} \Gamma(s)^2 X^{-s} ds,$$ and $$\mathcal{A}_d(E,X):=\frac{1}{2i\pi}\int_{(3/4)}L_d(E,s+1)\Gamma(s)^2\left(\frac{X}{4\pi^2}\right)^s ds.$$ The expansion of $L_d(E,s)$ as a Dirichlet series converging absolutely on $\Re{(s)}>\frac{3}{2}$ ensures that $$\mathcal{A}_d(E,X)=\sum_{n=1}^{\infty} \frac{a_n r_d(n)}{n}\sum_{(m,N)=1} \chi_d(m)\frac{1}{m} V\left(\frac{4\pi^2 nm^2}{X}\right).$$ We move the line of integration to $\Re(s)=-3/4$, crossing a single pole at $s=0$ with residue equal to $L^\prime_d(E,1)$, and then return to $\Re(s)=3/4$ via the change of variables $s\mapsto -s$. The functional equation then implies that $$L^\prime_d(E,1)=\mathcal{A}_d(E,X)+\mathcal{A}_d\left(E,\frac{(Nd)^2}{X}\right),$$ and in particular that $$L^\prime_d(E,1)=2\mathcal{A}_d(E,\vert d\vert N).$$ Let us bound $V(X)$ for $X>0$ as follows: $$\begin{aligned}
\label{V}
V(X)& =\frac{1}{2i\pi}\int_{u=0}^{\infty}\frac{e^{-u}}{u} \int_{v=0}^{\infty} e^{-v}\left( \int_{(3/4)} \left(\frac{uv}{X}\right)^s \frac{1}{s}\, ds\right) dv\, du \nonumber \\
& \ll \int_{0}^{\infty} \frac{e^{-(u+X/u)}}{u}\,du \nonumber \\
& \ll X^{-1/4}\exp{\left(-2\sqrt{X}\right)}\end{aligned}$$ and in fact $X^jV^{(j)}(X)\ll_jX^{-1/4}\exp{\left(-2\sqrt{X}\right)}$ for every natural number $j$. Set $$S_N(Y):=\sum_{d\in\mathcal{D}}L^\prime_d(E,1)F\left(\frac{\vert d\vert}{Y}\right).$$ Since $d$ lies in $\mathcal{D}$, $d$ is a square modulo $4$ and is coprime to $4$, so $d$ is congruent to $1$ modulo $4$. Thus $\mathcal{O}_d=\mathbb{Z}+\frac{1+\sqrt{d}}{2}\mathbb{Z}$, and an elementary computation shows that $$r_d(n)=\#\left\{(u,v)\in\left({\mathbb{N}}^{*}\times{\mathbb{Z}}\right)\cup\left(\{0\}\times{\mathbb{N}}\right), u^2+\vert d\vert v^2=4n\right\}.$$ Observe that if $n$ is a square, then $(2\sqrt{n},0)$ is a solution of the above equation while there are no solutions of the form $(0,*)$, which proves that $$r_d(n)=1+\#\left\{v\in{\mathbb{Z}}^*, \vert d\vert v^2=4n\right\}+\#\left\{(u,v)\in\left({\mathbb{N}}^*\times{\mathbb{Z}}^*\right), u^2+\vert d\vert v^2=4n\right\}:=1+r_d^\prime(n)$$ and that $$\sum_{\substack{d\in \mathcal{D} \\
Y\ll\vert d\vert\ll Y}}r_d(n)\geqslant\#\left\{d\in\mathcal{D}, Y\ll\vert d\vert \ll Y\right\}\sim_{Y\rightarrow +\infty}CY$$ for an absolute constant $C>0$. If $n$ is not a square, then $r_d(n)=r_d^\prime(n)$. In either case, if $(u,v)$ is a solution contributing to $r_d^\prime(n)$ for some $d$ in $\mathcal{D}$ of absolute value at most $Y$, then $u\leqslant 2\sqrt{n}$, and to each such $u$ correspond at most two pairs $(d,v)$ (since $d$ is assumed squarefree), whence $$\label{rdprime}
\sum_{\substack{d\in\mathcal{D} \\
Y\ll\vert d\vert\ll Y}}r_d^\prime(n)\leqslant 4\sqrt{n}.$$ We then write[^5] $S_N(Y):=\mathsf{TP}_1+\mathsf{Err}_1$ where $$\mathsf{TP}_1:=2\sum_{n\geqslant 1}\frac{a_{n^2}}{n^2}\sum_{(m,N)=1} \frac{1}{m} \sum_{d\in \mathcal{D}}\chi_d(m)V\left(\frac{4\pi^2 n^2 m^2}{N\vert d\vert}\right)F\left(\frac{\vert d\vert}{Y}\right)$$ and $$\label{err1}
\mathsf{Err}_1:=2\sum_{d\in \mathcal{D}}\sum_{n\geqslant 1}\frac{a_{n}r'_d(n)}{n}\sum_{(m,N)=1}\frac{1}{m}\chi_d(m)V\left(\frac{4\pi^2 n m^2}{N\vert d\vert}\right)F\left(\frac{\vert d\vert}{Y}\right).$$ [**Estimation of the error term $\mathbf{\mathsf{Err}_1}$.**]{} We split $\mathsf{Err}_1$ as follows: $$\mathsf{Err}_1=2\sum_{(m,N)=1}\sum_{1\leqslant n\leqslant\frac{NY\psi(NY)}{m^2}}\cdots+2\sum_{(m,N)=1}\sum_{n>\frac{NY\psi(NY)}{m^2}}\cdots:=\mathsf{Error}+\mathsf{Err}_3$$ for any positive function $\psi$ tending to infinity at $+\infty$. The estimates and ensure that $$\mathsf{Error}\ll(NY)^{1/4}\sum_{1\leqslant m\leqslant\sqrt{NY\psi(NY)}}\frac{1}{m^{3/2}}\sum_{1\leqslant n\leqslant\frac{NY\psi(NY)}{m^2}}\frac{\vert a_n\vert}{n^{\frac{3}{4}}}\exp{\left(-4\pi\frac{m}{\sqrt{NY}}\sqrt{n}\right)}.$$ The Cauchy–Schwarz inequality implies that the square of the sum over $n$ is bounded by $$\left(\sum_{1\leqslant n\leqslant\frac{NY\psi(NY)}{m^2}}\frac{a_n^2}{n^2}\right)\left(\sum_{1\leqslant n\leqslant\frac{NY\psi(NY)}{m^2}}\sqrt{n}\exp{\left(-8\pi\frac{m}{\sqrt{NY}}\sqrt{n}\right)}\right).$$ The first sum is $\mathcal{O}\left(\log{(NY\psi(NY))}\right)$, while the second is trivially at most $$\int_{1}^{\frac{NY\psi(NY)}{m^2}}\sqrt{t}\exp{\left(-8\pi\frac{m}{\sqrt{NY}}\sqrt{t}\right)}\mathrm{d}t\ll\left(\frac{NY\psi(NY)}{m^2}\right)^{\frac{3}{2}}.$$ Thus we have proved that $\mathsf{Error}\ll NY(\psi(NY))^{\frac{3}{4}}\left(\log{(NY\psi(NY))}\right)^{\frac{1}{2}}$. In the same way, $$\mathsf{Err}_3\ll(NY)^{1/4}\sum_{m\geqslant 1}\frac{1}{m^{3/2}}\left(\sum_{n>\frac{NY\psi(NY)}{m^2}}\frac{a_n^2}{n^{2+\varepsilon}}\right)^{\frac{1}{2}}\left(\sum_{n>\frac{NY\psi(NY)}{m^2}}n^{\frac{1}{2}+\varepsilon}\exp{\left(-8\pi\frac{m}{\sqrt{NY}}\sqrt{n}\right)}\right)^{\frac{1}{2}}$$ for all $\varepsilon>0$. An integration by parts ensures that $$\mathsf{Err}_3\ll(NY)^{1+\varepsilon}(\psi(NY))^{\frac{1}{2}+\varepsilon}\exp{\left(-\frac{1}{2}\sqrt{\psi(NY)}\right)}$$ and we then choose $\psi(x):=(\log{x})^a$ with $2\varepsilon<a<\frac{2}{3}$, so that $\mathsf{Err}_3=o(\mathsf{Error})$ and $\mathsf{Error}\ll (NY)(\log{(NY)})^{\frac{3a}{4}+\frac{1}{2}}=o((NY)\log{(NY)})$.

[**Contribution of the main term $\mathbf{\mathsf{TP}_1}$.**]{} Let us determine the asymptotic behaviour of $\mathsf{TP}_1$ by applying a method developed in [@iwaniec]:
- the condition that $d$ be squarefree is removed by inserting $\sum_{a^2\mid d}\mu(a)$, and the sum is then split according to the size of the divisors $a$ of $d$ ($a\leqslant A$ and $a>A$), going back to squarefree discriminants in the case of the large divisors;
- for every integer $m=m_1m_2^2$ with $(m_1m_2,N)=1$ and $m_1$ squarefree, note that $\chi_d(m)=\chi_d(m_1)$ if $(m_2,d)=1$ (and $0$ otherwise), and that the Fourier expansion of the character $\chi_.(m_1)$ in terms of additive characters of modulus $m_1$ reads $$\chi_d(m_1)=\frac{\overline{\varepsilon_{m_1}}}{\sqrt{m_1}}\sum_{0\leqslant\vert r\vert<\frac{m_1}{2}}\chi_{Nr}(m_1)e\left(\frac{\overline{N}rd}{m_1}\right)$$ where $\overline{N}$ is the inverse of $N$ modulo $m_1$ and $\varepsilon_{m_1}$ is the sign of the Gauss sum of $\chi_.(m_1)$.
La contribution principale provient alors du terme $r=0$ pour lequel $\chi_0(m_1)$ vaut $0$ si $m_1>1$ et $1$ sinon. En résumé, $$\mathsf{TP}_1=\mathsf{TP}_2+\mathsf{Err}_4+\mathsf{Err}_5$$ où $$\begin{aligned}
\mathsf{TP}_2 & := & 2\sum_{n\leqslant 1}{{\ensuremath{\mathchoice {\dfrac{a_{n^2}}{n^2}}
{\dfrac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
}}}\sum_{\substack{a\leqslant A \\
(a,4N)=1}}\mu(a)\sum_{(m_2,aN)=1}{{\ensuremath{\mathchoice {\dfrac{1}{m_2^2}}
{\dfrac{1}{m_2^2}}
{\frac{1}{m_2^2}}
{\frac{1}{m_2^2}}
}}}\sum_{q\mid m_2}\mu(q)\sum_{\substack{qd\in\mathcal{D}^\prime \\
(d,m_2)=1}}V{\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
{\dfrac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
{\frac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
{\frac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
}}}\right)}}F\left(\frac{a^2\vert d\vert q}{Y}\right), \\
\mathsf{Err}_5 & := & 2\sum_{n\leqslant 1}{{\ensuremath{\mathchoice {\dfrac{a_{n^2}}{n^2}}
{\dfrac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
}}}\sum_{(b,4N)=1}\sum_{\substack{a\mid b \\
a>A}}\mu(a)\sum_{(m,N)=1} {{\ensuremath{\mathchoice {\dfrac{1}{m}}
{\dfrac{1}{m}}
{\frac{1}{m}}
{\frac{1}{m}}
}}} \sum_{d\in \mathcal{D}}\chi_{b^2d}(m)V{\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{4\pi^2 n^2 m^2}{Nb^2\vert d\vert}}
{\dfrac{4\pi^2 n^2 m^2}{Nb^2\vert d\vert}}
{\frac{4\pi^2 n^2 m^2}{Nb^2\vert d\vert}}
{\frac{4\pi^2 n^2 m^2}{Nb^2\vert d\vert}}
}}}\right)}}F\left(\frac{b^2\vert d\vert}{Y}\right)\end{aligned}$$ et $$\begin{gathered}
\mathsf{Err}_4:=2\sum_{n\leqslant 1}{{\ensuremath{\mathchoice {\dfrac{a_{n^2}}{n^2}}
{\dfrac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
}}}\sum_{\substack{a\leqslant A \\
(a,4N)=1}}\mu(a)\sum_{\substack{m=m_1m_2^2 \\
(m,aN)=1}}{{\ensuremath{\mathchoice {\dfrac{\mu^2(m_1)}{m}}
{\dfrac{\mu^2(m_1)}{m}}
{\frac{\mu^2(m_1)}{m}}
{\frac{\mu^2(m_1)}{m}}
}}}\sum_{q\mid m_2}\mu(q)\sum_{\substack{qd\in\mathcal{D}^\prime \\
(d,m_2)=1}} \\
\frac{\overline{\varepsilon_{m_1}}}{\sqrt{m_1}}\sum_{1\leqslant\vert r\vert<\frac{m_1}{2}}\chi_{Nrq}(m_1)e\left(\frac{\overline{N}rd}{m_1}\right)V{\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{4\pi^2 n^2 m^2}{Na^2\vert d\vert q}}
{\dfrac{4\pi^2 n^2 m^2}{Na^2\vert d\vert q}}
{\frac{4\pi^2 n^2 m^2}{Na^2\vert d\vert q}}
{\frac{4\pi^2 n^2 m^2}{Na^2\vert d\vert q}}
}}}\right)}}F\left(\frac{a^2\vert d\vert q}{Y}\right)\end{gathered}$$ avec $$\mathcal{D}^\prime:= {\mathnormal}{\{d\in\mathbb{Z}_-^*, d\equiv \nu^2 \mod 4N, (\nu,4N)=1\}}.$$ [**Estimation du terme d’erreur $\mathbf{\mathsf{Err}_5}$.**]{} Pour commencer, l’inégalité de Hölder implique que $$\mathsf{Err}_5\ll\sum_{n\ll\left(NY\right)^{\frac{1}{2}}}{{\ensuremath{\mathchoice {\dfrac{\left\vert a_{n^2}\right\vert}{n^2}}
{\dfrac{\left\vert a_{n^2}\right\vert}{n^2}}
{\frac{\left\vert a_{n^2}\right\vert}{n^2}}
{\frac{\left\vert a_{n^2}\right\vert}{n^2}}
}}}\sum_{\substack{b\geqslant 1 \\
a\mid b \\
a>A}}\left(\sum_{\substack{d\in \mathcal{D} \\
\vert d\vert\ll\frac{Y}{b^2}}}1\right)^{\frac{3}{4}}\left(\sum_{\substack{d\in \mathcal{D} \\
\vert d\vert\ll\frac{Y}{b^2}}}\chi_{b^2d}(m)\left\vert\sum_{m\ll\left(\frac{NY}{n^2}\right)^{\frac{1}{2}}} {{\ensuremath{\mathchoice {\dfrac{1}{m}}
{\dfrac{1}{m}}
{\frac{1}{m}}
{\frac{1}{m}}
}}}\right\vert^4\right)^{\frac{1}{4}}$$ d’où trivialement $$\mathsf{Err}_5\ll Y^{\frac{3}{4}}\sum_{n\ll\left(NY\right)^{\frac{1}{2}}}{{\ensuremath{\mathchoice {\dfrac{\left\vert a_{n^2}\right\vert}{n^2}}
{\dfrac{\left\vert a_{n^2}\right\vert}{n^2}}
{\frac{\left\vert a_{n^2}\right\vert}{n^2}}
{\frac{\left\vert a_{n^2}\right\vert}{n^2}}
}}}\sum_{\substack{b\geqslant 1 \\
a\mid b \\
a>A}}\frac{1}{b^{\frac{3}{2}}}\left(\sum_{\substack{d\in \mathcal{D} \\
\vert d\vert\ll\frac{Y}{b^2}}}\chi_{b^2d}(m)\left\vert\sum_{m\ll\left(\frac{NY}{n^2}\right)^{\frac{1}{2}}} {{\ensuremath{\mathchoice {\dfrac{1}{m}}
{\dfrac{1}{m}}
{\frac{1}{m}}
{\frac{1}{m}}
}}}\right\vert^2\right)^{\frac{1}{2}}.$$ L’inégalité du grand crible ([@Bo]) pour les caractères réels assure alors que $$\mathsf{Err}_5\ll_\varepsilon \frac{Y^{\frac{5}{4}+\varepsilon}}{A^{\frac{3}{2}}}+\frac{N^{\frac{1}{4}+\varepsilon}Y^{1+\varepsilon}}{A^{\frac{1}{2}}}$$ pour tout $\varepsilon>0$.

[**Estimation du terme d’erreur $\mathbf{\mathsf{Err}_4}$.**]{} Posons $\Delta:=\inf{\left(\frac{1}{2},\frac{a^2q}{Y^{1-\varepsilon}}\right)}$ pour tout nombre réel $\varepsilon>0$ et découpons $\mathbf{\mathsf{Err}_4}$ selon que la sommation en $r$ est restreinte par $$\begin{aligned}
1\leqslant\vert r\vert<\Delta m_1 & \rightsquigarrow & \mathsf{Err}_6, \\
\Delta m_1\leqslant\vert r\vert<\frac{m_1}{2} & \rightsquigarrow & \mathsf{Err}_7.\end{aligned}$$ On estime $\mathsf{Err}_7$ en bornant la somme sur les discriminants grâce au lemme 2 page 372 de [@iwaniec] puis trivialement la somme en $n$ et $m$, ce qui entraîne que $$\mathsf{Err}_7\ll_\varepsilon\gamma(4N)Y^\varepsilon N^{\frac{5}{4}+\varepsilon}\frac{ \inf{(A,Y^{\frac{1-\varepsilon}{2}}) }}{Y^{\varepsilon-\frac{1}{4}}}+\gamma(4N)Y^\varepsilon N^{\frac{5}{4}+\varepsilon}\frac{A^3}{Y^{\frac{3}{4}}}$$ où $\gamma(4N)$ est le cardinal de l’ensemble des classes d’équivalence de $\mathcal{D}^\prime$ modulo $4N$. On estime $\mathsf{Err}_6$ de façon triviale par $$\mathsf{Err}_6\ll_\varepsilon N^{\frac{3}{8}+\varepsilon}Y^{\frac{1}{2}+\varepsilon}A^{\frac{3}{2}}.$$

[**Contribution du terme principal $\mathbf{\mathsf{TP}_2}$.**]{} Intéressons-nous au terme principal $\mathsf{TP}_2$ et plus précisément à la somme sur les discriminants qui y intervient. Pour cela, on note $\mathcal{D}^\prime(4N)$ l’ensemble des classes d’équivalence de $\mathcal{D}^\prime$ modulo $4N$ et on se souvient que $\#\mathcal{D}^\prime(4N)=\gamma(4N)$. La formule de Poisson assure que $$\begin{gathered}
\sum_{qd\in\mathcal{D}^\prime}V{\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
{\dfrac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
{\frac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
{\frac{4\pi^2 n^2 m_2^4}{Na^2\vert d\vert q}}
}}}\right)}}F\left(\frac{a^2\vert d\vert q}{Y}\right)=\frac{Y}{4Na^2q}\sum_{[d_0]\in\mathcal{D}^\prime(4N)}\sum_{\ell\in\mathbb{Z}} \\
\times\int_{\mathbb{R}}V{\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{4\pi^2 n^2 m_2^4}{NY\left\vert \frac{a^2qd_0}{Y}+t\right\vert}}
{\dfrac{4\pi^2 n^2 m_2^4}{NY\left\vert \frac{a^2qd_0}{Y}+t\right\vert}}
{\frac{4\pi^2 n^2 m_2^4}{NY\left\vert \frac{a^2qd_0}{Y}+t\right\vert}}
{\frac{4\pi^2 n^2 m_2^4}{NY\left\vert \frac{a^2qd_0}{Y}+t\right\vert}}
}}}\right)}}F\left(\left\vert\frac{a^2qd_0}{Y}+t\right\vert\right)e\left(\frac{Y\ell}{4Na^2q}t\right)\mathrm{d}t.\end{gathered}$$ On isole alors le terme $\ell=0$ et on effectue deux intégrations par parties pour chaque terme $\ell\neq 0$ afin de rendre absolument convergente la série en $\ell$ (il ne reste pas de termes entre crochets car $F$ est à support compact). On obtient alors $$\begin{gathered}
\mathsf{TP}_2=\frac{\gamma(4N)Y}{2N}\int_0^{+\infty}F(t)\left(\sum_{n\geqslant 1}{{\ensuremath{\mathchoice {\dfrac{a_{n^2}}{n^2}}
{\dfrac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
}}}\sum_{\substack{a\leqslant A \\
(a,4N)=1}}\frac{\mu(a)}{a^2}\sum_{(m_2,aN)=1}{{\ensuremath{\mathchoice {\dfrac{1}{m_2^2}}
{\dfrac{1}{m_2^2}}
{\frac{1}{m_2^2}}
{\frac{1}{m_2^2}}
}}}\sum_{q\mid m_2}\frac{\mu(q)}{q}V{\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{4\pi^2 n^2 m_2^4}{NYt}}
{\dfrac{4\pi^2 n^2 m_2^4}{NYt}}
{\frac{4\pi^2 n^2 m_2^4}{NYt}}
{\frac{4\pi^2 n^2 m_2^4}{NYt}}
}}}\right)}}\right)\mathrm{d}t \\
+\mathcal{O}\left(\frac{N^{\frac{5}{4}}\gamma(4N)A^3}{Y^{\frac{3}{4}}}\right).\end{gathered}$$ Finalement, $$\mathsf{TP}_2=c_N Y \int_0^{+\infty}F(t)\mathcal{B}(NYt)\mathrm{d}t+\mathcal{O}\left(\frac{N^{\frac{1}{4}}\gamma(4N)Y^{\frac{5}{4}}}{A}+\frac{N^{\frac{5}{4}}\gamma(4N)A^3}{Y^{\frac{3}{4}}}\right)$$ avec $$\mathcal{B}{\mathnormal}{(X)=\sum_{n\geqslant 1}{{\ensuremath{\mathchoice {\dfrac{a_{n^2}}{n^2}}
{\dfrac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
{\frac{a_{n^2}}{n^2}}
}}}}\sum_{(m,N)=1}{{\ensuremath{\mathchoice {\dfrac{b_{m^2}}{m^2}}
{\dfrac{b_{m^2}}{m^2}}
{\frac{b_{m^2}}{m^2}}
{\frac{b_{m^2}}{m^2}}
}}}V{\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{4\pi^2n^2m^4}{X}}
{\dfrac{4\pi^2n^2m^4}{X}}
{\frac{4\pi^2n^2m^4}{X}}
{\frac{4\pi^2n^2m^4}{X}}
}}}\right)}}$$ et $$b_m=\prod_{\substack{p\in\mathcal{P} \\
p\mid m \\
p\neq 2}} {\ensuremath{\left(1+{{\ensuremath{\mathchoice {\dfrac{1}{p}}
{\dfrac{1}{p}}
{\frac{1}{p}}
{\frac{1}{p}}
}}}\right)}}^{-1}.$$ En revenant à la définition intégrale de la fonction $V$, on remarque que $$\mathcal{B}(X)={{\ensuremath{\mathchoice {\dfrac{1}{2i\pi}}
{\dfrac{1}{2i\pi}}
{\frac{1}{2i\pi}}
{\frac{1}{2i\pi}}
}}}\int_{(3/4)} \Gamma(s)^2 X^{-s}L(s+1)\mathrm{d}s$$ et que le produit eulérien intervenant dans la fonction $L$ est absolument convergent sur $\Re{(s)}>\frac{3}{4}$ et y définit une fonction holomorphe. En décalant le contour jusqu’à $\left(-\frac{1}{4}+\varepsilon\right)$ pour tout $\varepsilon>0$, on ne croise qu’un pôle en $s=0$, ce qui prouve que $$\mathcal{B}(X)=-2(\gamma+\log{(2\pi)})L(1)+L^\prime(1)+L(1)\log{(X)}+\mathcal{O}_\varepsilon\left(\left(\frac{N}{X}\right)^{\frac{1}{4}+\varepsilon}\right).$$

[**Bilan et choix des paramètres.**]{} On a prouvé que $$S_N(Y)=\widetilde{\alpha_N} Y\log{Y}+\widetilde{\beta_N}Y+\mathcal{O}_\varepsilon\left(NY\left(\log{(NY)}\right)^{\frac{1}{2}+\varepsilon}\right)+\mathsf{Err}$$ où $$\mathsf{Err}\ll_\varepsilon(NY)^\varepsilon\left(\frac{Y^{\frac{5}{4}}}{A^{\frac{3}{2}}}+\frac{N^{\frac{1}{4}}Y}{A^{\frac{1}{2}}}+\frac{N^{\frac{9}{4}}\inf{(A,Y^{\frac{1-\varepsilon}{2}})}}{Y^{\varepsilon-\frac{1}{4}}}+\frac{N^{\frac{9}{4}}A^3}{Y^{\frac{3}{4}}}+N^{\frac{3}{8}}Y^{\frac{1}{2}+\varepsilon}A^{\frac{3}{2}}+\frac{N^{\frac{5}{4}}Y^{\frac{5}{4}}}{A}\right)$$ et on choisit alors $A:=N^{\frac{1}{2}}Y^{\frac{3}{10}-\frac{2\varepsilon}{5}}$, ce qui achève la preuve.
$\blacksquare$
Appliquons finalement la formule de Gross-Zagier pour obtenir une estimation asymptotique de la hauteur en moyenne des points de Heegner de la même forme que celle que l’on avait obtenue pour les traces.
\[hpointstheo\] Si $E$ est une courbe elliptique rationnelle de conducteur $N$ sans facteurs carrés et de rang analytique quelconque alors $$\sum_{\substack{d\in\mathcal{D} \\
\vert d\vert\leqslant Y}}\hat{h}_{\mathbb{H}_d}(\emph{P}_d)=C_{\emph{P}}Y^{\frac{3}{2}}\log{Y}+C_{\emph{P}}^\prime Y^{\frac{3}{2}}+\frac{1}{3\Omega_{E,N}}\sqrt{Y}\mathsf{Error}+\mathcal{O}_\varepsilon\left(N^{\frac{15}{4}+\varepsilon}Y^{\frac{29}{20}+\varepsilon}\right)$$ où $$\frac{1}{3\Omega_{E,N}}\sqrt{Y}\mathsf{Error}=\mathcal{O}_\varepsilon\left(NY^{\frac{3}{2}}\left(\log{(NY)}\right)^{\frac{1}{2}+\varepsilon}\right)$$ pour tout $\varepsilon>0$ et où $C_{\emph{P}}$ est la constante définie par $$C_{\emph{P}}:={{\ensuremath{\mathchoice {\dfrac{2}{\pi}}
{\dfrac{2}{\pi}}
{\frac{2}{\pi}}
{\frac{2}{\pi}}
}}}c_N\mathcal{Q}(N){{\ensuremath{\mathchoice {\dfrac{L(\text{Sym}^2 E,2)}{\pi\Omega_{E,N}}}
{\dfrac{L(\text{Sym}^2 E,2)}{\pi\Omega_{E,N}}}
{\frac{L(\text{Sym}^2 E,2)}{\pi\Omega_{E,N}}}
{\frac{L(\text{Sym}^2 E,2)}{\pi\Omega_{E,N}}}
}}}\prod_{\substack{p\in\mathcal{P} \\
p\mid N}}\left(1-\frac{1}{p^2}\right)^{-1}$$ avec $$\mathcal{Q}(N):=\prod_{\substack{p\in\mathcal{P} \\
p\nmid 2N}}\left(1+\left(1+\frac{1}{p}\right)^{-1}(p^{2}-1)^{-1}\right)\times\begin{cases}
\frac{4}{3} & \text{ si $N$ est impair,} \\
1 & \text{ sinon}
\end{cases}$$ et $$C_{\emph{P}}^\prime:=C_{\emph{P}}\left(\log{\left(\frac{N}{4\pi^2}\right)}-\frac{2}{3}-2\gamma\right)+\frac{c_N}{3\Omega_{E,N}}\widetilde{L}^\prime(1).$$
En accord avec la remarque \[rq\], on peut conjecturer que $$\frac{1}{3\Omega_{E,N}}\sqrt{Y}\mathsf{Error}=o_\varepsilon\left(NY^{\frac{3}{2}}\right)$$ et le corollaire précédent semble alors nous munir d’un développement asymptotique à deux termes de la hauteur en moyenne des points de Heegner.
En remplaçant $c_N$ et $L(\text{Sym}^2 E,2)$ par leurs expressions, on peut réécrire $$C_{\text{P}}=\left({\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{8}{\pi^3c_E(N)^2}}
{\dfrac{8}{\pi^3c_E(N)^2}}
{\frac{8}{\pi^3c_E(N)^2}}
{\frac{8}{\pi^3c_E(N)^2}}
}}}\right)}}\mathcal{Q}{\mathnormal}{(N)}\prod_{\substack{p\in\mathcal{P} \\
p\mid N}}{\ensuremath{\left(1-{{\ensuremath{\mathchoice {\dfrac{1}{p^2}}
{\dfrac{1}{p^2}}
{\frac{1}{p^2}}
{\frac{1}{p^2}}
}}}\right)}}^{-2}\right){{\ensuremath{\mathchoice {\dfrac{\gamma(4N)}{N^2}}
{\dfrac{\gamma(4N)}{N^2}}
{\frac{\gamma(4N)}{N^2}}
{\frac{\gamma(4N)}{N^2}}
}}}\text{deg}(\phi_{N,E}).$$ Ainsi, contrairement à la constante $C_{\text{Tr}}$ intervenant lorsque l’on considère les traces, à conducteur fixé $C_{\text{P}}$ ne dépend que du degré de la paramétrisation modulaire, puisque le produit $\mathcal{Q}{\mathnormal}{(N)}$ ne dépend que de $N$. Par contre, lorsque l’on fait varier le conducteur, il n’y a plus de dépendance directe en le degré. Il est clair que $\mathcal{Q}{\mathnormal}{(N)\geqslant 1}$ ; d’autre part $$\mathcal{Q}{\mathnormal}{(N) < {{\ensuremath{\mathchoice {\dfrac{4}{3}}
{\dfrac{4}{3}}
{\frac{4}{3}}
{\frac{4}{3}}
}}} \prod_p {\ensuremath{\left(1+{{\ensuremath{\mathchoice {\dfrac{1}{p^2}}
{\dfrac{1}{p^2}}
{\frac{1}{p^2}}
{\frac{1}{p^2}}
}}}\right)}} < {{\ensuremath{\mathchoice {\dfrac{4}{3}}
{\dfrac{4}{3}}
{\frac{4}{3}}
{\frac{4}{3}}
}}}\zeta(2)}.$$ Le produit eulérien $\mathcal{Q}{\mathnormal}{(N)}$ est donc compris entre $1$ et $2$, et il joue un rôle relativement faible dans l’expression de $C_{\text{P}}$. On a ainsi $$1 \leqslant \prod_{p|N}{\ensuremath{\left(1-{{\ensuremath{\mathchoice {\dfrac{1}{p^2}}
{\dfrac{1}{p^2}}
{\frac{1}{p^2}}
{\frac{1}{p^2}}
}}}\right)}}^{-2}\mathcal{Q}{\mathnormal}{(N) \leqslant {{\ensuremath{\mathchoice {\dfrac{4}{3}}
{\dfrac{4}{3}}
{\frac{4}{3}}
{\frac{4}{3}}
}}}\zeta(2)^3 < 6.}$$ Le terme principal, du moins si l’on s’intéresse à des valeurs asymptotiques du conducteur ou du degré de la paramétrisation modulaire, est donc ${{\ensuremath{\mathchoice {\dfrac{\gamma(4N)}{N^2}}
{\dfrac{\gamma(4N)}{N^2}}
{\frac{\gamma(4N)}{N^2}}
{\frac{\gamma(4N)}{N^2}}
}}}\text{deg}(\phi_{N,E})$. Selon la conjecture du degré (cf. [@mur2] et [@De] page 35), qui est équivalente à une des formes de la conjecture abc, on aurait $\text{deg}(\phi_{N,E})\ll_\varepsilon N^{2+\varepsilon}$ pour tout $\varepsilon>0$. Comme $\gamma(4N)\ll N$, cela donne une borne supérieure sur la croissance des hauteurs des points $\text{P}_d$ lorsque $N$ tend vers $+\infty$ avec $Y$. On sait d’autre part qu’il existe des familles de courbes de $j$-invariant borné ([@De] page 50) pour lesquelles $\text{deg}(\phi_{N,E})\gg N^{\frac{7}{6}}\log{N}$, ce qui donne une borne inférieure sur la vitesse de croissance des hauteurs lorsque $N$ tend vers $+\infty$ avec $Y$.
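Pour se convaincre numériquement de l’encadrement $1\leqslant\mathcal{Q}(N)<2$, on peut évaluer le produit eulérien tronqué. L’esquisse Python ci-dessous est indépendante des calculs du texte (les noms de fonctions sont de notre fait) et tronque le produit aux premiers $p<10^5$ :

```python
def premiers(borne):
    """Crible d'Ératosthène : liste des nombres premiers < borne."""
    est_premier = [True] * borne
    est_premier[0:2] = [False, False]
    for p in range(2, int(borne ** 0.5) + 1):
        if est_premier[p]:
            est_premier[p * p::p] = [False] * len(est_premier[p * p::p])
    return [p for p in range(borne) if est_premier[p]]

def Q(N, borne=10**5):
    """Valeur approchée de Q(N) : produit sur p ne divisant pas 2N de
    1 + (1 + 1/p)^{-1} (p^2 - 1)^{-1}, tronqué aux p < borne,
    multiplié par 4/3 si N est impair."""
    produit = 1.0
    for p in premiers(borne):
        if (2 * N) % p != 0:
            produit *= 1.0 + (1.0 + 1.0 / p) ** -1 / (p ** 2 - 1)
    return produit * (4.0 / 3.0 if N % 2 == 1 else 1.0)
```

On vérifie par exemple que $1<\mathcal{Q}(14)<\mathcal{Q}(11)<2$ ; retirer des facteurs locaux (ou perdre le facteur $4/3$ pour $N$ pair) ne fait que diminuer la valeur.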
Remarquons finalement que, même à conducteur fixé, la constante $C_{\emph{P}}^\prime$ dépend de la courbe elliptique $E$ et pas seulement du degré de la paramétrisation modulaire de $E$. Par contre, il ne semble pas possible d’obtenir une estimation satisfaisante de la taille de $C_{\emph{P}}^\prime$ par rapport au niveau.
Analyse des résultats théoriques et numériques {#5}
==============================================
Après avoir donné quelques valeurs numériques des constantes en jeu, nous donnons des résultats expérimentaux illustrant les formules théoriques.
Quelques valeurs numériques {#valnum}
---------------------------
### Valeurs numériques de $C_{\text{Tr}}$ et de $C_{\text{P}}$
Le tableau \[constantes\] regroupe les valeurs des trois constantes $C_{\text{Tr}}^{(0)}$, $C_{\text{Tr}}^{(1)}$ et $C_{\text{P}}$ régissant le comportement en moyenne des hauteurs des points de Heegner et de leurs traces (selon les corollaires \[rang0\], \[rang1\] et \[hpointstheo\]) pour toutes les courbes elliptiques de conducteur sans facteurs carrés et inférieur à $100$. Les valeurs de $C_{\text{Tr}}$ et de $C_{\text{P}}$ ont été multipliées par $10^3$ pour une meilleure lisibilité.
$\begin{array}{ccc}
\begin{array}{|c|c|c|c|c|}
\hline
\text{Courbe}&\text{Rang}& C_{\text{Tr}}\times 10^3 & C_{\text{P}}\times 10^3 & C_{\text{P}}/C_{\text{Tr}} \\
\hline
11-1 & 0 & 3.33 & 17.0 & 5.11 \\
14-1 & 0 & 1.07 & 6.39 & 5.92 \\
15-1 & 0 & 0.904 & 4.40 & 4.87 \\
17-1 & 0 & 3.21 & 11.3 & 3.52 \\
19-1 & 0 & 3.10 & 10.2 & 3.29 \\
21-1 & 0 & 1.21 & 3.28 & 2.71 \\
26-1 & 0 & 1.80 & 7.29 & 4.03 \\
26-2 & 0 & 5.00 & 7.29 & 1.45 \\
30-1 & 0 & 0.621 & 2.20 & 3.54 \\
33-1 & 0 & 2.48 & 6.56 & 2.64 \\
34-1 & 0 & 5.11 & 5.67 & 1.10 \\
35-1 & 0 & 1.95 & 4.29 & 2.19 \\
37-1 & 1 & 1.86 & 10.7 & 5.75 \\
37-2 & 0 & 5.64 & 10.7 & 1.90 \\
38-1 & 0 & 4.70 & 15.3 & 3.25 \\
38-2 & 0 & 5.09 & 5.10 & 1.00 \\
39-1 & 0 & 1.54 & 3.75 & 2.43 \\
42-1 & 0 & 2.30 & 3.28 & 1.42 \\
43-1 & 1 & 1.90 & 9.27 & 4.87 \\
46-1 & 0 & 3.16 & 10.6 & 3.36 \\
51-1 & 0 & 2.12 & 2.91 & 1.37 \\
53-1 & 1 & 1.87 & 7.55 & 4.02 \\
55-1 & 0 & 2.33 & 2.85 & 1.22 \\
57-1 & 1 & 0.923 & 5.24 & 5.68 \\
57-2 & 0 & 3.83 & 3.93 & 1.02 \\
57-3 & 0 & 8.24 & 15.7 & 1.90 \\
\hline
\end{array}
& &
\begin{array}{|c|c|c|c|c|}
\hline
\text{Courbe}&\text{Rang}& C_{\text{Tr}}\times 10^3 & C_{\text{P}}\times 10^3 & C_{\text{P}}/C_{\text{Tr}} \\
\hline
58-1 & 1 & 1.14 & 6.80 & 5.91 \\
58-2 & 0 & 9.47 & 6.80 & 0.718 \\
61-1 & 1 & 1.97 & 6.58 & 3.33 \\
62-1 & 0 & 4.85 & 3.18 & 0.657 \\
65-1 & 1 & 0.593 & 2.44 & 4.12 \\
66-1 & 0 & 1.01 & 2.18 & 2.15 \\
66-2 & 0 & 2.01 & 2.18 & 1.08 \\
66-3 & 0 & 25.2 & 10.9 & 0.433 \\
67-1 & 0 & 12.4 & 15.0 & 1.20 \\
69-1 & 0 & 2.40 & 2.18 & 0.909 \\
70-1 & 0 & 2.49 & 2.14 & 0.861 \\
73-1 & 0 & 6.90 & 8.27 & 1.19 \\
77-1 & 1 & 1.24 & 4.26 & 3.42 \\
77-2 & 0 & 15.4 & 21.3 & 1.37 \\
77-3 & 0 & 5.20 & 6.39 & 1.23 \\
78-1 & 0 & 4.47 & 18.7 & 4.18 \\
79-1 & 1 & 1.97 & 5.10 & 2.58 \\
82-1 & 1 & 1.15 & 4.85 & 4.19 \\
83-1 & 1 & 1.93 & 4.85 & 2.51 \\
85-1 & 0 & 3.00 & 3.80 & 1.26 \\
89-1 & 1 & 1.90 & 4.53 & 2.37 \\
89-2 & 0 & 10.5 & 11.3 & 1.07 \\
91-1 & 1 & 1.09 & 3.65 & 3.34 \\
91-2 & 1 & 2.03 & 3.65 & 1.79 \\
94-1 & 0 & 4.11 & 2.12 & 0.516 \\
& & & & \\
\hline
\end{array}\\
\end{array}$
Le rapport, donné dans la dernière colonne, entre la constante gouvernant le comportement des hauteurs des points et celle gouvernant celui des traces n’a que peu de sens dans le cas d’une courbe elliptique de rang $1$, car les points y sont asymptotiquement « plus gros » que les traces d’un facteur $\log{Y}$ selon les corollaires \[rang1\] et \[hpointstheo\]. Il est intéressant de voir que ce rapport prend à la fois des valeurs plus grandes (« les points sont plus gros ») et plus petites (« les traces sont plus grosses ») que $1$.
Si l’on poursuit le calcul sur les 200 premières courbes elliptiques de conducteur sans facteurs carrés, on obtient un rapport moyen d’environ $1.5$, et ce rapport tend à décroître. Il n’y a donc pas de raison a priori de croire qu’il soit plus souvent plus grand ou plus petit que $1$.
### Étude plus fine du rapport $C_{\text{P}}/C_{\text{Tr}}^{(0)}$
Pour étudier le rapport $${{\ensuremath{\mathchoice {\dfrac{C_{\text{P}}}{C_{\text{Tr}}^{(0)}}}
{\dfrac{C_{\text{P}}}{C_{\text{Tr}}^{(0)}}}
{\frac{C_{\text{P}}}{C_{\text{Tr}}^{(0)}}}
{\frac{C_{\text{P}}}{C_{\text{Tr}}^{(0)}}}
}}}= {{\ensuremath{\mathchoice {\dfrac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
{\dfrac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
{\frac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
{\frac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
}}}L(E,1)^{-1}\prod_{p|N} {\ensuremath{\left(1-{{\ensuremath{\mathchoice {\dfrac{a_p}{p}}
{\dfrac{a_p}{p}}
{\frac{a_p}{p}}
{\frac{a_p}{p}}
}}}\right)}} {\ensuremath{\left(1-{{\ensuremath{\mathchoice {\dfrac{a_{p^2}}{p^2}}
{\dfrac{a_{p^2}}{p^2}}
{\frac{a_{p^2}}{p^2}}
{\frac{a_{p^2}}{p^2}}
}}}\right)}}^{-1},$$ on néglige le rôle de ${{\ensuremath{\mathchoice {\dfrac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
{\dfrac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
{\frac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
{\frac{\mathcal{Q}{\mathnormal}{(N)}}{\mathcal{P}{\mathnormal}{(1)}}}
}}}$, qui est de toute façon borné. Ainsi, la taille de $$\gamma_E = L(E,1)^{-1} \prod_{p|N} {\ensuremath{\left(1+{{\ensuremath{\mathchoice {\dfrac{a_p}{p}}
{\dfrac{a_p}{p}}
{\frac{a_p}{p}}
{\frac{a_p}{p}}
}}}\right)}}^{-1}$$ par rapport à $1$ reflète essentiellement le signe du terme $$\sum_{\sigma\in G_d\backslash \{Id\}} \langle P,P^{\sigma}\rangle_{\mathbb{H}_d}.$$ Plus $\gamma_E$ sera petit, plus ce produit scalaire sera grand et, de manière imagée, on pourrait dire que les points de Heegner sont essentiellement resserrés autour d’une même direction ; alors que si $\gamma_E$ est grand devant $1$, cette somme est négative et les points sont éclatés dans l’espace à $h_d$ dimensions. On s’attend donc, par exemple, à ce que la hauteur des traces (en moyenne) soit supérieure à celle des points sur la courbe $58\text{B}$ ($\gamma_E=0.67$), ce qui est illustré par la figure \[58B-points-traces\]. Par contre, dans le cas de la courbe $37\text{B}$ ($\gamma_E=1.34$), les points sont plus gros (figure \[37B-points-traces\]).
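À titre d’illustration, la quantité $\gamma_E$ se calcule directement à partir de $L(E,1)$ et des coefficients $a_p$ pour $p\mid N$. L’esquisse suivante est hypothétique et indépendante des calculs du texte : les valeurs numériques d’entrée sont à fournir par ailleurs (par exemple via Pari) :

```python
def gamma_E(L_valeur, a_p):
    """Calcule gamma_E = L(E,1)^{-1} * prod_{p | N} (1 + a_p/p)^{-1}.

    `L_valeur` : valeur de L(E,1) ;
    `a_p` : dictionnaire {p : a_p} pour les premiers p divisant N.
    (Valeurs d'entrée hypothétiques, à calculer par ailleurs.)
    """
    produit = 1.0
    for p, ap in a_p.items():
        produit *= 1.0 / (1.0 + ap / p)
    return produit / L_valeur
```

Une valeur $\gamma_E<1$ signale alors des points « resserrés » (traces petites), une valeur $\gamma_E>1$ des points « éclatés ».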
![Somme des hauteurs des traces et des points sur la courbe $58\text{B}$.[]{data-label="58B-points-traces"}](58B-points-traces.eps){height="10cm"}
![Somme des hauteurs des traces et des points sur la courbe $37\text{B}$.[]{data-label="37B-points-traces"}](37B-points-traces.eps){height="10cm"}
Résultats expérimentaux
-----------------------
Nous avons effectué de nombreux calculs de points de Heegner et de leurs hauteurs sur différentes courbes, à l’aide des logiciels Magma et Pari. Magma a permis de calculer les points, ou les traces, eux-mêmes (suivant la méthode de calcul exposée dans [@da]), alors que Pari s’est avéré plus rapide pour le calcul direct de la série $L$ intervenant dans la formule de Gross-Zagier. On s’est concentré sur des courbes de petits conducteurs ($N<200$), car les algorithmes ont une complexité en $\mathcal{O}(N^2)$. Nous présentons ici certains des résultats obtenus, pour illustrer notre théorème.
### Comparaison entre les valeurs expérimentale et théorique de $C_{\text{P}}$
Nous commençons par comparer les valeurs expérimentales de $C_{\text{P}}$ à la valeur théorique donnée dans la section précédente. Ainsi, pour chaque courbe de conducteur sans facteurs carrés plus petit que $100$, on a représenté le rapport entre la valeur expérimentale $$C_{\text{P}}^{\text{exp}}(Y):= {{\ensuremath{\mathchoice {\dfrac{1}{Y^{3/2}\log Y}}
{\dfrac{1}{Y^{3/2}\log Y}}
{\frac{1}{Y^{3/2}\log Y}}
{\frac{1}{Y^{3/2}\log Y}}
}}}\sum_{\substack{d\in \mathcal{D} \\
\vert d\vert \leqslant Y}} \hat{h}_{\mathbb{H}_d}(\text{P}_d)$$ et la valeur théorique, pour quelques valeurs de $Y$, dans le tableau \[expe\].
$\begin{array}{ccccc}
\begin{array}{|c|c|c|c|}
\hline
\text{Courbe} & 6000 & 13000 & 20000 \\
\hline
11-1& 0.716 & 0.738 & 0.750 \\
14-1& 0.707 & 0.736 & 0.742 \\
15-1& 0.703 & 0.723 & 0.740 \\
17-1& 0.735 & 0.753 & 0.764 \\
19-1& 0.718 & 0.737 & 0.748 \\
21-1& 0.700 & 0.731 & 0.742 \\
26-1& 0.714 & 0.730 & 0.744 \\
26-2& 0.682 & 0.702 & 0.717 \\
30-1& 0.687 & 0.703 & 0.722 \\
33-1& 0.697 & 0.725 & 0.736 \\
34-1& 0.707 & 0.724 & 0.735 \\
35-1& 0.749 & 0.764 & 0.774 \\
37-1& 0.661 & 0.689 & 0.704 \\
37-2& 0.805 & 0.820 & 0.830 \\
38-1& 0.760 & 0.775 & 0.784 \\
38-2& 0.702 & 0.722 & 0.733 \\
39-1& 0.713 & 0.720 & 0.738 \\
\hline
\end{array}
& &
\begin{array}{|c|c|c|c|}
\hline
\text{Courbe} & 6000 & 13000 & 20000 \\
\hline
42-1& 0.683 & 0.722 & 0.735 \\
43-1& 0.678 & 0.699 & 0.714 \\
46-1& 0.699 & 0.720 & 0.729 \\
51-1& 0.711 & 0.719 & 0.737 \\
53-1& 0.690 & 0.716 & 0.725 \\
55-1& 0.818 & 0.821 & 0.825 \\
57-1& 0.680 & 0.695 & 0.705 \\
57-2& 0.774 & 0.780 & 0.785 \\
57-3& 0.727 & 0.738 & 0.745 \\
58-1& 0.663 & 0.687 & 0.703 \\
58-2& 0.808 & 0.818 & 0.828 \\
61-1& 0.735 & 0.752 & 0.770 \\
62-1& 0.832 & 0.835 & 0.847 \\
65-1& 0.752 & 0.756 & 0.766 \\
66-1& 0.752 & 0.773 & 0.785 \\
66-2& 0.693 & 0.719 & 0.734 \\
66-3& 0.671 & 0.699 & 0.714 \\
\hline
\end{array}
& &
\begin{array}{|c|c|c|c|}
\hline
\text{Courbe} & 6000 & 13000 & 20000 \\
\hline
67-1& 0.725 & 0.753 & 0.763 \\
69-1& 0.795 & 0.797 & 0.807 \\
70-1& 0.762 & 0.774 & 0.785 \\
73-1& 0.803 & 0.822 & 0.834 \\
77-1& 0.709 & 0.738 & 0.752 \\
77-2& 0.767 & 0.791 & 0.802 \\
77-3& 0.770 & 0.795 & 0.806 \\
78-1& 0.722 & 0.722 & 0.742 \\
79-1& 0.794 & 0.823 & 0.827 \\
82-1& 0.731 & 0.754 & 0.763 \\
83-1& 0.786 & 0.799 & 0.811 \\
85-1& 0.808 & 0.820 & 0.826 \\
89-1& 0.823 & 0.838 & 0.844 \\
89-2& 0.800 & 0.817 & 0.824 \\
91-1& 0.738 & 0.748 & 0.754 \\
91-2& 0.766 & 0.772 & 0.778 \\
94-1& 0.996 & 0.980 & 0.974 \\
\hline
\end{array}\\
\end{array}$
Ainsi, même pour des discriminants assez grands ($2\cdot 10^4$), la constante expérimentale est souvent de l’ordre de $75\%$ de la constante théorique. On a représenté plusieurs valeurs de $Y$ pour montrer que ce rapport augmente bien, mais très lentement.
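La constante expérimentale ci-dessus se calcule de façon élémentaire une fois les hauteurs connues. L’esquisse Python suivante est de notre fait : les hauteurs passées en entrée sont des données hypothétiques, à produire par ailleurs (par exemple avec Magma) :

```python
import math

def c_p_exp(hauteurs, Y):
    """C_P^exp(Y) = (1 / (Y^{3/2} log Y)) * somme des hauteurs pour |d| <= Y.

    `hauteurs` : dictionnaire {|d| : hauteur du point de Heegner P_d},
    indexé par la valeur absolue du discriminant (données hypothétiques).
    """
    somme = sum(h for d, h in hauteurs.items() if d <= Y)
    return somme / (Y ** 1.5 * math.log(Y))
```

Le rapport du tableau \[expe\] s’obtient alors en divisant cette valeur par la constante théorique $C_{\text{P}}$.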
### Étude plus fine des courbes $37\text{A}$ et $37\text{B}$
Nous allons étudier plus en profondeur les courbes $37\text{A}$ et $37\text{B}$. Ces deux courbes sont intéressantes pour plusieurs raisons : elles ont même degré et même conducteur, donc devraient avoir même $C_{\text{P}}$. La courbe $37\text{B}$ est de rang $0$ alors que la $37\text{A}$ est la courbe de rang $1$ de plus petit conducteur.
![Hauteur des points en moyenne sur les courbes $37\text{A}$ et $37\text{B}$.[]{data-label="37AB-points1"}](37AB-points100k1.eps){height="10cm"}
La figure \[37AB-points1\] représente les sommes des hauteurs des points sur les courbes $37\text{A}$ et $37\text{B}$ comparées à la valeur théorique donnée par le corollaire \[hpointstheo\]. Contrairement à ce que l’on avait pour les hauteurs des traces, ici les courbes ne se superposent pas du tout, ce qui était prévisible étant donné le tableau ci-dessus.
On a en particulier l’impression que la courbe $37\text{A}$ est nettement en dessous de la $37\text{B}$ sans paraître la rejoindre, alors que le corollaire \[hpointstheo\] affirme que les hauteurs des points sur ces courbes elliptiques devraient être les mêmes en moyenne. Cependant, une analyse plus fine de la différence entre ces deux courbes montre qu’elle semble être en $Y^{3/2}$ et donc que les deux courbes semblent se rapprocher à une vitesse de $1/\log Y$ de la courbe théorique d’équation $Y\mapsto 0.0107Y^{\frac{3}{2}}\log{Y}$, ce qu’il est malheureusement difficile d’observer dans l’échelle de discriminants représentée. Autrement dit, on devine numériquement sur les courbes $37\text{A}$ et $37\text{B}$ que $$\sum_{\substack{d\in\mathcal{D} \\
\vert d\vert\leqslant Y}}\hat{h}_{\mathbb{H}_d}(\text{P}_d)=C_{\text{P}}Y^{\frac{3}{2}}\log{Y}\left(1+\mathcal{O}_{N,E}\left(\frac{1}{\log{Y}}\right)\right).$$ Selon le corollaire \[hpointstheo\] et la preuve du théorème \[traceana\], on a $$\begin{aligned}
\sum_{\substack{d\in\mathcal{D} \\
\vert d\vert\leqslant Y}}\hat{h}_{\mathbb{H}_d}(\text{P}_d) & = & C_{\text{P}}Y^{\frac{3}{2}}\log{Y}+C_{\text{P}}^\prime Y^{\frac{3}{2}}+\frac{1}{3\Omega_{E,N}}Y^{\frac{1}{2}}\mathsf{Error}+\mathcal{O}_{N,\varepsilon}\left(Y^{\frac{29}{20}+\varepsilon}\right), \\
& = & C_{\text{P}}Y^{\frac{3}{2}}\log{Y}\left(1+\frac{C_{\text{P}}^\prime}{C_{\text{P}}}\frac{1}{\log{Y}}+\frac{1}{3\Omega_{E,N}C_{\text{P}}}\frac{\mathsf{Error}}{Y\log{Y}}+\mathcal{O}_{N,\varepsilon}\left(Y^{-\frac{1}{20}+\varepsilon}\right)\right)\end{aligned}$$ où $\mathsf{Error}$ est défini en et pour tout $\varepsilon>0$. L’analyse numérique suggère donc que le terme $C_{\text{P}}^\prime Y^{3/2}$ dans le développement du corollaire \[hpointstheo\] est non nul et même de l’ordre du terme principal pour des petits discriminants. Ceci suggère également que $$\label{devine}
\mathsf{Error}=o_{\varepsilon}(NY).$$ \[fin\] Prouver cela nécessite de pouvoir estimer les moyennes mentionnées dans la remarque \[rq\]. Donnons une autre justification numérique de nos intuitions. Posons $$\delta(Y):={{\ensuremath{\mathchoice {\dfrac{1}{Y^{3/2}}}
{\dfrac{1}{Y^{3/2}}}
{\frac{1}{Y^{3/2}}}
{\frac{1}{Y^{3/2}}}
}}} \sum_{\substack{d\in\mathcal{D} \\
\vert d\vert\leqslant Y}} {\ensuremath{\left(\hat{h}_{\mathbb{H}_d,37\text{B}}(\text{P}_d)-\hat{h}_{\mathbb{H}_d,37\text{A}}(\text{P}_d)\right)}}.$$ Le tableau \[delta\] donne la valeur de $\delta(Y)$ pour plusieurs valeurs de $Y$.
$\begin{array}{|c|c|c|c|c|c|}
\hline
Y & 2\cdot 10^4 & 4\cdot 10^4 & 6\cdot 10^4 & 8\cdot 10^4 & 10\cdot 10^4 \\
\hline
\delta(Y) & 0.01337 & 0.01329 & 0.01328 & 0.01326 & 0.01324 \\
\hline
\end{array}$
Ainsi, $\delta(Y)$ décroît très légèrement avec $Y$ et semble se stabiliser. On devine alors que $$\delta(Y)=D_E+o_{N,E}(1)$$ pour une constante $D_E$. Or, le corollaire \[hpointstheo\] affirme que $$\delta(Y)=\left(C_{\text{P},37\text{B}}-C_{\text{P},37\text{A}}\right)+\frac{1}{3Y}\left(\frac{\mathsf{Error}_{37\text{B}}}{\Omega_{37\text{B},37}}-\frac{\mathsf{Error}_{37\text{A}}}{\Omega_{37\text{A},37}}\right)+\mathcal{O}_{N,\varepsilon}\left(Y^{-\frac{1}{20}+\varepsilon}\right)$$ pour tout $\varepsilon>0$, ce qui confirme \[devine\]. En outre, il ne semble pas y avoir de compensation entre $C_{\text{P}}^\prime Y^{\frac{3}{2}}$ et $C_{\text{P}}^{\prime\prime}Y^{\frac{1}{2}}\mathsf{Err}_1$ car sinon $\delta(Y)$ tendrait plus vite vers $0$.
[999999]{}
Abbes A., Ullmo E.: *À propos de la conjecture de Manin pour les courbes elliptiques modulaires*, Compositio Math. 103:3 (1996), 269-286.
Bombieri E.: *Le grand crible dans la Théorie Analytique des Nombres*, Astérisque 18 (1973).
Cremona J.E.: *Elliptic Curve Data*, disponible à <http://www.maths.nott.ac.uk/personal/jec/ftp/data/INDEX.html>.
Darmon H.: *Rational points on modular elliptic curves*, CBMS vol. 101 (2004).
Darmon H., Green P.: *Elliptic curves and class fields of real quadratic fields : algorithms and evidence*, Exp. Math. **11**(2002), 37-55.
Delaunay C.: *Formes modulaires et invariants de courbes elliptiques définies sur $\mathbb{Q}$*, thèse de doctorat, Université Bordeaux I (2002), disponible à <http://igd.univ-lyon1.fr/~delaunay/>.
Edixhoven B.: *On the Manin constants of modular elliptic curves*, in *Arithmetic Algebraic Geometry* (Texel, 1989), edited by G. van der Geer, F. Oort, and J. Steenbrink, 25-39, Progr. Math. 89. Boston, MA: Birkhäuser Boston, 1991.
Gross B.H.: *Heegner Points on $X_0(N)$*, Modular Forms, ed. R.A. Rankin, Halsted Press (1984), 87-105.
Gross B.H., Zagier D., *Derivatives of L-series and the height of Heegner points*, Invent. Math. **84**(1986), 225-320.
Iwaniec H., *On the order of vanishing of modular $L$-functions at the critical point*, J. Théor. Nombres Bordeaux **6**(1990), 365-375.
Iwaniec H., Kowalski E.: *Analytic number theory*, Providence R.I., Colloquium publications (American Mathematical Society) (2004).
Manin J., *Parabolic points and zeta functions of modular curves*, Izv. Akad. Nauk SSSR Ser. Mat. 36 (1972), 19-66.
Murty M.R., *Bounds for congruence primes*, Proc. Sympos. Pure Math. **66**(1999), 177-192.
Taylor R., Wiles A.: *Ring-theoretic properties of certain Hecke algebras*, Ann. of Math. (2) 141 (1995), 553-572.
Shimura G., *On the holomorphy of a certain Dirichlet series*, Proc. London Math. Soc. **31**(1975), 79-98.
Silverman J.H., *The arithmetic of elliptic curves*, Graduate Texts in Mathematics **106**, Springer-Verlag, Berlin, 1986.
Watkins M., *Computing the modular degree of an elliptic curve*, Experiment. Math. **11**(2002).
Wiles A.: *Modular elliptic curves and Fermat’s last theorem*, Ann. of Math. (2) 141 (1995), 443-551.
[*G. Ricotta*]{} Université de Montréal, Département de Mathématiques et de Statistique, CP 6128 succ Centre-Ville, Montréal QC H3C 3J7, Canada; ricotta@dms.umontreal.ca

[*T. Vidick*]{} École Normale Supérieure, 45 rue d’Ulm, 75005 Paris, France; thomas.vidick@ens.fr
[^1]: Toutefois, nous prendrons soin de garder explicite toute dépendance en le conducteur $N$ de la courbe.
[^2]: Ceci est faux si $E$ n’est pas forte: $[0,1,1,0,0]$ a une $X_0(11)$-constante de Manin égale à $5$.
[^3]: À noter une erreur de frappe dans [@iwaniec], il s’agit bien de ${\ensuremath{\left({{\ensuremath{\mathchoice {\dfrac{X}{2\pi}}
{\dfrac{X}{2\pi}}
{\frac{X}{2\pi}}
{\frac{X}{2\pi}}
}}}\right)}}^s$ et non de son inverse.
[^4]: Le type de symétrie de cette famille de fonctions $L$ est orthogonal impair.
[^5]: Au cours de cette preuve, $\mathsf{TP}_i$ désignera la quantité d’où provient la contribution principale et $\mathsf{Err}_i$ un terme d’erreur.
---
abstract: 'In the five-dimensional simple \[hyper\]cubic near neighbor interaction Ising ferromagnet, extensive simulation measurements are made of the link overlap and the spin overlap distributions. These “two replica” measurements are standard in the Spin Glass context but are not usually recorded in ferromagnet simulations. The moments and moment ratios of these distributions (the variance, the kurtosis and the skewness) show clear critical behaviors at the known ordering temperature of the ferromagnet. Analogous overlap effects can be expected quite generally in Ising ferromagnets in any dimension. The link overlap results in particular, with peaks at criticality in the kurtosis and the skewness, also have implications for Spin Glasses.'
author:
- 'P. H. Lundow'
- 'I. A. Campbell'
title: 'The Ising ferromagnet in dimension five : link and spin overlaps'
---
Introduction
============
As is well known, the upper critical dimension (ucd) for Ising ferromagnets is four. Here we show results of simulations for an Ising ferromagnet with near neighbor interactions, on a simple \[hyper\]cubic lattice with periodic boundary conditions in dimension five, the next dimension up. The Hamiltonian is as usual $$\mathcal{H}= - J\sum_{ij}S_{i}S_{j}
\label{ham}$$ with spins $i$ and $j$ near neighbors. We will take $J=1$ and will quote inverse temperatures $\beta = 1/T$.
All the principal properties of this system are well known. Recent consistent and precise estimates of the inverse ordering temperature $\beta_c$ are $0.11391$ [@jones:05], $0.113925(12)$ [@berche:08], $0.1139139(5)$ [@lundow:11] from simulations, and $0.113920(1)$ [@butera:12] from High Temperature Series Expansion (HTSE). We will use $\beta_c=0.113915(1)$ as a compromise estimate. The susceptibility critical exponent and the effective correlation length exponent take the exact mean field values $\gamma=1$ and $\nu =
1/2$, with a leading correction to scaling exponent $\theta=1/2$ [@guttmann:81]. In the periodic boundary condition geometry, above the ucd the “effective length” is $L_{\mathrm{eff}}=AL^{d/4}$ where $A$ is a non-universal constant [@brezin:82; @jones:05].
We analyse simulation data for the “two replica” observables link overlap and spin overlap, familiar in the Ising Spin Glass (ISG) context. It might seem curious to apply techniques developed for complex systems to the much simpler ferromagnet, particularly above the ucd, even though the observables can be defined in exactly the same way in a ferromagnet as in an ISG. Properties of the spin overlap at and beyond $\beta_c$ have already been studied in the $3$d Ising ferromagnet [@berg:02].
However, precisely because all the major parameters are well known, the $5$d ferromagnet is a convenient testbed for studying the critical behavior of various moments and moment ratios (the variance, the kurtosis and the skewness) of both overlap distributions. The results should be generalizable [*mutatis mutandis*]{} to all Ising ferromagnets. The final aim is to establish the ground rules for the properties related to the link overlap near criticality, in order to apply a similar methodology to complex systems, in particular to ISGs.
The link overlap [@caracciolo:90], like the more familiar spin overlap, is an important parameter in ISG numerical simulations. In both cases two replicas (copies) $A$ and $B$ of the same physical system, i.e. with identical sets of interactions between spins, are first generated and equilibrated; updating is then continued and the “overlaps” between the two replicas are recorded over long time intervals. The spin overlap at any instant $t$ corresponds to the fraction $q(t)$ of spins $S_{i}$ in $A$ and $B$ having the same orientation ($S_{i}^{A}$ and $S_{i}^{B}$ both up or both down), and the normalized overall distribution over time is written $P(q)$. The link overlap corresponds to the fraction $q_{\ell}(t)$ of links (or bonds or edges) $ij$ between spins which are either both satisfied or both dissatisfied in the two replicas; the normalized overall distribution over time is written $Q(q_{\ell})$. The explicit definitions are $$q(t)=\frac{1}{N}\,\sum_{i=1}^{N} S_{i}^{A}(t)S_{i}^{B}(t)
\label{qtdef}$$ and $$q_{\ell}(t)=\frac{1}{N_{l}}\sum_{ij}S_{i}^{A}(t)S_{j}^{A}(t)S_{i}^{B}(t)S_{j}^{B}(t)
\label{qltdef}$$ where $N$ is the number of spins in the system and $N_{l}$ the number of links; spins $i$ and $j$ are linked, as denoted by $ij$. We will indicate means taken over time by $\langle\cdots\rangle$.
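As an illustration, the two overlap definitions can be evaluated directly from a pair of replica configurations. The sketch below (Python/NumPy; a hypothetical helper, not the production code of this study) implements Eqs. \[qtdef\] and \[qltdef\] on a $d$-dimensional hypercubic lattice with periodic boundaries.

```python
import numpy as np

def overlaps(A, B):
    """Instantaneous spin overlap q(t) and link overlap q_l(t).

    A, B : +-1 integer arrays of identical shape, one spin configuration
    per replica, on a d-dimensional hypercubic lattice with periodic
    boundary conditions.
    """
    N = A.size
    q = (A * B).sum() / N                # Eq. [qtdef]
    d = A.ndim
    Nl = d * N                           # one link per site per axis
    ql = 0.0
    for axis in range(d):
        # S_i S_j along every near-neighbor link in this lattice direction
        linkA = A * np.roll(A, 1, axis=axis)
        linkB = B * np.roll(B, 1, axis=axis)
        ql += (linkA * linkB).sum()
    return q, ql / Nl                    # Eq. [qltdef]
```

Identical replicas give $q = q_{\ell} = 1$; globally flipped replicas give $q=-1$ but $q_{\ell}=1$, since the link overlap is invariant under a global spin flip of either replica.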
Equilibration and measurement runs were performed by standard heat bath updating on sites selected at random. The spin systems started from a random configuration, i.e. at infinite temperature, and were gradually cooled and equilibrated until they reached their designated temperatures, where they received a longer equilibration. For example, for $L=10$ the systems at $\beta=0.114$ saw roughly $10^6$ sweeps before any measurements took place, and the smaller systems at least $10^7$ sweeps. Several sweeps were made between successive measurements; with a flip rate of about $27\%$ near $\beta_c$, this corresponds to at least four effective sweeps between measurements. For $L=10$ about $10^7$ measurements were recorded at each temperature near $\beta_c$, and considerably more for the smaller systems.
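A minimal sketch of the heat-bath update described above, assuming $J=1$ and random site selection (the function name and array layout are illustrative, not the code used for the reported runs):

```python
import numpy as np

def heat_bath_sweep(S, beta, rng):
    """One sweep (N single-site updates) of heat-bath updating at randomly
    selected sites, J = 1, on a d-dimensional periodic hypercubic lattice.

    S : +-1 integer array, modified in place and returned.
    """
    N = S.size
    d = S.ndim
    shape = S.shape
    for _ in range(N):
        site = tuple(rng.integers(0, L) for L in shape)
        # local field: sum over the 2d near neighbors
        h = 0
        for axis in range(d):
            for step in (-1, 1):
                nb = list(site)
                nb[axis] = (nb[axis] + step) % shape[axis]
                h += S[tuple(nb)]
        # heat-bath rule: draw the spin from its conditional equilibrium
        # distribution, independent of its current value
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
        S[site] = 1 if rng.random() < p_up else -1
    return S
```

Unlike Metropolis, the heat-bath rule resamples the spin from its exact conditional distribution, so the acceptance bookkeeping disappears at the cost of one exponential per update.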
Link overlap
============
For any standard near neighbor Ising ferromagnet with all interactions identical and with periodic boundary conditions, there is a simple rule for the mean link overlap in equilibrium $\langle
q_{\ell}(\beta,L)\rangle$. If $p_{s}(\beta,L)$ is the probability averaged over time that any given bond is satisfied, then by definition the mean energy per bond satisfies $$|U(\beta,L)| \equiv 2p_{s}(\beta,L)-1.$$ Because all bonds are equivalent, $$\langle q_{\ell}\rangle(\beta,L) = p_{s}^2 + (1-p_{s})^2 - 2p_{s}(1-p_{s}) = \left(2p_{s}-1\right)^2 \equiv U(\beta,L)^2.$$ This rule is exact at all temperatures (we have checked this numerically), so it would appear at first glance that link overlap measurements present no interest in a simple ferromagnet, as they contain no more information than the energy. However, the moments of the link overlap distribution reflect the structure of the temporary spin clusters which build up in the paramagnetic state before $\beta_c$, and the domain structure in the ferromagnetic state beyond $\beta_c$. Thus if at some instant $t$ a cluster of parallel spins exists in replica $A$ and a similar cluster in the same part of space exists in replica $B$, then the instantaneous $q_{\ell}(t)$ will be significantly higher than the time average $\langle
q_{\ell}(t)\rangle$. The width of the overall distribution $Q(q_{\ell})$ increases rapidly on the approach to $\beta_c$ and we find phenomenologically that, as a consequence of the repeated occurrence of such cluster coincidences, around the critical temperature the distributions do not remain simple Gaussians but develop excess kurtosis and skewness, even though to the naked eye these deviations from pure Gaussian distributions are not obvious; for instance, no secondary peaks appear in the distributions.
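The identity $\langle q_{\ell}\rangle = U^2$ can be verified by exact enumeration on a toy system. The sketch below (hypothetical helper names) does so on a small periodic ring, the simplest near-neighbor lattice in which all bonds are equivalent; for two independent equilibrium replicas the overlap average factorizes, $\langle q_{\ell}\rangle = \tfrac{1}{N_l}\sum_{ij}\langle S_iS_j\rangle^2$, which equals $U^2$ when every bond carries the same correlation.

```python
from itertools import product
from math import exp

def check_ql_identity(N=4, beta=0.3):
    """Exact enumeration of <q_l> and U^2 on an N-site periodic Ising ring."""
    bonds = [(i, (i + 1) % N) for i in range(N)]
    states = list(product((-1, 1), repeat=N))
    # Boltzmann weights, J = 1
    w = {s: exp(beta * sum(s[i] * s[j] for i, j in bonds)) for s in states}
    Z = sum(w.values())
    # per-bond correlation <S_i S_j>; identical on every bond by symmetry
    corr = [sum(w[s] * s[i] * s[j] for s in states) / Z for i, j in bonds]
    U = sum(corr) / len(bonds)              # |energy per bond|
    # two independent replicas factorize: <q_l> = mean over bonds of corr^2
    ql = sum(c * c for c in corr) / len(bonds)
    return ql, U * U
```

Because all bonds are equivalent, the mean of the squared correlations equals the square of the mean, and the two returned values coincide to machine precision.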
We exhibit in Figures 1 to 4 data at sizes $L=4,6, 8$ and $10$ for the Q-variance $$Q_{\mathrm{var}}(\beta,L) = \left\langle\left(q_{\ell}-\langle q_{\ell}\rangle\right)^2\right\rangle,
\label{Qvar}$$ the Q-kurtosis $$Q_{k}(\beta,L) =
\frac{
\left\langle\left(q_{\ell}-\langle q_{\ell}\rangle\right)^4\right\rangle
}{
\left\langle\left(q_{\ell}-\langle q_{\ell}\rangle\right)^2\right\rangle^2
}
\label{Qkurt}$$ and the Q-skewness $$Q_{s}(\beta,L) =
\frac{
\left\langle\left(q_{\ell}-\langle q_{\ell}\rangle\right)^3\right\rangle
}{
\left\langle\left(q_{\ell}-\langle q_{\ell}\rangle\right)^2\right\rangle^{3/2}
}
\label{Qskew}$$
The three Q moments and moment ratios follow the standard definitions for the moments of a distribution. For the Q-variance we plot $\log(Q_{\mathrm{var}}(\beta,L)-1)$ so as to display the entire range of data. Fig. \[fig:1\] shows the behavior of the Q-variance from high temperature through $\beta_c$ to $\beta = 0.13$, well into the ordered state. Fig. \[fig:2\] shows the same data in the region near $\beta_c$.
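The three quantities of Eqs. \[Qvar\], \[Qkurt\] and \[Qskew\] follow directly from a recorded time series of $q_{\ell}(t)$; a minimal sketch (illustrative helper, not the analysis code of this study):

```python
import numpy as np

def q_moments(ql_series):
    """Q-variance, Q-kurtosis and Q-skewness of a link overlap time series,
    following the standard central-moment definitions of the text."""
    x = np.asarray(ql_series, dtype=float)
    dev = x - x.mean()
    m2 = np.mean(dev ** 2)                  # Q_var
    Qk = np.mean(dev ** 4) / m2 ** 2        # kurtosis (Gaussian value 3)
    Qs = np.mean(dev ** 3) / m2 ** 1.5      # skewness (symmetric value 0)
    return m2, Qk, Qs
```

For a Gaussian-distributed series this returns $Q_k \approx 3$ and $Q_s \approx 0$, the off-critical baselines seen in Figs. \[fig:3\] and \[fig:4\].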
![(Color online) The Q-variance, Eq. \[Qvar\], as a function of size and inverse temperature for the $5$d near neighbor ferromagnet. In this and all the following figures the convention for indicating size is: $L=4$, blue triangles; $L=6$, red circles; $L=8$, black squares; $L=10$, pink inverted triangles. In this and following figures errors are smaller than the size of the points unless stated otherwise. The red vertical line indicates the inverse ordering temperature $\beta_{c}=
0.113915$.](flink_fig1.eps){width="3.5in"}
\[fig:1\]
![(Color online) The Q-variance as in Fig. \[fig:1\], in the region of the inverse ordering temperature $\beta_c$. Sizes coded as in Fig. \[fig:1\]. ](flink_fig2.eps){width="3.5in"}
\[fig:2\]
It can be seen that the Q-variance has clear critical behavior. Just as for standard “phenomenological couplings” such as the Binder cumulant or the correlation length ratio $\xi(\beta,L)/L^{5/4}$ (in the $5$d case [@jones:05]), it is size independent at $\beta_c$ to within a finite size correction. As the Q-variance is a phenomenological coupling in this particular system, it can be expected to have a similar form as a function of temperature in any ferromagnet, with the appropriate finite size correction exponent. This makes the Q-variance a supplementary phenomenological coupling for ferromagnets in general. As $q_{\ell}$ is a near-neighbor measurement like the energy, the distribution $Q(q_{\ell})$ tends to equilibrate faster on annealing than a parameter such as the correlation length $\xi$.
![(Color online) The Q-kurtosis, Eq. \[Qkurt\], for the $5$d near neighbor ferromagnet as a function of size and inverse temperature. Sizes coded as in Fig. \[fig:1\]. ](flink_fig3.eps){width="3.5in"}
\[fig:3\]
![(Color online) The Q-skewness, Eq. \[Qskew\], for the $5$d near neighbor ferromagnet as a function of size and inverse temperature. Sizes coded as in Fig. \[fig:1\]. ](flink_fig4.eps){width="3.5in"}
\[fig:4\]
The Q-kurtosis has a more unusual form, Fig. \[fig:3\]. At temperatures well above or well below the critical temperature it takes up the Gaussian value $Q_{k}(\beta)= 3$, but near criticality there is an excess Q-kurtosis peak corresponding to a “fat tailed” form of the link overlap distribution. With increasing $L$ the width and strength of the peak decrease and the peak position $\beta_{\mathrm{max}}(Q_{k})$ approaches $\beta_c$. In the present $5$d ferromagnet case the form of the temperature dependence evolves with $L$: with increasing $L$ it tends from a simple peak towards a peak followed by a dip.
The Q-skewness, Fig. \[fig:4\], resembles the Q-kurtosis plot. The Q-skewness starts at $0$ (a symmetric distribution) at $\beta=0$ and then develops a strong positive peak as a function of $\beta$ in the region of $\beta_c$ (so a distribution $Q(q_{\ell})$ tilted towards high $q_{\ell}$). Again the width and the strength of the peak decrease with increasing $L$. There is a weak indication of the beginning of a dip beyond the peak. The Q-kurtosis and Q-skewness can be expected to show qualitatively the same critical peak form in any ferromagnet.
For the moment these observations are essentially phenomenological; it would be of interest to go beyond the argument given above in terms of correlated clusters of spins so as to obtain a full quantitative explanation for the details of the critical behavior of the $Q(q_{\ell})$ distribution and its moments in finite $L$ samples. Link overlap moment peaks in ISGs resemble these ferromagnet results [@lundow:12] implying that the peak structure is a very general qualitative form of the behavior of link overlap distributions at an Ising magnet critical point.
The link overlap can be defined for vector spins [@katzgraber:02] by $$q_{\ell} = \frac{1}{N_l}\sum_{ij}[({\bf S}^{A}_{i} \cdot {\bf S}^{A}_{j})({\bf S}^{B}_{i} \cdot {\bf S}^{B}_{j})]
\label{qlvec}$$ which is invariant under global symmetry operations; the same link overlap critical properties as seen in Ising systems may well exist in $XY$ and Heisenberg magnets also.
Spin overlap
============
As for the link overlap, one can also define various moments and moment ratios of the spin overlap distribution such as the P-variance $$P_{\mathrm{var}}(\beta,L) = \langle q^2\rangle
\label{Pvar}$$ and the P-kurtosis $$P_{k}(\beta,L) = \langle q^4\rangle\big/\langle q^2\rangle^2
\label{Pkurt}$$ which is simply related to the Binder-like P-cumulant $P_{b}(\beta,L)
= \left(3-P_{k}(\beta,L)\right)/2$.
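A corresponding sketch for the spin overlap moments, with the moments taken about $q=0$ as the symmetry of $P(q)$ dictates (helper name illustrative):

```python
import numpy as np

def p_moments(q_series):
    """P-variance, P-kurtosis and the Binder-like P-cumulant of a spin
    overlap time series; moments are taken about q = 0."""
    q = np.asarray(q_series, dtype=float)
    Pvar = np.mean(q ** 2)                  # Eq. [Pvar]
    Pk = np.mean(q ** 4) / Pvar ** 2        # Eq. [Pkurt]
    Pb = (3.0 - Pk) / 2.0                   # Binder-like cumulant
    return Pvar, Pk, Pb
```

The limiting cases bracket the behavior seen in Fig. \[fig:6\]: a symmetric two-delta distribution at $q = \pm 1$ gives $P_k = 1$ (deep ordered phase), while a narrow Gaussian about $q=0$ gives $P_k \approx 3$ (high temperature).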
![(Color online) The P-variance Eq. \[Pvar\] for the $5$d near neighbor ferromagnet as a function of size and inverse temperature. Sizes coded as in Fig. \[fig:1\]. ](flink_fig5.eps){width="3.5in"}
\[fig:5\]
![(Color online) The P-kurtosis Eq. \[Pkurt\] for the $5$d near neighbor ferromagnet as a function of size and inverse temperature. Sizes coded as in Fig. \[fig:1\]. ](flink_fig6.eps){width="3.5in"}
\[fig:6\]
The P-variance in the ferromagnet has the phenomenological coupling form, Fig. \[fig:5\]; at $\beta_c$ the P-variance tends to an $L$ independent value with a finite size correction. As the temperature goes to zero the P-variance will tend to $L^d$. The P-kurtosis has a different phenomenological coupling form, a peak before a sharp drop to $1$, corresponding to the $P(q)$ distribution becoming “fat tailed” on the approach to and at $\beta_c$, before taking on a two-peak structure in the ordered state [@berg:02]. This is in contrast to the magnetization M-kurtosis (usually expressed as the Binder cumulant) in ferromagnets, or the standard P-kurtosis in ISGs, both of which drop regularly from the Gaussian value $3$ towards the two-peak value $1$ with increasing order. However, it can be noted that the temperature variation of the kurtosis for the chiral order parameter in Heisenberg spin glasses has the same general form as the present ferromagnetic P-kurtosis, the distribution becoming fat tailed above the ordering temperature [@hukushima:05].
As the distributions $P(q)$ in equilibrium are by definition symmetrical about $q=0$, the P-skewness is always zero. However, for the one-sided distribution of the absolute value $|q|$, other parameters can be defined, in particular the absolute P-kurtosis $$P_{|q|,k}(\beta,L) =
\frac{
\left\langle\left(|q|-\langle |q|\rangle\right)^4\right\rangle
}{
\left\langle\left(|q|-\langle |q|\rangle\right)^2\right\rangle^2
}
\label{Pabskurt}$$ and the absolute P-skewness $$P_{|q|,s}(\beta,L) =
\frac{
\left\langle\left(|q|-\langle |q|\rangle\right)^3\right\rangle
}{
\left\langle\left(|q|-\langle |q|\rangle\right)^2\right\rangle^{3/2}
}
\label{Pabssk}$$
The absolute P-kurtosis and the absolute P-skewness in the ferromagnet have rather complex phenomenological coupling temperature dependence patterns, with very weak finite size corrections at $\beta_c$, Figs. \[fig:7\], and \[fig:8\]. If the weak finite size correction is a general property for these parameters, it could be usefully exploited so as to obtain high precision estimates of ordering temperatures in systems where these temperatures are not well known.
The spin overlap properties do not transport from the ferromagnet into ISG systems in the same manner as the link overlap properties do, because the P-variance takes on a different status: in an ISG, $q^2$ becomes the order parameter.
![(Color online) The absolute P-kurtosis $P_{|q|,k}$, Eq. \[Pabskurt\], for the $5$d near neighbor ferromagnet as a function of size and inverse temperature. Sizes coded as in Fig. \[fig:1\]. ](flink_fig7.eps){width="3.5in"}
\[fig:7\]
![(Color online) The absolute P-skewness $P_{|q|,s}$, Eq. \[Pabssk\], for the $5$d near neighbor ferromagnet as a function of size and inverse temperature. Sizes coded as in Fig. \[fig:1\]. ](flink_fig8.eps){width="3.5in"}
\[fig:8\]
Conclusion
==========
The standard near neighbor interaction Ising ferromagnet on a simple \[hyper\]cubic lattice in dimension five has been used as a test case in order to demonstrate the critical form of the temperature variations of observables related to the link overlap and the spin overlap, parameters developed in the ISG context and not generally recorded in ferromagnet simulations. The moments of the link and spin overlap distributions $Q(q_\ell)$ and $P(q)$ show a rich variety of temperature variations, with specific critical behaviors. The temperature dependence of the link overlap kurtosis and the link overlap skewness show peaks at criticality which are “evanescent” in the sense that they will disappear in the large-size thermodynamic limit. The present results validate the assumption that these observables show true critical temperature dependencies. A temporary cluster phenomenon is proposed as determining the evolution of the link overlap distributions. If correct, this mechanism is quite general, so we expect the critical peaks in the Q-kurtosis and the Q-skewness to be present in the entire class of Ising ferromagnets, not only those in dimensions above the ucd, and plausibly in vector spin ferromagnets also. Beyond the class of ferromagnets, it can be noted that link overlap Q-kurtosis and Q-skewness critical peaks have also been observed [@lundow:12] in Ising Spin Glasses.
[99]{} J. L. Jones and A. P. Young, Phys. Rev. B [**71**]{}, 174438 (2005). B. Berche, C. Chatelain, C. Dhall, R. Kenna, R. Low, and J.-C. Walter, J. Stat. Mech. [**2008**]{}, P11010. P. H. Lundow and K. Markström, Nucl. Phys. B [**845**]{}, 120 (2011). P. Butera and M. Pernici, Phys. Rev. E [**86**]{}, 011139 (2012). A. J. Guttmann, J. Phys. A [**14**]{}, 233 (1981). E. Brézin, J. Phys. (France) [**43**]{}, 15 (1982). B. A. Berg, A. Billoire, and W. Janke, Phys. Rev. E [**66**]{}, 046122 (2002). S. Caracciolo, G. Parisi, S. Patarnello, and N. Sourlas, J. Phys. (Paris) [**51**]{}, 1877 (1990). K. Hukushima and H. Kawamura, Phys. Rev. B [**72**]{}, 144416 (2005). P. H. Lundow and I. A. Campbell, unpublished. H. G. Katzgraber and A. P. Young, Phys. Rev. B [**65**]{}, 214401 (2002).
---
abstract: 'We describe the design and assembly of the LUX-ZEPLIN experiment, a direct detection search for cosmic WIMP dark matter particles. The centerpiece of the experiment is a large liquid xenon time projection chamber sensitive to low energy nuclear recoils. Rejection of backgrounds is enhanced by a Xe skin veto detector and by a liquid scintillator Outer Detector loaded with gadolinium for efficient neutron capture and tagging. LZ is located in the Davis Cavern at the 4850’ level of the Sanford Underground Research Facility in Lead, South Dakota, USA. We describe the major subsystems of the experiment and its key design features and requirements.'
bibliography:
- 'LZNIM.bib'
title: 'The LUX-ZEPLIN (LZ) Experiment'
---
Acknowledgements
================
This work was partially supported by the U.S. Department of Energy (DOE) Office of Science under contract number DE-AC02-05CH11231 and under grant number DE-SC0019066; by the U.S. National Science Foundation (NSF); by the U.K. Science & Technology Facilities Council under award numbers ST/M003655/1, ST/M003981/1, ST/M003744/1, ST/M003639/1, ST/M003604/1, and ST/M003469/1; by the Portuguese Foundation for Science and Technology (FCT) under award number PTDC/FIS-PAR/28567/2017; and by the Institute for Basic Science, Korea (budget number IBS-R016-D1). University College London and Lawrence Berkeley National Laboratory thank the U.K. Royal Society for travel funds under the International Exchange Scheme (IE141517). We acknowledge additional support from the Boulby Underground Laboratory in the U.K.; the University of Wisconsin for grant UW PRJ82AJ; and the GridPP Collaboration, in particular at Imperial College London. This work was partially enabled by the University College London Cosmoparticle Initiative. Furthermore, this research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The University of Edinburgh is a charitable body, registered in Scotland, with the registration number SC005336. The research supporting this work took place in whole or in part at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. The assistance of SURF and its personnel in providing physical access and general logistical and technical support is acknowledged. SURF is a federally sponsored research facility under Award Number DE-SC0020216.
---
abstract: 'For each projective or affine geometry $N$ over a prime field $\bF$, we give a best-possible upper bound on the number of elements in a simple $\bF$-representable matroid $M$ of sufficiently large rank with no $N$-minor. We also characterize all $M$ of sufficiently large rank for which equality holds.'
author:
- Peter Nelson
- Zachary Walsh
date: May 2016
title: The extremal function for geometry minors of matroids over prime fields
---
Introduction
============
We prove the following theorem.
\[mainsimple\] Let $t \in \nni$. If $M$ is a simple binary matroid of sufficiently large rank with no $\PG(t+2,2)$-minor, then $$|M| \le 2^t\tbinom{r(M)-t+1}{2} + 2^t-1.$$
We also show that this bound is best-possible, and characterize the unique example where equality holds for each large $n$. In fact, we solve the analogous problems for excluding arbitrary projective and affine geometries over any prime field.
For a class $\cM$ of matroids, let $h_{\cM}(n)$ denote the maximum number of elements in a simple matroid in $\cM$ of rank at most $n$. (If $\cM$ is a nonempty subclass of the $\GF(q)$-representable matroids then $h_{\cM}(n)$ is always defined, with $h_{\cM}(n) \le \tfrac{q^n-1}{q-1}$ for all $n$.) We call $h_{\cM}$ the *extremal function* of $\cM$. This is often referred to as the *size* or *growth rate* function; our terminology here is an attempt to agree with the broader combinatorics literature. A simple matroid $M \in \cM$ with $|M| = h_{\cM}(r(M))$ is *extremal* in $\cM$. A nonempty class $\cM$ of matroids is *minor-closed* if it is closed under both minors and isomorphism. The following is a simplified version of the growth rate theorem of Geelen, Kung and Whittle \[\[gkw09\]\].
If $\bF$ is a prime field and $\cM$ is a proper minor-closed subclass of the $\bF$-representable matroids, then either
- there is some $\alpha \in \nni$ such that $h_{\cM}(n) \le \alpha n$ for all $n$, or
- $\cM$ contains all graphic matroids and there is some $\alpha \in \nni$ such that $\binom{n+1}{2} \le h_{\cM}(n) \le \alpha n^2$ for all $n$.
We call classes of the latter type *quadratic*. Our main results go some way towards the difficult problem of classifying the extremal functions of quadratic classes of representable matroids exactly. The proofs make essential use of a deep structure theorem of Geelen, Gerards and Whittle that is stated in \[\[ggwstructure\]\], whose proof has yet to appear in full.
To state our results, we first introduce the terminology used to naturally describe the extremal functions and the extremal matroids which will occur. Fix a finite field $\bF$ and a subgroup $\Gamma$ of the multiplicative group $\bF^*$. The *weight* of a vector is its number of nonzero entries. A *unit vector* is a weight-$1$ vector whose nonzero entry is $1$. A *$\Gamma$-frame matrix* is an $\bF$-matrix in which each column is either a weight-$0$ vector, a unit vector, or a weight-$2$ vector of the form $\gamma e_j - e_i$ for some $\gamma \in \Gamma$ and distinct unit vectors $e_i$ and $e_j$. A matroid represented by a $\Gamma$-frame matrix is a *$\Gamma$-frame matroid.* The class of $\Gamma$-frame matroids is well-known to be minor-closed; see \[\[zaslav\]\] for a comprehensive reference.
Write $\cG(\Gamma)$ for the class of $\Gamma$-frame matroids, and $\cG(\Gamma)^t$ for the class of matroids having a representation ${{{P} \brack {A}}}$ for some $\bF$-matrix $P$ with at most $t$ rows and some $\Gamma$-frame matrix $A$. (In this notation $\bF$ is implicit.) Note that $\cG(\{1\})^0$ is the class of graphic matroids. We will see that $\cG(\Gamma)^t$ is minor-closed, and has extremal function $f_{|\bF|,|\Gamma|,t}(n)$ defined by $$f_{|\bF|,|\Gamma|,t}(n) = |\bF|^t\left(|\Gamma|\tbinom{n-t}{2} + n-t\right) + \tfrac{|\bF|^t-1}{|\bF|-1}$$ for all $n \ge t$. We refer to this function $f_{q,g,t}(n)$, which is quadratic in $n$ with leading term $\tfrac{1}{2}gq^tn^2$, frequently throughout. For each $n \ge t$, there is a unique rank-$n$ extremal matroid $M$ in $\cG(\Gamma)^t$ given by $M \cong \si\left(M{{{P} \brack {A}}}\right)$, where ${{{P} \brack {A}}}$ includes all possible columns for which $P$ has $t$ rows and $A$ is a $\Gamma$-frame matrix with $n-t$ rows. We call this extremal matroid $\DG(n,\Gamma)^t$; it will be discussed later in more detail.
We can now fully state our results for excluding projective and affine geometries over all prime fields. The $p \le 3$ case differs from the general case; this is essentially because rank-$3$ projective/affine geometries can be binary/ternary frame matroids but are not frame matroids over larger fields.
\[maintwo\] Let $t \in \nni$ and $N$ be one of $\PG(t+2,2)$ or $\AG(t+3,2)$. If $\cM$ is the class of binary matroids with no $N$-minor, then $$h_{\cM}(n) = f_{2,1,t}(n) = 2^t\tbinom{n-t+1}{2} + 2^t-1$$ for all sufficiently large $n$. Moreover, if $M$ is extremal in $\cM$ and $r(M)$ is sufficiently large, then $M \cong \DG(r(M),\{1\})^t$.
\[mainthree\] If $t \in \nni$, the class $\cM$ of ternary matroids with no $\AG(t+2,3)$-minor satisfies $$h_{\cM}(n) = f_{3,2,t}(n) = 3^t(n-t)^2 + \tfrac{1}{2}(3^t-1)$$ for all sufficiently large $n$. Furthermore, if $M$ is extremal in $\cM$ and $r(M)$ is sufficiently large, then $M \cong \DG(r(M),\GF(3)^*)^t$.
\[mainodd\] Let $t \in \nni$ and $N$ be either $\PG(t+1,p)$ for some prime $p \ge 3$ or $\AG(t+1,p)$ for some prime $p \ge 5$. If $\cM$ is the class of $\GF(p)$-representable matroids with no $N$-minor, then $$h_{\cM}(n) = f_{p,(p-1)/2,t}(n) = p^t\left(\tfrac{p-1}{2}\tbinom{n-t}{2} + n-t\right) + \tfrac{p^t-1}{p-1}$$ for all sufficiently large $n$. Moreover, if $M$ is extremal in $\cM$ and $r(M)$ is sufficiently large, then $M \cong \DG(r(M),\Gamma)^t$, where $\Gamma$ is the index-$2$ subgroup of $\GF(p)^*$.
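As a lightweight bookkeeping check, the closed forms quoted in Theorems \[maintwo\] and \[mainthree\] agree term by term with $f_{q,g,t}(n)$ (via $\binom{m}{2}+m = \binom{m+1}{2}$ and $2\binom{m}{2}+m = m^2$). The short sketch below (hypothetical helper names; the constant term is taken with the sign used in the theorem statements) confirms the algebra numerically.

```python
from math import comb

def f(q, g, t, n):
    """Extremal function f_{q,g,t}(n) for the class G(Gamma)^t over a prime
    field of order q with |Gamma| = g, as in the introduction."""
    return q ** t * (g * comb(n - t, 2) + (n - t)) + (q ** t - 1) // (q - 1)

def f_binary(t, n):
    """Closed form of Theorem [maintwo]: q = 2, g = 1."""
    return 2 ** t * comb(n - t + 1, 2) + 2 ** t - 1

def f_ternary(t, n):
    """Closed form of Theorem [mainthree]: q = 3, g = 2."""
    return 3 ** t * (n - t) ** 2 + (3 ** t - 1) // 2
```

Both specializations match $f$ exactly for every $n \ge t$, as the identities above guarantee.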
Theorem \[maintwo\] was previously known only for $t=0$ and, in the case of projective geometries, $t = 1$ (see \[\[gvz\],\[heller\],\[kmpr\]\]); Theorems \[mainthree\] and \[mainodd\] were unknown for all $t$. They will all follow from a more general result, Theorem \[bigmain\]; we state a simplified version here.
\[simplifiedmain\] Let $\bF = \GF(p)$ be a prime field. If $\cM$ is a quadratic minor-closed class of $\bF$-representable matroids then there exists $\Gamma \le \bF^*$ and $t \in \nni$ such that $\cG(\Gamma)^t \subseteq \cM$ and $h_{\cM}(n) = f_{p,|\Gamma|,t}(n) + O(n)$. Furthermore, either
- for all sufficiently large $n$ we have $h_{\cM}(n) = f_{p,|\Gamma|,t}(n)$ and $\DG(n,\Gamma)^t$ is the unique extremal rank-$n$ matroid in $\cM$, or
- for all sufficiently large $n$, the class $\cM$ contains a simple rank-$n$ extension of $\DG(n,\Gamma)^t$.
Theorem \[bigmain\] is essentially the above with the latter outcome further refined. Theorem \[excludeN\], a corollary of Theorem \[bigmain\] that is also slightly too technical to state here, in fact determines the extremal function for the class of $\bF$-representable matroids with no $N$-minor for many different $N$; Theorems \[maintwo\], \[mainthree\] and \[mainodd\] are just the special cases where $N$ is a projective or affine geometry. Most of our material, when specialised to binary matroids, was originally proved in \[\[walsh\]\].
We also prove a result, Corollary \[strongstructure\], that facilitates the application of the aforementioned structure theorem of Geelen, Gerards and Whittle with no loss of generality by simplifying the notion of a ‘frame template’; while we cannot state the result here due to its inherent technicality, we expect it to be very useful in future work.
Preliminaries
=============
We use the notation of Oxley, and also write $|M|$ for $|E(M)|$ and $\elem(M)$ for $|\si(M)|$ for a matroid $M$. The rows and columns of matrices and the co-ordinates of vectors will always be indexed by sets, and thus have no inherent ordering. We write $0_A$ and ${\mathbf{1}_{A}}$ for the zero and all-ones vector in $\bF^A$ respectively, and $0_{A \times B}$ for the zero matrix in $\bF^{A \times B}$, and we identify $\bF^A \times \bF^B$ with $\bF^{A \cup B}$ for disjoint $A$ and $B$. For a matrix $P \in \bF^{A \times B}$, we write $P[A',B']$ for the submatrix with rows in $A'$ and columns in $B'$, and write $P[A']$ for $P[A',B]$ and $P[B']$ for $P[A,B']$ where there is no ambiguity. If $|A| = |B|$ but $A \ne B$ then the ‘determinant’ of a matrix $P \in \bF^{A \times B}$ is only defined up to sign, and identity matrices do not make sense, but nonsingularity and $P^{-1}$ (where it exists) are well-defined. We refer to any square matrix in $\bF^{A \times B}$ whose columns are distinct unit vectors as a *bijection matrix*.
Let $U \subseteq \bF^E$. For a vector $u \in U$ and a set $X \subseteq E$, we write $u[X]$ for the co-ordinate projection of $u$ onto $X$, and $U[X] = \{u[X]\colon u \in U\}$. For a set $\Gamma \subseteq \bF$ (typically a multiplicative subgroup), write $\Gamma U = \{\gamma u \colon u \in U, \gamma \in \Gamma\}$. For a matrix $P \in \bF^{E \times E}$ we denote $\{Pu\colon u \in U\}$ by $PU$. If $U$ and $W$ are additive subgroups of $\bF^E$ then we say $U$ and $W$ are *skew* if $U \cap W = \{0\}$, and if they are skew subspaces with $U + W = \bF^E$ then they are *complementary*; a pair of complementary subspaces gives rise to a well-defined projection map $\psi\colon \bF^E \to W$ for which $\psi(u+w) = w$ for all $u \in U$ and $w \in W$.
Represented Matroids {#represented-matroids .unnumbered}
--------------------
Most of our arguments involve manipulation of matrices; for this purpose we will use a formalised notion of a matroid representation. Let $\bF$ be a field and $E$ be a finite set. We say two subspaces $U_1$ and $U_2$ of $\bF^E$ are *projectively equivalent* if $U_1 = U_2D$ for some nonsingular diagonal matrix $D$. For a field $\bF$, we define an *$\bF$-represented matroid* to be a pair $(E,U)$ where $E$ is a finite set and $U$ is a subspace of $\bF^E$; two represented matroids $(E_1,U_1)$ and $(E_2,U_2)$ are equal if $E_1 = E_2$ and $U_1$ and $U_2$ are projectively equivalent, and are *isomorphic* if there is a bijection $\varphi \colon E_1 \to E_2$ such that $\{(u_{\varphi(e)}\colon e \in E_1)\colon u \in U_1\}$ is projectively equivalent to $U_2$.
A *representation* of $M$ is an $\bF$-matrix $A$ whose row space is projectively equivalent to $U$ (that is, its rowspace is $U$ after some set of nonzero column scalings); we write $M = M(A)$. For each $X \subseteq E$, we write $r(X)$ for the dimension of the subspace $U[X]$, or equivalently $\rank(A[X])$ for any representation $A$ of $M$. Note that $r(\cdot)$ is invariant under projective equivalence so is well-defined. The pair $\tilde{M} = (E,r)$ is a matroid in the usual sense, and we call this the *abstract matroid* associated with $M$; an abstract matroid $N$ is thus $\bF$-representable if and only if there is some $\bF$-represented matroid $M$ with $N = \tilde{M}$.
From here on we will be working with represented matroids exclusively, abbreviating them as just *matroids*. To be precise, we define $\cG(\Gamma)^t$ in this new context to be the class of $\bF$-represented matroids of the form $M{{{P} \brack {A}}}$ for some $\Gamma$-frame matrix $A$ and some matrix $P$ with at most $t$ rows.
The *dual* of a represented matroid $M = (E,U)$ is defined to be $M^* = (E,U^{\perp})$, and for a set $X \subseteq E$ we define $M \del X = (E-X,U[E-X])$ and $M \con X = (M^* \del X)^*$ and define minors of $M$ accordingly; these are well-defined, and agree with the usual notions of minors and duality in the abstract matroid. An *extension* of a represented matroid $M$ is a matroid $M^+$ such that $M^+ \del e = M$ for some $e \in E(M^+)$, or equivalently a matroid having a representation obtained from one of $M$ by appending a new column. Any invariant property or parameter of abstract matroids can easily be extended to represented matroids, and we define (co-)simplicity, (co-)simplification, the parameter $\elem(M)$, and the extremal function $h_{\cM}$ for a class $\cM$ of represented matroids in the obvious way. We remark that the authors of \[\[ggwstructure\]\] consider a finer notion of represented matroid in which projectively equivalent subspaces do not in general give equal matroids; this does not affect our use of their structure theorem, which is stated at the level of represented matroids in our sense.
Connectivity {#connectivity .unnumbered}
------------
Write $\lambda_M(A) = r_M(A) + r_M(E(M)-A) - r(M)$ for each $A \subseteq E(M)$. For $k \in \nni$, a matroid $M$ of rank at least $k$ is *vertically $k$-connected* if for every partition $(A,B)$ of $E(M)$ with $\lambda_M(A) < k-1$, either $A$ or $B$ is spanning in $M$. (This definition is somewhat nonstandard but equivalent to the usual one.) We require a theorem from \[\[gn\]\], which roughly states that the highly-connected matroids exemplify the densest members of any quadratic class. The version we state is both simplified and specialised to matroids over prime fields.
\[connreduction\] Let $\bF$ be a prime field and let $f(x)$ be a real quadratic polynomial with positive leading coefficient. If $\cM$ is a quadratic minor-closed class of $\bF$-represented matroids with $h_{\cM}(n) > f(n)$ for infinitely many $n \in \nni$, then for every $k \in \nni$ there is a vertically $k$-connected matroid $M \in \cM$ with $r(M) \ge k$ and $\elem(M) > f(r(M))$.
To obtain the equality characterisation in our main theorems as well as the bounds, we need a lemma that is a variant of the above.
\[equalityhc\] Let $\bF$ be a finite field and $f(x)$ be a real quadratic polynomial with positive leading coefficient, and let $k \in \nni$. If $\cM$ is a restriction-closed class of $\bF$-represented matroids and $h_{\cM}(n) = f(n)$ for all sufficiently large $n$, then for all sufficiently large $r$, every rank-$r$ matroid $M \in \cM$ with $\elem(M) = f(r)$ is vertically $k$-connected.
Say $f(x) = ax^2 + bx + c$ where $a,b,c \in \bR$ and $a > 0$. Set $n_0 \in \nni$ so that $n_0 \ge 2k+a^{-1}$, while $h_{\cM}(n) = f(n)$ for all $n \ge n_0$, and $f$ is increasing on $[n_0,\infty)$. Let $n_1 = \max(2n_0,f(k), \tfrac{1}{2a}(|\bF|^{n_0} + a-b))$; we show that every $M \in \cM$ with $r(M) \ge n_1$ and $\elem(M) = f(r(M))$ is vertically $k$-connected.
Let $M \in \cM$ satisfy $r(M) = r \ge n_1$ and $\elem(M) = f(r)$. If $M$ is not vertically $k$-connected, then there is a partition $(A,B)$ of $E(M)$ for which $1 \le r_M(A) \le r_M(B) \le r-1$ and $r \le r_M(A) + r_M(B) < r + k$. Let $r_B = r_M(B)$ and $r_A = r_M(A)$; note that $r_B \ge \tfrac{r}{2} \ge n_0$ so we have $\elem(M|B) \le f(r_B) \le f(r-1) = a(r-1)^2 + b(r-1) + c$.
If $r_A \le n_0$ then $\elem(M|A) < |\bF|^{n_0}$ so $$\begin{aligned}
ar^2 + br + c &= \elem(M) \\
&\le \elem(M|A) + \elem(M|B) \\
&< |\bF|^{n_0} + a(r-1)^2 + b(r-1) + c.
\end{aligned}$$ This implies that $r < \tfrac{1}{2a}\left(|\bF|^{n_0} +a-b\right) \le n_1$, a contradiction.
If $r_A > n_0$ then $\elem(M|A) \le f(r_A)$ and we have $$\begin{aligned}
f(r) &\le \elem(M|A) + \elem(M|B) \\
&\le a(r_A^2 + r_B^2) + b(r_A + r_B) + 2c\\
&= a(r_A + r_B)^2 + b(r_A + r_B) + 2c - 2ar_Ar_B\\
&< a(r+k)^2 + b(r+k) + 2c - 2an_0 (\tfrac{r}{2})\\
&= f(r) + f(k) + ra(2k-n_0),
\end{aligned}$$ where we use $r_A \ge n_0$ and $r_B \ge \tfrac{r}{2}$. This gives $ra(n_0-2k) < f(k)$ so, using $n_0 \ge 2k+a^{-1}$, we have $r < f(k) \le n_1$, again a contradiction.
Frame matroids and extensions {#dowlingsection}
=============================
In this section we define the extremal matroids in classes $\cG(\Gamma)^t$, and consider certain slightly larger classes. Let $\bF$ be a finite field, $\Gamma$ be a subgroup of $\bF^*$ and $n \in \nni$. Let $B_0$ be an $n$-element set and $b_1, \dotsc, b_n$ be the unit vectors in $\bF^{B_0}$. Let $$W(n) = \{b_i \colon i \in [n]\} \cup \{-b_i + \gamma b_j\colon i,j \in [n], i < j, \gamma \in \Gamma\},$$ and $A \in \bF^{B_0 \times E}$ be a matrix whose set of columns is $W(n)$. We write $\DG(n,\Gamma)$ for any matroid isomorphic to $M(A)$ (this is a *Dowling geometry* over $\Gamma$), and call any matrix obtained from such an $A$ by column scalings a *standard* representation for $\DG(n,\Gamma)$. Given any $\Gamma$-frame matrix $A' \in \bF^{B \times F}$ with $\rank(A') = n$, we can remove redundant rows, rename rows, and rescale columns to obtain a matrix whose columns are all in $W(n)$; it follows that any rank-$n$ extremal matroid in $\cG(\Gamma)$ is isomorphic to $\DG(n,\Gamma)$.
For each $t \in \nni$ and $n \ge t$, let $X$ be a $t$-element set and $B_0$ be an $(n-t)$-element set, and $A^t \in \bF^{(B_0 \cup X) \times E}$ be a matrix whose set of columns is $$W^t(n) = (\bF^X \times W(n-t)) \cup (U \times \{0_{B_0}\}),$$ where $U$ is a maximal set of pairwise non-parallel nonzero vectors in $\bF^X$. We write $\DG(n,\Gamma)^t$ for any matroid isomorphic to $M(A^t)$ for such an $A^t$, and call any matrix obtained from such an $A^t$ by column scalings a *standard* representation of $\DG(n,\Gamma)^t$. It is not hard to check that, given a standard representation, rescaling rows in $B_0$ by elements of $\Gamma$ yields another standard representation. Moreover, given any rank-$n$ matroid $M \in \cG(\Gamma)^t$ with $n \ge t$, we can remove/append/rename rows then rescale columns to find a representation for $M$ whose columns are all in $W^t(n)$; therefore every rank-$n$ extremal matroid in $\cG(\Gamma)^t$ is isomorphic to $\DG(n,\Gamma)^t$. We can thus determine the extremal function $h_{\cG(\Gamma)^t}(n) = |W^t(n)|$; we have $|W(n)| = |\Gamma|\binom{n}{2} + n$ and so $$h_{\cG(\Gamma)^t}(n) = |W^t(n)| = |\bF|^t |W(n-t)| + \tfrac{|\bF|^t-1}{|\bF|-1} = f_{|\bF|,|\Gamma|,t}(n),$$ which justifies our earlier claims.
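As a concrete instance of these counts: for $\bF = \GF(3)$, $\Gamma = \bF^* = \{1,2\}$, $t = 1$ and $n = 4$, we have $|W(3)| = 2\binom{3}{2} + 3 = 9$ and hence $$h_{\cG(\bF^*)^1}(4) = |W^1(4)| = 3 \cdot 9 + \tfrac{3-1}{3-1} = 28.$$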
The next lemma’s proof uses the fact that $\cG(\Gamma)$ is minor-closed.
\[findpg\] Let $\bF = \GF(q)$ be a finite field and $\Gamma \le \bF^*$. For all $t \in \nni$, the class $\cG(\Gamma)^t$ is minor-closed.
Let $M \in \cG(\Gamma)^t$, so there exist $P \in \bF^{T \times E}$ with $t$ rows and a $\Gamma$-frame matrix $Q \in \bF^{B \times E}$, such that $M = M(A)$ for $A = {{{P} \brack {Q}}}$. Let $e \in E$ be a nonloop of $M$. Clearly $M(A) \del e \in \cG(\Gamma)^t$. If $Q[e] = 0$ then we can perform row-operations within $P$ and remove a row of $P$ to contract $e$ and we have $M(A) \con e \in \cG(\Gamma)^{t-1} \subseteq \cG(\Gamma)^t$. If $Q[e] \ne 0$ then $A$ is row-equivalent to a matrix ${{{P'} \brack {Q}}}$ for which $P' \in \bF^{T \times E}$ satisfies $P'[e] = 0$. Then $M(A) \con e = M{{{P'} \brack {Q'}}}$ for some $\Gamma$-frame matrix $Q'$ that represents $M(Q) \con e$. Therefore $M(A) \con e \in \cG(\Gamma)^t$; it follows that $\cG(\Gamma)^t$ is minor-closed.
Extensions {#extensions .unnumbered}
----------
If $x \in \bF^* - \Gamma$, then let $\DG^{(x)}(n,\Gamma)^t$ denote a matroid of the form $M(A|w)$, where $A \in \bF^{(X \cup B_0) \times E}$ is a standard representation of $\DG(n,\Gamma)^t$ and $w$ is a vector for which $w[X] = 0$ and $w[B_0]$ has weight $2$ and has nonzero entries $-1$ and $x$; this is a frame matroid over some subgroup $\Gamma'$ properly containing $\Gamma$. One can check that if $x$ and $x'$ lie in the same coset of $\Gamma$ in $\bF^*$, then $\DG^{(x)}(n,\Gamma)^t$ and $\DG^{(x')}(n,\Gamma)^t$ are isomorphic. Let $\DG^{{\scalebox{0.4}{$\square$}}}(n,\Gamma)^t$ denote a matroid of the form $M(A|w)$, where $A$ is a standard representation of $\DG(n,\Gamma)^t$ and $w$ is the sum of three distinct unit vectors whose nonzero entries lie in $B_0$. From here on we write $\bF_p$ for the prime subfield of a finite field $\bF$.
\[primesubfield\] If $\Gamma$ is a subgroup of $\bF^*$ for some finite field $\bF$ and $\bF_p^* \not\subseteq \Gamma$, then $\DG^{{\scalebox{0.4}{$\square$}}}(n+1,\Gamma)$ has a $\DG^{(x)}(n,\Gamma)$-minor for some $x \in \bF_p^* - \Gamma$.
Let $[A|w] \in \bF^{(X \cup B_0) \times (E \cup \{e\})}$ be a representation of $\DG^{{\scalebox{0.4}{$\square$}}}(n+1,\Gamma)$ for which $A$ is a standard representation of $\DG(n+1,\Gamma)$ and $w = A[e]$ is the sum of three distinct unit vectors supported on $B_0$.
Let $r_1,r_2,r_3 \in B_0$ be the rows on which $w$ is nonzero. Let $E' \subseteq E$ be a set so that $A[r_1,E'] = 0$ and $A' = A[X \cup B_0 - r_1,E']$ is a standard representation of $\DG(n,\Gamma)$. If $-1 \notin \Gamma$, then contracting the unit column supported on $r_1$ and restricting to $E'$ yields a representation of a $\DG^{(-1)}(n,\Gamma)$-minor of $M$, as required. So we may assume that $-1 \in \Gamma$.
Since $\bF_p^* \not\subseteq \Gamma$ and $1 \in \Gamma$, there is some $\gamma \in \Gamma \cap \bF_p^*$ for which $\gamma \ne -1$ and $\gamma + 1 \notin \Gamma$. Consider a minor $M'$ of $M$ obtained by contracting a column $c$ of $A$ supported on $\{r_1,r_2\}$ for which $c[r_2] = -\gamma c[r_1]$, then restricting to $E'$. Now $M' = M(A'|w')$ where $w'$ has weight $2$ and has nonzero entries $1+\gamma$ and $1$. Thus $M' \cong \DG^{(-1-\gamma)}(n,\Gamma)$. If $-1-\gamma \notin \Gamma$ then the result holds; thus we may assume that $-1-\gamma \in \Gamma$ and so $(-1)(-1-\gamma) = 1 + \gamma \in \Gamma$, a contradiction.
\[dowlingextension\] Let $m \in \nni$, let $\bF$ be a finite field and $\Gamma$ be a subgroup of $\bF^*$. If $n \in \nni$ satisfies $n \ge |\bF|^2m + t + 3$ and $M$ is a simple rank-$n$ $\bF$-represented matroid that is an extension of $\DG(n,\Gamma)^t$, then either
- $M$ has a $\DG^{(x)}(m,\Gamma)^t$-minor for some $x \notin \Gamma$, or
- $\bF_p^* \subseteq \Gamma$ and $M$ has a $\DG^{{\scalebox{0.4}{$\square$}}}(m,\Gamma)^t$-minor.
Let $e$ satisfy $M \del e \cong \DG(n,\Gamma)^t$. Let $A \in \bF^{(X \cup B_0) \times E}$ be a standard representation of $\DG(n,\Gamma)^t$ for which $M = M(A|w)$ for some $w \in \bF^{X \cup B_0}$; since $w$ is not parallel to a column of $A$, we may assume that either $w[B_0]$ has weight at least $3$, or that $w[B_0]$ has weight $2$ and its two nonzero entries $\alpha$ and $\beta$ satisfy $-\alpha\beta^{-1} \notin \Gamma$. Let $r \in B_0$ be such that $w[r] \ne 0$; by adding multiples of $r$ to the rows in $X$ we obtain a matrix $[A'|w']$ row-equivalent to $[A|w]$ for which $A'[B_0] = A[B_0]$ and $w'[X] = 0$; since $M(A) = M(A')$ we see that $A'$ is also a standard representation of $\DG(n,\Gamma)^t$. We may thus assume that $w[X] = 0$.
If $w[B_0]$ has weight $2$, then we can scale $w$ to obtain a weight-two column $\wh{w}$ whose nonzero entries are $-1$ and $x = -\alpha\beta^{-1} \notin \Gamma$. Since $m \le n$, the matrix $[A|\wh{w}]$ has a submatrix $[A'|w'] \in \bF^{(B_0' \cup X) \times E'}$ for which $w'$ is a weight-$2$ subvector of $\wh{w}$, and $A'$ is a standard representation of $\DG(m,\Gamma)^t$. We have $M[A'|w'] \cong \DG^{(x)}(m,\Gamma)^t$. But all unit vectors in $\bF^{B_0 \cup X}$ are columns of $A$, so $M[A'|w']$ is a minor of $M$, giving the result.
Otherwise, let $R$ be a three-element subset of $B_0$ for which $w[R]$ has no zero entries. By the pigeonhole principle, there is a $p(m-2)$-element subset $B$ of $B_0-R$ for which $w[B]$ is constant; say all its entries are $\alpha \in \bF$. Let $(B^0,B^1, \dotsc, B^{p-1})$ be a partition of $B$ into equal-sized sets. There are disjoint $(m-2)$-element subsets $C^1, \dotsc, C^{p-1}$ of $E$ for which $A[X,C^i] = 0$ and $A[B^0 \cup B^i,C^i] = {{{-J^i} \brack {J^0}}}$ for each $i \in \{1, \dotsc, p-1\}$, where $J^i \in \bF^{B^i \times C^i}$ is a bijection matrix. Let $E_0 \subseteq E$ be a set for which $A[B^i,E_0] = 0$ for each $i \ge 1$, and the matrix $A_0 \in \bF^{(B^0 \cup R \cup X) \times E_0}$ is a standard representation of $\DG(m+1,\Gamma)^t$. By construction of the $B^i$ and $C^i$, we have $(M \con \cup_{i=1}^{p-1} C^i)|(E_0 \cup \{e\}) = M(A_0|w_0)$, where $w_0[R] = w[R]$ and each entry of $w_0[B^0]$ is the sum of $p$ copies of $\alpha$ so is zero; that is, $w_0$ has weight $3$. Let $M_0 = M(A_0|w_0)$.
Note that $r(M_0) = m+1$. Let $\beta_0,\beta_1$ and $\beta_2$ be the nonzero entries of $w_0$. We may assume by scaling that $\beta_0 = -1$. If $\beta_1 \notin \Gamma$ then removing the row containing $\beta_2$ yields a representation of a $\DG^{(\beta_1)}(m,\Gamma)^t$-minor of $M$, so we may assume that $\beta_1 \in \Gamma$ and, symmetrically, that $\beta_2 \in \Gamma$. By scaling the rows containing $\beta_1$ and $\beta_2$ by $\beta_1^{-1}$ and $\beta_2^{-1}$ respectively, we may assume that $\beta_1 = \beta_2 = 1$. If $-1 \notin \Gamma$ then removing the row containing $\beta_0$ yields a representation of a $\DG^{(-1)}(m,\Gamma)^t$-minor of $M$. If $-1 \in \Gamma$ then scaling the row containing $\beta_0$ by $-1$ yields a representation of $\DG^{{\scalebox{0.4}{$\square$}}}(m+1,\Gamma)^t$. The result now follows easily from Lemma \[primesubfield\].
For each $x$, let ${\cG(\Gamma)^{t}_{(x)}}$ denote the closure of $\{\DG^{(x)}(n,\Gamma)^t \colon n \ge t\}$ under minors and isomorphism, and define ${\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}}$ analogously. For each $d \in \nni$, let $\cG(\Gamma)^t_d$ denote the closure under minors of the class of matroids of the form $M[A | D]$, where $M(A) \in \cG(\Gamma)^t$ and $D$ has $d$ columns. It is clear by the definitions that $\cG(\Gamma)^t_1$ contains ${\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}}$ and ${\cG(\Gamma)^{t}_{(x)}}$ for all $x$. We will later require an easy lemma characterising matroids in $\cG(\Gamma)^t_d$.
\[extensionprojection\] For $t,d \in \nni$, each matroid in $\cG(\Gamma)^t_d$ is a minor of a matroid having a representation ${{{P_2} \brack {P_0}}}$, where $P_2$ has $t$ rows, and $P_0$ is a matrix for which there is a matrix $P_1$ with $d$ rows such that ${{{P_1} \brack {P_0}}}$ is row-equivalent to a $\Gamma$-frame matrix.
Let $N_0 \in \cG(\Gamma)^t_d$; we see that there is a matroid $N$ with an $N_0$-minor, $t$-element set $X$, a set $B$ and a $d$-element set $R \subseteq E(N)$ so that $N = M(A)$, where $A \in \bF^{(B \cup X) \times E(N)}$ is such that $A[B,E(N)-R]$ is a $\Gamma$-frame matrix; write $E = E(N)$. Let $A_N = A[B,E-R] \oplus I_R$. It is clear that $A_N[B \cup R]$ is a $\Gamma$-frame matrix, and that $A_N$ is row-equivalent to the matrix $A_N' = A_N + (A[R] \oplus 0_{R \times E})$. But $A[B] = A_N'[B]$, so $A[B]$ is obtained from a matrix row-equivalent to a $\Gamma$-frame matrix by removing $|R| = d$ rows. Since $A[X]$ has $t$ rows, the result follows with $P_2 = A[X]$ and $P_0 = A[B]$ and $P_1 = A_N'[R]$.
To derive our results about excluding geometries, we need to understand which projective and affine geometries belong to which $\cG(\Gamma)^t$, ${\cG(\Gamma)^{t}_{(x)}}$ and ${\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}}$. Given a subgroup $\Gamma$ of $\bF^*$ and $t \in \nni$, we write $(t,\Gamma) \preceq (t',\Gamma')$ if $(t,|\Gamma|)$ does not exceed $(t',|\Gamma'|)$ in the lexicographic order on $(\nni)^2$. This is equivalent to the statement that $|\bF|^t|\Gamma| \le |\bF|^{t'}|\Gamma'|$, since $|\Gamma| \le |\bF|-1 < |\bF|$ for every subgroup $\Gamma$ of $\bF^*$.
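To illustrate the order: over $\bF = \GF(5)$ we have $(0,\bF^*) \preceq (1,\{1\})$, in agreement with $$|\bF|^0\,|\bF^*| = 4 \le 5 = |\bF|^1\,|\{1\}|.$$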
In the next two lemmas, we use the fact that a simple $\bF$-represented matroid is a restriction of an affine geometry if and only if it has a representation $A$ for which $\row(A)$ contains a vector with no zero entries.
\[techtwo\] Let $\bF = \GF(2)$ and $t \in \nni$. If $N$ is one of $\AG(t+3,2)$ or $\PG(t+2,2)$, then $N \notin \cG(\{1\})^t$ but $N \in {\cG(\{1\})^{t}_{{\scalebox{0.4}{$\square$}}}}$ and $N \in \cG(\{1\})^{t'}$ for all $t' > t$.
Since ${\cG(\{1\})^{t}_{{\scalebox{0.4}{$\square$}}}}$ and $\cG(\{1\})^t$ are minor-closed and $\PG(t+2,2)$ is a minor of $\AG(t+3,2)$, it suffices to show that $\PG(t+2,2) \notin \cG(\{1\})^t$ and $\AG(t+3,2) \in {\cG(\{1\})^{t}_{{\scalebox{0.4}{$\square$}}}} \cap \cG(\{1\})^{t+1}$. The class $\cG(\{1\})^t$ has extremal function $f_{2,1,t}(n) = 2^t\tbinom{n-t+1}{2} + 2^t-1$. Now $f_{2,1,t}(t+3) = 7 \cdot 2^t - 1 < 2^{t+3}-1$, so $\PG(t+2,2) \notin \cG(\{1\})^t$ as required.
Let $K \cong K_{2,4}$ be the complete bipartite graph with bipartition $(\{1,2,3,4\},\{5,6\})$. Let $X \subseteq \bF^{[6]}$ be the set of columns of the incidence matrix of $K$ and $w \in \bF^{[6]}$ be the characteristic vector of $\{1,2,3,4\}$. Let $T$ be a $t$-element set, and $A$ be a matrix with row-set $[6] \cup T$ whose set of columns is $\bF^T \times X$. Let $w' = (0^T,w)$ and let $M$ be obtained from $M[A|w']$ by contracting column $w'$. Since the incidence matrix of $K$ has rank $5$, we can remove a redundant row from $A|w'$ to see that $M$ is a contraction of a restriction of $\DG^{{\scalebox{0.4}{$\square$}}}(t+5,\{1\})^t$, so $r(M) \le t+4$ and $M \in {\cG(\{1\})^{t}_{{\scalebox{0.4}{$\square$}}}}$.
By construction, no pair of columns of $A$ add to $w'$, so $M$ is simple with $|M| = 2^t|X| = 2^{t+3}$. Moreover, one can check that $M$ has a representation $A_0$ with row set $([6]-\{1\}) \cup T$ for which $A_0[T] = A[T]$ and $A_0[5] + A_0[6]$ is the all-ones vector. Therefore $M$ is a simple restriction of $\AG(t+3,2)$ with $2^{t+3}$ elements, from which it follows that $M \cong \AG(t+3,2)$, so $\AG(t+3,2) \in {\cG(\{1\})^{t}_{{\scalebox{0.4}{$\square$}}}}$.
Finally, consider a matrix $A'$ with row-set $[t+5]$ that contains as columns precisely the $v$ for which $v[[4]]$ is a column of the incidence matrix of the $4$-cycle $(1,2,3,4)$ with vertex set $[4]$. Clearly $\rank(A') = t+4$ and $M(A') \in \cG(\{1\})^{t+1} \subseteq \cG(\{1\})^{t'}$; moreover, $A'[2] + A'[4]$ is an all-ones vector, so $M(A')$ is a restriction of $\AG(t+3,2)$; since $A'$ has $2^{t+3}$ distinct columns it follows that $M(A') \cong \AG(t+3,2) \in \cG(\{1\})^{t'}$.
\[techthree\] Let $\bF = \GF(3)$ and $t \in \nni$. If $N \cong \AG(t+2,3)$ then $N \notin \cG(\bF^*)^t$ but $N \in {\cG(\bF^*)^{t}_{{\scalebox{0.4}{$\square$}}}}$ and $N \in \cG(\Gamma')^{t'}$ for all $(t',\Gamma') \succ (t,\bF^*)$.
If $A$ is a $\GF(3)$-representation of $\AG(m,3)$ for some $m \ge 2$, then removing a row of $A$ yields a representation of a matroid with an $\AG(m-1,3)$-restriction. If we had $N \in \cG(\bF^*)^t$ then we could thus remove $t$ rows from some representation of $N$ to obtain an $\bF^*$-frame matrix $A_0$ for which $M(A_0)$ has an $\AG(2,3)$-restriction, and so $\AG(2,3)$ is an $\bF^*$-frame matroid. But $|\AG(2,3)| = 9 = h_{\cG(\bF^*)}(3)$ so this implies that $\AG(2,3) \cong \DG(3,\bF^*)$. This is a contradiction as $\DG(3,\bF^*)$ contains a $4$-point line but $\AG(2,3)$ does not.
Define an $\bF^*$-frame matrix $Q$ with row-set $[4]$ by $$Q = \begin{array}{c|ccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\
2 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\
3 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\
4 & 0 & 0 & 2 & 0 & 1 & 2 & 1 & 1 & 2
\end{array}.$$
Let $T$ be a $t$-element set and let $b_1, \dotsc, b_4$ be the unit vectors corresponding to $1,\dotsc, 4$ in $\bF^{T \cup [4]}$. Let $X$ be the set of columns of $Q$ and let $A$ be a matrix whose column set is $\bF^T \times X$. This matrix has $3^{t+2}$ columns which are nonzero and pairwise non-parallel. Let $w = b_1 + b_2 + b_3$ and $M = M(A|w)$; clearly $M$ is a restriction of $\DG^{{\scalebox{0.4}{$\square$}}}(t+4,\bF^*)^t$. Note further that no two columns of $A$ span $w$, so the matroid $M_0$ obtained from $M$ by contracting column $w$ is simple with $r(M_0) \le t+3$. Furthermore, $M_0$ has a representation ${{{P} \brack {Q'}}}$ for some matrix $P$ with row-set $T$ and some $Q'$ with row-set $\{2,3,4\}$ in which the sum of rows $2$ and $3$ contains no zero entries. It follows that $M_0$ is a restriction of $\AG(t+2,3)$; since $|M_0| = 3^{t+2} = |\AG(t+2,3)|$, we thus have $M_0 \cong \AG(t+2,3) \cong N$ and so $N \in {\cG(\bF^*)^{t}_{{\scalebox{0.4}{$\square$}}}}$ as required.
If $(t',\Gamma') \succ (t,\bF^*)$, then $t' > t$; consider an $\bF$-matrix $A'$ with row-set $[t+3]$ containing as a column every vector $v$ for which $v[1] \ne v[2]$. Now $\si(M(A')) \cong \AG(t+2,3) = N$ and, since $A'[\{1,2\}]$ is a $\{1\}$-frame matrix up to column scalings, we have $N \in \cG(\{1\})^{t+1} \subseteq \cG(\Gamma')^{t'}$ as required.
\[techodd\] Let $t \in \nni$ and let $N$ be either $\PG(t+1,p)$ for some prime $p > 2$ or $\AG(t+1,p)$ for some prime $p > 3$. Let $\bF = \GF(p)$ and $\Gamma$ be the index-$2$ subgroup of $\bF^*$. Then $N \notin \cG(\Gamma)^t$ but $N \in {\cG(\Gamma)^{t}_{(x)}}$ for all $x \in \bF^* - \Gamma$ and $N \in \cG(\Gamma')^{t'}$ for all $(t',\Gamma') \succ (t,\Gamma)$.
The value of the extremal function of $\cG(\Gamma)^t$ at $n = t+2$ is $f_{p,(p-1)/2,t}(t+2) = p^t\left(\tfrac{p+3}{2}\right) + \tfrac{p^t-1}{p-1}$. If $p = 3$ then this expression is $\tfrac{p^{t+2}-1}{p-1} - p^t < |N|$. If $p \ge 5$ then, using $\tfrac{p+3}{2} \le p-1$, we have $f_{p,(p-1)/2,t}(t+2) < p^{t+1} = |N|$. Since $r(N) = t+2$ in either case, we have $N \notin \cG(\Gamma)^t$.
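For a quick sanity check of the second case: taking $p = 5$ and $t = 0$ gives $$f_{5,2,0}(2) = 5^0\left(\tfrac{5+3}{2}\right) + \tfrac{5^0-1}{5-1} = 4 < 5 = |\AG(1,5)|.$$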
It suffices for all $p$ to show that $\PG(t+1,p) \in {\cG(\Gamma)^{t}_{(x)}}$, since $\AG(t+1,p)$ is a restriction of $\PG(t+1,p)$. Let $[A|w] \in \bF^{(X \cup B_0) \times (E \cup \{e\})}$ be a representation of $M \cong \DG^{(x)}(t+3,\Gamma)^t$, where $A$ is a standard representation of $\DG(t+3,\Gamma)^t$ and $w = A[e]$ is a weight-$2$ vector supported on $B_0$ whose nonzero entries are $-1$ and $x$. Let $F \subseteq E$ be the set of columns of $A[B_0]$ whose support is contained in the support of $w$; we have $|F| = |\Gamma|+2 = \tfrac{p+3}{2}$. The lines of $M$ containing $e$ and more than one other point are the sets $L_u = \{u\}\times F$ for $u \in \bF^X$. For each $L_u$, contracting $e$ identifies the points in $L_u$; we thus lose $\tfrac{p+1}{2}$ points for each $L_u$, so $$\begin{aligned}
\elem(M \con e) &= \elem(M)-1 - \tfrac{p+1}{2} p^t\\
&= f_{p,(p-1)/2,t}(t+3) - \tfrac{p+1}{2} p^t \\
&= p^t\left(\tfrac{p-1}{2}\tbinom{3}{2} + 3\right) + \tfrac{p^t-1}{p-1} - \tfrac{p+1}{2} p^t \\
&= \tfrac{p^{t+2}-1}{p-1}.
\end{aligned}$$ So $\si(M \con e)$ is a rank-$(t+2)$ matroid in ${\cG(\Gamma)^{t}_{(x)}}$ with $\tfrac{p^{t+2}-1}{p-1}$ elements; it follows that $\si(M \con e) \cong \PG(t+1,p)$ as required.
Let $(t',\Gamma') \succ (t,\Gamma)$. If $t' = t$ then $\Gamma' = \bF^*$ and $N \in \cG(\Gamma')^{t'}$ follows from the fact that ${\cG(\Gamma)^{t}_{(x)}} \subseteq \cG(\bF^*)^t$. If $t' > t$, let $A'$ be an $\bF$-matrix with row-set $[t+2]$ containing as columns all nonzero vectors $v$ for which $v[1] \in \{0,1\}$; clearly $\si(M(A')) \cong \PG(t+1,p)$ has an $N$-restriction. Since $A'[1]$ is trivially a $\Gamma'$-frame matrix we thus have $N \in \cG(\Gamma')^{t+1} \subseteq \cG(\Gamma')^{t'}$.
Frame Templates
===============
Templates were introduced in \[\[ggwstructure\]\] as a means of precisely describing a class of matroids whose members are ‘close’ to being frame matroids. We make one simplification to the original definition: a set named ‘$D$’ is absorbed into ‘$Y_0$’, with no loss of generality, and the definition of ‘conforming’ is simplified accordingly; our definition is essentially identical to that given in \[\[gvz\]\]. For a field $\bF$, an *$\bF$-frame template* (hereafter just a *template*) is an $8$-tuple $\Phi = (\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)$, where
(i) $\Gamma$ is a subgroup of $\bF^*$,
(ii) $C,X,Y_0$ and $Y_1$ are disjoint finite sets,
(iii) $A_1 \in \bF^{X \times (Y_0 \cup Y_1 \cup C)}$, and
(iv) $\Delta$ and $\Lambda$ are additive subgroups of $\bF^{Y_0 \cup Y_1 \cup C}$ and $\bF^X$ respectively, and both are closed under scaling by elements of $\Gamma$.
(In the case where $\bF$ is a prime field, with which we are mostly concerned, both $\Delta$ and $\Lambda$ are sub*spaces*.) A template describes a class of matrices; we say a matrix $A' \in \bF^{B \times E}$ *respects* $\Phi$ if
(a) $X \subseteq B$ and $Y_0 \cup Y_1 \cup C \subseteq E$,
(b) $A_1 = A'[X, Y_0 \cup Y_1 \cup C]$,
(c) there is a set $Z \subseteq E-(Y_0 \cup Y_1 \cup C)$ such that $A'[X,Z] = 0$ and each column of $A'[B-X,Z]$ is a unit vector,
(d) each row of $A'[B-X,Y_0 \cup Y_1 \cup C]$ is in $\Delta$, and
(e) the matrix $A'[B,E - (Z \cup Y_0 \cup Y_1 \cup C)]$ has the form ${{{P} \brack {F}}}$, where each column of $P$ is in $\Lambda$, and $F$ is a $\Gamma$-frame matrix.
Whenever we define such an $A \in \bF^{B \times E}$, we implicitly name the set $Z \subseteq E$. The structure of a matrix respecting $\Phi$ is depicted below.
$$\begin{array}{c|c|c|c}
 & E - (Z \cup Y_0 \cup Y_1 \cup C) & Z & Y_0 \cup Y_1 \cup C\\\hline
X & \text{columns from } \Lambda & 0 & A_1\\\hline
B - X & \Gamma\text{-frame matrix} & \text{unit columns} & \text{rows from } \Delta
\end{array}$$
A matrix $A \in \bF^{B \times E}$ *conforms to $\Phi$* if there is a matrix $A' \in \bF^{B \times E}$ respecting $\Phi$ such that $A'[B,E-Z] = A[B,E-Z]$, and for each $z \in Z$ we have $A[z] = A'[z] + A'[y]$ for some $y \in Y_1$. Equivalently, a matrix conforming to $\Phi$ is one of the form $A'(I_E + H)$, where $A' \in \bF^{B \times E}$ respects $\Phi$, and $H \in \bF^{E \times E}$ is a matrix for which every nonzero entry lies in $H[Y_1,Z]$, such that every column of $H[Y_1,Z]$ is a unit vector. We call such a matrix of the form $S = I_E + H$ an *$(E,Z,Y_1)$-shift matrix*; such a matrix is ‘lower-diagonal’ and therefore nonsingular.
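To illustrate the shift construction in the smallest case: if $E = \{y,z\}$ with $Y_1 = \{y\}$ and $Z = \{z\}$, then the unique $(E,Z,Y_1)$-shift matrix is $$S = I_E + H = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$ (rows and columns ordered $y,z$), and $AS$ agrees with $A$ except that column $z$ becomes $A[z] + A[y]$.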
A matroid $M$ *conforms to $\Phi$* if there is a matrix $A$ conforming to $\Phi$ for which $M = M(A) \con C \del Y_1$, or equivalently if there is a matrix $A$ respecting $\Phi$ and an $(E,Z,Y_1)$-shift matrix $S$ such that $M = M(AS) \con C \del Y_1$. Let $\cM(\Phi)$ denote the class of matroids that are isomorphic to a matroid conforming to $\Phi$, and $\cM^*(\Phi)$ denote the class of their duals. Two templates $\Phi$ and $\Phi'$ are *equivalent* if $\cM(\Phi) = \cM(\Phi')$. Classes $\cM(\Phi)$ are not in general minor-closed; we write ${\overline{\cM(\Phi)}}$ for the closure of $\cM(\Phi)$ under minors. Note that if $\Phi$ is a template for which $|X| = t$ and $\Lambda = \bF^X$ while $C \cup Y_0 \cup Y_1 = \varnothing$, then $\cM(\Phi)$ is the class $\cG(\Gamma)^t$ described earlier.
We can now state the structure theorem, which asserts that all the highly connected matroids in a minor-closed class are described by two finite sets of templates.
\[structure\] Let $\bF$ be a finite field of characteristic $p$ and $\cM$ be a minor-closed class of $\bF$-representable matroids not containing all $\GF(p)$-representable matroids. There exist an integer $k$ and finite sets $\bT$ and $\bT^*$ of $\bF$-frame templates such that
- $\cM$ contains $\cup_{\Phi \in \bT}\cM(\Phi)$ and $\cup_{\Psi \in \bT^*}\cM^*(\Psi)$, and
- each simple vertically $k$-connected matroid $M \in \cM$ with $r(M) \ge k$ is in $\cup_{\Phi \in \bT}\cM(\Phi)$ or $\cup_{\Psi \in \bT^*}\cM^*(\Psi)$.
Taming Templates
================
In this section we prove that templates can be substantially simplified with no loss of generality in the structure theorem. Our first few lemmas give basic ways to manipulate templates without changing the class of conforming matroids. The first allows us to generically contract an appropriately structured subset of $C$.
\[contracttemplate\] Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ be a template and let $\wh{X} \subseteq X$ and $\wh{C} \subseteq C$ be sets for which $\rank(A_1[\wh{X},\wh{C}]) = |\wh{X}|$ while $A_1[X-\wh{X},\wh{C}] = 0$ and $\Delta[\wh{C}] = 0$. Then $\Phi$ is equivalent to the template $$\Phi' = (\Gamma,C',X',Y_0,Y_1,A_1[X',C' \cup Y_0 \cup Y_1], \Delta[C' \cup Y_0 \cup Y_1], \Lambda[X']),$$ where $X' = X - \wh{X}$ and $C' = C-\wh{C}$.
For each matrix $A \in \bF^{B \times E}$ conforming to $\Phi$, let $\varphi(A)$ denote the matrix $A[B-\wh{X},E-\wh{C}]$. Note that for every $A'$ conforming to $\Phi'$ there is some $A$ conforming to $\Phi$ for which $\varphi(A) = A'$. By the hypotheses we see that $A[\wh{X},\wh{C}]$ is a rank-$|\wh{X}|$ submatrix and $A[B-\wh{X},\wh{C}] = 0$, so $M(A) \con \wh{C} = M(\varphi(A))$. It follows that $$M(A) \con C \del Y_1 = M(\varphi(A)) \con (C - \wh{C}) \del Y_1 \in \cM(\Phi')$$ for each $A$ conforming to $\Phi$. This gives $\cM(\Phi) \subseteq \cM(\Phi')$; the fact that $\cM(\Phi') \subseteq \cM(\Phi)$ follows from the surjectivity of $\varphi$.
The next lemma allows us to perform ‘row-operations’ on a template.
\[unitary\] Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ be a template over a field $\bF$ and let $U \in \bF^{X \times X}$ be nonsingular. Then $\Phi$ is equivalent to the template $\Phi' = (\Gamma,C,X,Y_0,Y_1,UA_1,\Delta,U\Lambda)$.
By linearity $U\Lambda$ is an additive subgroup of $\bF^X$ that is closed under $\Gamma$-scalings. Let $A \in \bF^{B \times E}$ respect $\Phi$ and $S$ be an $(E,Z,Y_1)$-shift matrix. Let $\wh{U} = U \oplus I_{B-X}$. Then $\wh{U}A$ respects $\Phi'$ and $AS$ is row-equivalent to $\wh{U}AS$. Thus for each matrix conforming to $\Phi$ there is a row-equivalent matrix conforming to $\Phi'$, so $\cM(\Phi) = \cM(\Phi')$.
The third lets us project $\Delta$ using certain rows of $A_1$.
\[makeskew\] Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ be a template over a field $\bF$ and let $(X_0,X_1)$ be a partition of $X$ for which $\Lambda[X_1] = 0$. Let $W = \operatorname{row}(A_1[X_1])$ and $V$ be a complementary subspace of $W$ in $\bF^{C \cup Y_0 \cup Y_1}$. Let $\psi \colon \bF^{C \cup Y_0 \cup Y_1} \to V$ be the projection map $w+v \mapsto v$. Then $\Phi$ is equivalent to the template $\Phi' = (\Gamma,C,X,Y_0,Y_1,A_1,\psi(\Delta),\Lambda)$.
By linearity, the set $\psi(\Delta)$ is an additive subgroup closed under $\Gamma$-scalings. If $A \in \bF^{B \times E}$ respects $\Phi$ then let $A'$ be the matrix obtained from $A$ by applying $\psi$ to each row $u \in \Delta$ of $A[B-X,C \cup Y_0 \cup Y_1]$. Clearly $A'$ respects $\Phi'$, and $A'$ is row-equivalent to $A$, since we can obtain $A'$ from $A$ by adding elements of $\row(A[X_1])$ to rows in $B-X$. Therefore every matrix respecting $\Phi$ is row-equivalent to one respecting $\Phi'$, so $\Phi$ and $\Phi'$ are equivalent.
The next lemma generically simplifies the structure of $\Delta$.
\[magic\] Every template over a finite field $\bF$ is equivalent to a template $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ for which there exists $C' \subseteq C$ such that $\Delta = \Gamma (\bF_p^{C'}) \times \{0\}^{(C-C') \cup Y_0 \cup Y_1}$.
Let $\Phi' = (\Gamma,C',X',Y_0,Y_1,A_1',\Delta',\Lambda')$ be a template over a finite field $\bF$. Let $D$ be a generating set for $\Delta'$, and let $A_{\Delta} \in \bF^{\wh{X} \times (Y_0 \cup Y_1 \cup C')}$ be a matrix whose set of rows is $D$, where $\wh{X}$ is a $|D|$-element set. Let $\wh{C}$ be a set of size $|\wh{X}|$ disjoint from $C'$ and let $P \in (\bF_p)^{\wh{X} \times \wh{C}}$ be nonsingular. Let $C = C' \cup \wh{C}$ and $X = X' \cup \wh{X}$. Let $\Lambda = \Lambda' \times \{0\}^{\wh{X}}$ and $\Delta = \Gamma(\bF_p^{\wh{C}}) \times \{0\}^{Y_0 \cup Y_1 \cup C'}$. Finally let
$$A_1 = \begin{array}{c|c|c}
 & Y_0 \cup Y_1 \cup C' & \wh{C}\\\hline
\wh{X} & A_{\Delta} & P\\\hline
X' & A_1' & 0
\end{array}.$$
Note that $\Delta'$ and $\Lambda'$ are additive subgroups closed under $\Gamma$-scalings; we argue that the template $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$, which satisfies the required condition by choice of $\Delta$, is equivalent to $\Phi'$.
Define a map $\psi\colon \Gamma (\bF_p^{\wh{C}}) \to \Delta'$ by $\psi(w) = wP^{-1}A_{\Delta}$. Note that $\psi(w)$ is some $\Gamma$-scaling of an $\bF_p$-linear combination of vectors in $D \subseteq \Delta'$ so has range contained in $\Delta'$; moreover, since $D$ is a generating set, for every $u \in \Delta'$ there is some $w \in \bF_p^{\wh{C}}$ for which $\psi(w) = u$; thus $\psi$ is surjective.
Let $\wh{\Phi} = (\Gamma,C,X,Y_0,Y_1,A_1,\Delta' \times \{0\}^{\wh{C}},\Lambda)$. By Lemma \[contracttemplate\], the templates $\wh{\Phi}$ and $\Phi'$ are equivalent. Let $A \in \bF^{B \times E}$ respect $\Phi$. Each row of the submatrix $A[B-X,C \cup Y_0 \cup Y_1]$ has the form $(0_{C' \cup Y_0 \cup Y_1},w)$ where $w \in \Gamma(\bF_p^{\wh{C}})$; let $\varphi(A)$ be the matrix obtained by replacing each such row with the row $(\psi(w),0^{\wh{C}}) \in \Delta' \times \{0\}^{\wh{C}}$. It is clear that $\varphi(A)$ respects $\wh{\Phi}$; moreover, by the surjectivity of $\psi$, for every $\wh{A}$ respecting $\wh{\Phi}$ there is a matrix $A$ respecting $\Phi$ for which $\wh{A} = \varphi(A)$. Finally, the matrices $A$ and $\varphi(A)$ are row-equivalent by construction of $\psi$, so for any $(E,Z,Y_1)$-shift matrix $S$ the matrices $AS$ and $\varphi(A)S$ are row-equivalent. Therefore $\cM(\Phi) = \cM(\wh{\Phi})$. Since $\wh{\Phi}$ is equivalent to $\Phi'$, the lemma follows.
We say a template $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ over $\bF$ is *$Y$-reduced* if $\Delta[C] = \Gamma(\bF_p^C)$ and $\Delta[Y_0 \cup Y_1] = \{0\}$, and there is a partition $(X_0,X_1)$ of $X$ for which $\bF_p^{X_0} \subseteq \Lambda[X_0]$ and $\Lambda[X_1] = \{0\}$.
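To unpack the notation in a small case: over $\bF = \GF(4)$ with $\Gamma = \bF^*$ (so $p = 2$), the set $\Gamma(\bF_2^C)$ consists of precisely those vectors in $\bF^C$ whose nonzero entries are all equal, since each such vector is a $\Gamma$-scaling of a zero-one vector.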
Every template is equivalent to a $Y$-reduced template.
Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ be a template over a finite field $\bF$ with prime subfield $\bF_p$. We may assume by Lemma \[magic\] that there is a partition $(C_0,C_1)$ of $C$ for which $\Delta = \Gamma(\bF_p^{C_0}) \times \{0_{C_1 \cup Y_0 \cup Y_1}\}$. Now $A_1$ is row-equivalent to a matrix
$$A_1' = \begin{array}{c|c|c|c}
 & C_0 & C_1 & Y_0 \cup Y_1\\\hline
X-\wh{X} & * & Q & *\\\hline
\wh{X} & * & 0 & *
\end{array},$$ where $\wh{X} \subseteq X$, $\rank(Q) = |X-\wh{X}|$, and the starred blocks are unspecified. Let $U \in \bF^{X \times X}$ be nonsingular with $A_1' = UA_1$. By Lemma \[unitary\], $\Phi$ is equivalent to the template $$\Phi' = (\Gamma,C,X,Y_0,Y_1,A_1',\Gamma (\bF_p^{C_0}) \times \{0_{C_1 \cup Y_0 \cup Y_1}\},U\Lambda).$$
Let $\wh{A}_1 = A_1'[\wh{X},C_0 \cup Y_0 \cup Y_1]$. Let $\wh{\Lambda} = (U\Lambda)[\wh{X}]$. Let $\wh{\Delta} = \Gamma(\bF_p^{C_0}) \times \{0\}^{Y_0 \cup Y_1}$. Let $\wh{\Phi} = (\Gamma,C_0,\wh{X},Y_0,Y_1,\wh{A}_1,\wh{\Delta},\wh{\Lambda})$. By Lemma \[contracttemplate\] we have $\cM(\Phi') = \cM(\wh{\Phi})$. Finally, by mapping a maximal linearly independent subset of $\wh{\Lambda}$ to a set of unit vectors, we see that there is a nonsingular matrix $\wh{U}_{\Lambda} \in \bF^{\wh{X} \times \wh{X}}$ and a partition $(\wh{X}_0,\wh{X}_1)$ of $\wh{X}$ for which the additive subgroup $\Lambda' = \wh{U}_\Lambda \wh{\Lambda}$ satisfies $\Lambda'[\wh{X}_1] = \{0\}$ and contains all unit vectors supported in $\wh{X}_0$, which implies that $\bF_p^{\wh{X}_0} \subseteq \Lambda'[\wh{X}_0]$. By Lemma \[unitary\], we see that $\wh{\Phi}$, and therefore $\Phi$, is equivalent to the $Y$-reduced template $(\Gamma,C_0,\wh{X},Y_0,Y_1,\wh{U}_{\Lambda}\wh{A}_1,\wh{\Delta},\wh{U}_{\Lambda}\wh{\Lambda})$.
A template $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ over a field $\bF$ is *reduced* if there is a partition $(X_0,X_1)$ of $X$ such that
- $\Delta = \Gamma(\bF_p^C \times \Delta')$ for some additive subgroup $\Delta'$ of $\bF^{Y_0 \cup Y_1}$,
- $\bF_p^{X_0} \subseteq \Lambda[X_0]$ while $\Lambda[X_1] = \{0\}$ and $A_1[X_1,C] = 0$, and
- the rows of $A_1[X_1]$ are a basis for a subspace skew to $\Delta$.
Every template is equivalent to a reduced template.
Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ be a template over a field $\bF$. We may assume that $\Phi$ is $Y$-reduced; let $(X_0,X_1)$ be the partition of $X$ for which $\Lambda[X_1] = \{0\}$ and $\bF_p^{X_0} \subseteq \Lambda[X_0]$. Let $Y = Y_0 \cup Y_1$. By applying elementary row-operations to $A_1$ without adding any multiples of rows in $X_0$ to rows in $X_1$, we obtain a matrix
$$A_1' = \begin{array}{c|c|c|c}
 & C' & C'' & Y \\ \hline
X_1'' & 0 & 0 & \ast \\ \hline
X_1' & Q & \ast & \ast \\ \hline
X_0 & 0 & \ast & \ast
\end{array},$$
where $(C',C'')$ and $(X_1',X_1'')$ are partitions of $C$ and $X_1$ respectively, and $Q$ is a nonsingular matrix. Let $U \in \bF^{X \times X}$ be a nonsingular matrix for which $U[X_1,X_0] = 0$ and $A_1' = UA_1$. By Lemma \[unitary\], the template $\Phi$ is equivalent to $\Phi' = (\Gamma,C,X,Y_0,Y_1,A_1',\Delta,U\Lambda)$.
Define a linear map $\psi \colon \Delta \to \bF^{C \cup Y}$ by $$\psi(w) = (0_{C'},w[C''],w[C']Q^{-1} A_Y[X_1'])$$ and let $\Delta'' = \psi(\Delta)$. Note that $\Delta'' = \Gamma(\bF_p^{C''} \times \Delta_0)$ for some additive subgroup $\Delta_0$ of $\bF^Y$. Let $\Phi'' = (\Gamma,C,X,Y_0,Y_1,A_1',\Delta'',U\Lambda)$. For each matrix $A' \in \bF^{B \times E}$ respecting $\Phi'$, let $A'' \in \bF^{B \times E}$ be obtained by replacing each row $w \in \Delta$ of $A'[B-X,C \cup Y_0 \cup Y_1]$ with the row $\psi(w)$. Now $A''$ both respects $\Phi''$ and is row-equivalent to $A'$; thus each matrix respecting $\Phi'$ is row-equivalent to a matrix respecting $\Phi''$, so $\Phi'$ and $\Phi''$ are equivalent. Let $\wh{X} = X_0 \cup X_1''$ and $\wh{C} = C''$. Let $\wh{\Delta} = \Delta''[\wh{C} \cup Y]$ and $\wh{A}_1 = A_1'[\wh{X},\wh{C} \cup Y]$. Since $\Delta''[C'] = 0$, Lemma \[contracttemplate\] implies that the template $\Phi''$ is equivalent to $\wh{\Phi} = (\Gamma,\wh{C},\wh{X},Y_0,Y_1,\wh{A}_1,\wh{\Delta},(U\Lambda)[\wh{X}]).$
Since $U[X_1,X_0] = 0$ and $\Lambda[X_1] = \{0\}$ while $\bF_p^{X_0} \times \{0_{X_1}\} \subseteq \Lambda$, we know that $\Lambda$ contains a basis for $\bF^{X_0} \times \{0_{X_1}\}$ and so $U\Lambda$ does also; moreover, $(U\Lambda)[\wh{X}]$ contains a basis for $\bF^{X_0} \times \{0_{X_1''}\}$. Let $U' \in \bF^{\wh{X} \times \wh{X}}$ be a nonsingular matrix mapping this basis to the standard basis; then the set $\wh{\Lambda} = U'((U\Lambda)[\wh{X}])$ satisfies $\wh{\Lambda}[X_1''] = \{0\}$ and $\bF_p^{X_0} \subseteq \wh{\Lambda}[X_0]$. By Lemma \[unitary\] the template $\wh{\Phi}$ is equivalent to $\wh{\Phi}' = (\Gamma,\wh{C},\wh{X},Y_0,Y_1,U'\wh{A}_1,\wh{\Delta},\wh{\Lambda})$.
Let $V$ be a complementary subspace of $W = \row((U'\wh{A}_1)[X_1''])$ in $\bF^{\wh{C} \cup Y_0 \cup Y_1}$ and let $\varphi\colon \bF^{\wh{C} \cup Y_0 \cup Y_1} \to V$ be the associated projection map, defined by $\varphi(v+w) = v$ for all $v \in V$ and $w \in W$. Since $(U'\wh{A}_1)[X_1'',\wh{C}] = 0$ we have $W[\wh{C}] = 0$, and therefore for each $u \in \bF^{\wh{C}}$ and $v \in \bF^{Y}$ we have $\varphi(u,v) = (u,\varphi(v))$, so $\varphi(\wh{\Delta}) = \Gamma(\bF_p^{\wh{C}} \times \wh{\Delta}_0)$ for some additive subgroup $\wh{\Delta}_0 \subseteq \bF^Y$. By construction $\varphi(\wh{\Delta})$ is skew to $\row((U'\wh{A}_1)[X_1''])$, and moreover by Lemma \[makeskew\] the template $\wh{\Phi}'$ is equivalent to the template $\wh{\Phi}'' = (\Gamma,\wh{C},\wh{X},Y_0,Y_1,U'\wh{A}_1,\varphi(\wh{\Delta}),\wh{\Lambda})$.

The construction of $\wh{\Phi}''$ is such that $\varphi(\wh{\Delta})$ and $\wh{\Lambda}$ have the required structure for a reduced template with respect to the partition $(X_0,X_1'')$ of $\wh{X}$, save the property that the rows of $(U'\wh{A}_1)[X_1'']$ are themselves linearly independent. Since $\wh{\Lambda}[X_1''] = \{0\}$, this property can easily be obtained by considering a maximal set $\wh{X}_1 \subseteq X_1''$ for which $(U'\wh{A}_1)[\wh{X}_1]$ has linearly independent rows, and then restricting $\wh{\Lambda}$ and $U'\wh{A}_1$ to just the rows in $X_0 \cup \wh{X}_1$. Doing so yields a template $\Psi$ with the property that every matrix respecting $\Psi$ is row-equivalent to one respecting $\wh{\Phi}''$; it follows that the reduced template $\Psi$ is equivalent to $\wh{\Phi}''$ and therefore to $\Phi$.
\[strongstructure\] The sets $\bT$ and $\bT^*$ of templates given by Theorem \[structure\] can be taken to contain only reduced templates.
Density and Subclasses
======================
Templates over prime fields are especially well behaved, since there the additive subgroups $\Lambda$ and $\Delta$ are in fact subspaces; in this section we investigate such templates further. For a template $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$, define the *complexity* of $\Phi$ by $c(\Phi) = |X \cup Y_0 \cup Y_1 \cup C|$. We use this measure in the next two lemmas to bound the density of matroids conforming and co-conforming to reduced templates. The primal bound is quadratic in rank.
\[templatedensity\] Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ be a reduced template over a prime field $\bF = \GF(p)$ with $\dim(\Lambda) = t$ and $c(\Phi) = c$. Then every matroid $M \in \cM(\Phi)$ satisfies $$\elem(M) \le f_{p,|\Gamma|,t}(r(M)) + p^{t+1}c(r(M) + c).$$
Let $M = M(AS)\con C \del Y_1 \in \cM(\Phi)$, where $A \in \bF^{B \times E}$ respects $\Phi$ and $S$ is an $(E,Z,Y_1)$-shift matrix. Note that $r(M) \ge \rank(AS) - |C \cup Y_1| \ge \rank(A) - c$, and since $\rank(A)$ is at least the number of distinct columns in the submatrix $A[B-X,Z]$, we see that $A[B-X,Z]$ has at most $r(M) + c$ distinct columns. Every column of $(AS)[Z]$ is the sum of a column of $A[Y_1]$ and a column of $A[Z]$, so $(AS)[Z]$ has at most $|Y_1|(r(M)+c)$ distinct columns and $(AS)[Z \cup Y_0]$ has at most $|Y_1|(r(M)+c) + |Y_0| \le c(r(M)+c)$ distinct columns. Thus $\elem(M|(Y_0 \cup Z)) \le \elem(M(AS)|(Y_0 \cup Z)) \le c(r(M)+c)$. Let $F = E - (C \cup Y_0 \cup Y_1 \cup Z)$. We have $M(AS)|F \in \cG(\Gamma)^t$, so $\elem(M|F) \le \elem(M(AS)|F) \le f_{p,|\Gamma|,t}(\rank(AS)) \le f_{p,|\Gamma|,t}(r(M)+c)$. Now for all $x$ we have $$\begin{aligned}
f_{p,|\Gamma|,t}(x+c) - f_{p,|\Gamma|,t}(x) &= p^t|\Gamma|((x-t)c + \tfrac{1}{2}(c^2+c)) \\
&\le p^t(p-1)c(x+c),
\end{aligned}$$ since $|\Gamma| \le p-1$ and $c \le c^2$. Combining the above estimates we have $$\begin{aligned}
\elem(M) &\le \elem(M|F) + \elem(M|(Y_0 \cup Z)) \\
&\le f_{p,|\Gamma|,t}(r(M)+c) + c(r(M)+c) \\
&\le f_{p,|\Gamma|,t}(r(M)) + p^t(p-1)c(r(M) + c) + c(r(M)+c)\\
&\le f_{p,|\Gamma|,t}(r(M)) + p^{t+1}c(r(M)+c),
\end{aligned}$$ giving the bound.
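The two elementary estimates at the end of this proof are easy to machine-check. The following sketch (an illustration of the arithmetic only, not part of the proof) verifies both over a small grid of parameters, writing $g$ for $|\Gamma|$:

```python
from itertools import product

# Check the two estimates used above:
#   (1) p^t * g * ((x-t)*c + (c^2+c)/2) <= p^t * (p-1) * c * (x+c)
#       whenever g <= p-1, c >= 1 (so c <= c^2) and x >= t, and
#   (2) p^t * (p-1) * c * (x+c) + c * (x+c) <= p^(t+1) * c * (x+c).
for p in (2, 3, 5, 7):
    for t, c, xo in product(range(4), range(1, 6), range(8)):
        x = t + xo  # ensure x >= t
        for g in range(1, p):  # g = |Gamma| <= p - 1
            lhs = p**t * g * ((x - t) * c + (c * c + c) // 2)
            assert lhs <= p**t * (p - 1) * c * (x + c)
        assert p**t * (p - 1) * c * (x + c) + c * (x + c) <= p**(t + 1) * c * (x + c)
print("both estimates hold on the grid")
```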
For the dual bound, which is linear in rank, we need an easy lemma bounding the density of the dual of a frame matroid. (The lemma applies to any $\Gamma$-frame matroid over any field).
\[coframedensity\] If $M^*$ is a frame matroid, then $\elem(M) \le 3r(M)$.
We may assume that $M$ is simple. Let $A$ be a frame representation of $M^*$ with $r^*(M)$ rows. If some row of $A$ has weight less than $3$ then $M^*$ has a coloop or a series pair, so $M$ has a loop or a parallel pair and is not simple. Thus $A$ has at least $3r^*(M)$ nonzero entries; since each column of a frame matrix has at most two nonzero entries, the number $|M|$ of columns of $A$ is at least $\tfrac{3}{2}r^*(M)$. Therefore $\tfrac{r(M)}{\elem(M)} = \tfrac{|M|-r^*(M)}{|M|} = 1 - \tfrac{r^*(M)}{|M|} \ge \tfrac{1}{3}$.
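The counting behind this bound can be seen on a concrete frame matrix. The sketch below (illustrative only; the choice of matrix is ours) takes $A = [\,I_4 \mid D\,]$, where $D$ is the signed vertex–edge incidence matrix of $K_4$; the identity block guarantees full row rank, every column has at most two nonzero entries, and every row has at least three:

```python
from itertools import combinations

# A = [I_4 | D], with D the signed incidence matrix of K_4: a frame matrix
# with full row rank.  Each column has at most 2 nonzero entries and each row
# has at least 3, so #columns >= (3/2) * #rows, as in the lemma.
n = 4
unit_cols = [[1 if i == r else 0 for i in range(n)] for r in range(n)]
diff_cols = [[1 if i == u else (-1 if i == v else 0) for i in range(n)]
             for u, v in combinations(range(n), 2)]
cols = unit_cols + diff_cols
assert all(sum(1 for x in col if x != 0) <= 2 for col in cols)            # frame columns
assert all(sum(1 for col in cols if col[r] != 0) >= 3 for r in range(n))  # row weights >= 3
r_star, size = n, len(cols)  # r*(M) = #rows, |M| = #columns
assert size >= 1.5 * r_star
assert 1 - r_star / size >= 1 / 3  # so r(M)/|M| >= 1/3, i.e. |M| <= 3 r(M)
print(f"|M| = {size}, r*(M) = {r_star}")
```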
\[dualdensity\] Let $\Phi$ be a template over a finite field $\bF$. If $M \in \cM^*(\Phi)$ then $\elem(M) \le |\bF|^{c(\Phi)}(3r(M) + 6c(\Phi)+1)$.
Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ and $c = c(\Phi)$. Let $M^* \in \cM(\Phi)$, so there exists $A \in \bF^{B \times E}$ respecting $\Phi$ and an $(E,Z,Y_1)$-shift matrix $S$ for which $M = (M(AS) \con C \del Y_1)^*$. Let $A'$ be obtained from $A$ by replacing all rows in $X$ and columns in $C \cup Y_0 \cup Y_1$ by zero; clearly $A' = A'S$ is a $\Gamma$-frame matrix and $\rank((A-A')S) \le \rank(A-A') \le c$.
Therefore $A' = AS + P$ for a matrix $P$ of rank at most $c$. It follows that $M(A')$ and $M(AS)$ have $\bF$-representations that agree on all but at most $c$ rows, and thus that there is a matroid $\wh{M}$ and a pair of disjoint $c$-element sets $T_1,T_2 \subseteq E(\wh{M})$ such that $\wh{M} \con T_1 \del T_2 = M(A')$ and $\wh{M} \del T_1 \con T_2 = M(AS)$. Let $N = \wh{M} \con C \del Y_1$, so $N \del T_1 \con T_2 = M^*$ and $N \con T_1 \del T_2$ is minor of $M(A')$, so is a $\Gamma$-frame matroid. By Lemma \[coframedensity\] we have $\elem(N^* \del T_1 \con T_2) \le 3r(N^* \del T_1 \con T_2)$ and so, since $\elem(M_0) \le |\bF|^{|H|}(\elem(M_0 \con H) +1)$ for every set $H$ in an $\bF$-represented matroid $M_0$, we have $$\elem(N^* \con T_1 \del T_2) \le \elem(N^* \del T_1) \le |\bF|^c(3r(N^* \del T_1 \con T_2)+1) \le |\bF|^c(3r(N^*) + 1).$$ But $M = N^* \del T_2 \con T_1$ and $r(N^*) \le r(M) + |T_1 \cup T_2| = r(M) + 2c$, so this gives $\elem(M) \le |\bF|^c(3(r(M)+2c) + 1)$, as required.
The next lemma essentially states that, for a reduced template $\Phi$, the class ${\overline{\cM(\Phi)}}$ contains all matroids whose representation is obtained from a $\Gamma$-frame matrix by appending $\dim(\Lambda)$ rows and $\dim(\Delta)$ columns.
\[subclass\] If $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ is a reduced template over a prime field and $(t,d) = (\dim(\Lambda),\dim(\Delta))$, then $\cG(\Gamma)_d^t \subseteq {\overline{\cM(\Phi)}}$.
Let $(X_0,X_1)$ be the partition of $X$ certifying that $\Phi$ is reduced; note that $|X_0| = t$. Let $N_0 \in \cG(\Gamma)^t_d$; by Lemma \[extensionprojection\] there is a matroid $N$ with an $N_0$-minor, a set $B_0$, a $d$-element set $R$ and matrices $P_1 \in \bF^{R \times F}$ and $P_2 \in \bF^{X_0 \times F}$ and $P_0 \in \bF^{B_0 \times F}$ such that ${{{P_1} \brack {P_0}}}$ is row-equivalent to a $\Gamma$-frame matrix, while $N = M{{{P_2} \brack {P_0}}}$. We show that $N \in {\overline{\cM(\Phi)}}$.
Let $Y = Y_0 \cup Y_1$. Let $W \in \bF^{R \times (C \cup Y)}$ be a matrix with rowspace $\Delta$; note, since $|R| = d$, that $W$ has linearly independent rows. Since $A_1[X_1]$ has row space skew to $\Delta$ and has linearly independent rows, we see that ${{{A_1[X_1]} \brack {W}}}$ also has linearly independent rows. Let $\wh{C} \subseteq C \cup Y$ be such that the matrix $Q = {{{A_1[X_1]} \brack {W}}}[\wh{C}]$ is nonsingular. So $|\wh{C}| = d + |X_1|$, and since $\Delta = \bF^C \times \Delta[Y]$, we must have $C \subseteq \wh{C}$. Since $Q$ is nonsingular, there is a matrix $P_2' \in \bF^{X_0 \times F}$ for which the matrices
$$Q_1 = \begin{array}{c|c|c}
 & F & \wh{C} \\ \hline
X_0 & P_2' & A_1[X_0,\wh{C}] \\ \hline
X_1 & 0 & A_1[X_1,\wh{C}] \\ \hline
R & P_1 & W[\wh{C}]
\end{array}
\qquad\text{and}\qquad
Q_2 = \begin{array}{c|c|c}
 & F & \wh{C} \\ \hline
X_0 & P_2 & 0 \\ \hline
X_1 & 0 & A_1[X_1,\wh{C}] \\ \hline
R & P_1 & W[\wh{C}]
\end{array}$$
are row-equivalent.
are row-equivalent. Let $C_i = \wh{C} \cap Y_i$ for $i \in \{0,1\}$, so $\wh{C} = C \cup C_0 \cup C_1$. We essentially wish to contract $C_1$ from a matroid conforming to $\Phi$, but since the columns in $C_1$ must be deleted, we must ‘copy’ its entries using $Z$. Let $Z$ be a copy of the set $C_1$, let $\{c,d\}$ be a $2$-element set, and consider the matrix
$$A = \begin{array}{c|c|c|c|c}
 & F & c & Z & C \cup Y \\ \hline
X_0 & P_2' & 0 & 0 & A_1[X_0] \\ \hline
X_1 & 0 & 0 & 0 & A_1[X_1] \\ \hline
R & P_1 & 0 & 0 & W \\ \hline
B_0 & P_0 & 0 & 0 & 0 \\ \hline
d & 0 & 1 & {\mathbf{1}_{Z}} & 0
\end{array},$$
where ${\mathbf{1}_{Z}}$ is the all-ones vector in $\bF^Z$. Since ${{{P_1} \brack {P_0}}}$ is row-equivalent to a $\Gamma$-frame matrix and $\operatorname{row}(W) \subseteq \Delta$, we see that $A$ is row-equivalent to a matrix $A'$ respecting $\Phi$. Let $E$ be the set of column indices of $A$. Recall that $Z$ is a copy of $C_1 \subseteq Y_1$; let $S$ be the $(E,Z,Y_1)$-shift matrix so that $AS$ is obtained from $A$ by adding each column of $A[C_1]$ to its corresponding column in $A[Z]$. Thus
$$AS = \begin{array}{c|c|c|c|c}
 & F & c & Z & C \cup Y \\ \hline
X_0 & P_2' & 0 & V[X_0] & A_1[X_0] \\ \hline
X_1 & 0 & 0 & V[X_1] & A_1[X_1] \\ \hline
R & P_1 & 0 & V[R] & W \\ \hline
B_0 & P_0 & 0 & 0 & 0 \\ \hline
d & 0 & 1 & {\mathbf{1}_{Z}} & 0
\end{array},$$
where $V$ is a copy of ${{{A_1} \brack {W}}}[C_1]$. So $(AS)[R \cup X_0 \cup X_1,F \cup Z \cup C \cup C_0]$ is a copy of the matrix $Q_1$ defined earlier. Let $$M_0 = (M(AS) \con (\{c\} \cup Z \cup C \cup C_0))|F.$$ Let $\wh{A} = AS[F \cup \{c\} \cup Z \cup C \cup C_0]$. By construction of $P_2'$ we can perform row-operations on $\wh{A}$ to replace $P_2'$ by $P_2$ and replace the submatrix $\wh{A}[X_0,Z \cup C \cup C_0]$ with zero. Then one can contract the set $\{c\} \cup Z \cup C \cup C_0$ from $M(\wh{A})$ by first removing from $\wh{A}$ row $d$ and column $c$, then removing all columns in $Z \cup C \cup C_0$ and all rows in $X_1 \cup R$. Thus $M_0 = M{{{P_2} \brack {P_0}}} = N$. Since $N_0$ is a minor of $N = M_0$ and $M_0$ is a minor of $M(AS) \con C \del Y_1$, we have $N_0 \in {\overline{\cM(\Phi)}}$ and the result follows.
The Main Result
===============
We now prove our main technical lemma: a reduced template in which $\dim(\Lambda) = t$ describes either a ‘degenerate’ class, a subclass of $\cG(\Gamma)^t$, or a class whose minor-closure contains ${\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}}$ or some ${\cG(\Gamma)^{t}_{(x)}}$.
\[templatetech\] Let $\Phi = {(\Gamma,C,X,Y_0,Y_1,A_1,\Delta,\Lambda)}$ be a reduced template over a prime field $\bF$ and let $t = \dim(\Lambda)$ and $c = c(\Phi)$. Either
1. \[m0\] $\cM(\Phi)$ contains no vertically $(c+1)$-connected matroid of rank at least $c+1$,
2. \[m1\] $\cM(\Phi) \subseteq \cG(\Gamma)^t$,
3. \[m2\] $\Gamma = \bF^*$ and ${\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}} \subseteq {\overline{\cM(\Phi)}}$, or
4. \[m3\] $\Gamma \ne \bF^*$ and ${\cG(\Gamma)^{t}_{(x)}} \subseteq {\overline{\cM(\Phi)}}$ for some $x \in \bF^* - \Gamma$.
Suppose that (\[m2\]) and (\[m3\]) do not hold. By Lemma \[dowlingextension\], there is thus a matroid $N \in \cG(\Gamma)^t$ (in fact, of the form $\DG(n,\Gamma)^t$) with $r(N) \ge 2|X|$, such that no simple rank-$r(N)$ extension of $N$ is in ${\overline{\cM(\Phi)}}$. Since every such extension is in $\cG(\Gamma)^t_1$, it follows that $\cG(\Gamma)^t_1 \not\subseteq {\overline{\cM(\Phi)}}$ and thus, by Lemma \[subclass\], that $\Delta = \{0\}$. Since $\dim(\Delta) \ge |C|$ in a reduced template, we also have $C = \varnothing$.
Let $(X_0,X_1)$ be the partition of $X$ certifying that $\Phi$ is reduced, and let $h = |X_1|$. Since $N \in \cG(\Gamma)^t$ we have $N = M{{{P_0} \brack {Q}}}$ for some $P_0 \in \bF^{X_0 \times F}$ and some $\Gamma$-frame matrix $Q \in \bF^{B_0 \times F}$, where $F = E(N)$ and $B_0$ satisfies $|B_0| > |X-X_0| = |X_1|$. We may assume that $\rank{{{P_0} \brack {Q}}} = |B_0| + |X_0|$, as otherwise we can remove redundant rows and rescale columns. Suppose now that (\[m0\]) does not hold.
$\col(A_1[X_1,Y_0]) \subseteq \col(A_1[X_1,Y_1])$.
Suppose not. Since $\Lambda[X_1] = 0$, every matrix $A \in \bF^{B \times E}$ conforming to $\Phi$ satisfies $\col(A[X_1,E-(Y_0 \cup Y_1)]) \subseteq \col(A[X_1,Y_1])$, and so $\rank(A[X_1,E-(Y_0 \cup Y_1)]) < \rank(A[X_1,E-Y_1])$, which gives $r(M(A) \del (Y_1 \cup Y_0)) < r(M(A) \del Y_1)$. Therefore $\lambda_{M(A) \del Y_1}(Y_0) < r_{M(A)}(Y_0) \le |Y_0|$. If $r(M(A) \del Y_1) > |Y_0|$ then it follows that $M(A) \del Y_1$ is not vertically $(|Y_0| + 1)$-connected, so (\[m0\]) holds, a contradiction.
Recall that the rows of $A_1[X_1,Y_0 \cup Y_1]$ are linearly independent. By the claim, and after applying row-operations to $A_1[X_1]$, we may therefore assume that there exists $T \subseteq Y_1$ so that $A_1[X_1,T] = -J$ for some bijection matrix $J$. Recall that $|B_0| > |X_1|$; let $X_1' \subseteq B_0$ be a set whose elements we associate with those in $X_1$, and let $J' \in \bF^{X_1' \times T}$ be a copy of $J$. Let $r \in B_0 - X_1'$. Consider the matroid $M = M(A)$, where
$$A = \begin{array}{c|c|c|c|c}
 & F & T & Y_1-T & Y_0 \\ \hline
X_1 & 0 & -J & A_1[X_1,Y_1-T] & A_1[X_1,Y_0] \\ \hline
X_0 & P_0 & A_1[X_0,T] & A_1[X_0,Y_1-T] & A_1[X_0,Y_0] \\ \hline
X_1' & Q[X_1'] & J' & 0 & 0 \\ \hline
B_0-(X_1' \cup \{r\}) & Q[B_0-(X_1' \cup \{r\})] & 0 & 0 & 0 \\ \hline
r & Q[r] & 0 & {\mathbf{1}_{Y_1-T}} & 0
\end{array}.$$
The matroid $M(A)$ is isomorphic to a matroid conforming to $\Phi$, as $A$ is obtained from a certain matrix conforming to $\Phi$ (in which $Z$ is a copy of $Y_1$) by removing the columns in $Y_1$ and then renaming the set $Z$ as $Y_1$. Thus $M \in \cM(\Phi)$.
Let $M' = M \con T$. Since $J'$ is a copy of $J$, we have $M' = M(A')$, where
$$A' = \begin{array}{c|c|c|c}
 & F & Y_1-T & Y_0 \\ \hline
X_0 & P_0 & H_1 & H_2 \\ \hline
X_1' & Q[X_1'] & A_1[X_1,Y_1-T] & A_1[X_1,Y_0] \\ \hline
B_0-(X_1' \cup \{r\}) & Q[B_0-(X_1' \cup \{r\})] & 0 & 0 \\ \hline
r & Q[r] & {\mathbf{1}_{Y_1-T}} & 0
\end{array},$$
where the sets $X_1'$ and $X_1$ are identified, and $H_1,H_2$ are some matrices. Now $M' = M \con T \in {\overline{\cM{(\Phi)}}}$, but also $M' = M(A')$ is evidently a rank-$(r(N))$ extension of the matroid $N = M{{{P_0} \brack {Q}}}$. By the choice of $N$, the matroid $M'$ cannot contain a simple rank-$r(N)$ extension of $N$ as a restriction, so in fact $A'[B_0]$ is a $\Gamma$-frame matrix up to column scalings. Thus every column of $A_1[X_1,Y_1-T]$ is either a zero vector or a weight-$1$ vector whose nonzero entry is in $-\Gamma$. The same is true of $A_1[X_1,T] = -J$.
Consider an arbitrary matroid $M \in \cM(\Phi)$, so $M = M(A) \del Y_1$ for some matrix $A \in \bF^{B \times E}$ conforming to $\Phi$. Every column of $A[X_1,Y_0]$ is parallel to a column of a $\Gamma$-frame matrix, so the same is true of $A[X_1 \cup B,Y_0]$. The columns of $A[B,Z]$ are unit vectors and the columns of $A[X_1,Z]$ are columns of $A[X_1,Y_1]$ so each is either a zero vector or a weight-$1$ vector with nonzero entry in $-\Gamma$; thus, each column of $A[X_1 \cup B,Z]$ is parallel to a column of a $\Gamma$-frame matrix. The same is evidently true of $A[X_1 \cup B,E-(Y_0 \cup Y_1 \cup Z)]$; thus, $A[X_1 \cup B,E-Y_1]$ is a $\Gamma$-frame matrix up to column scalings. Since $|X_0| = t$ it follows that $M(A) \del Y_1 \in \cG(\Gamma)^t$ and so (\[m1\]) holds.
Now we prove the theorem which implies all our main results. Note that this implies Theorem \[simplifiedmain\] because ${\cG(\Gamma)^{t}_{(x)}}$ and ${\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}}$ both contain simple extensions of $\DG(n,\Gamma)^t$ for all $n \ge t$.
\[bigmain\] Let $\bF = \GF(p)$ be a prime field and $\cM$ be a quadratic minor-closed class of $\bF$-represented matroids. There is a subgroup $\Gamma$ of $\bF^*$ and some $t,\alpha \in \nni$ so that $\cG(\Gamma)^t \subseteq \cM$ and $$f_{p,|\Gamma|,t}(n) \le h_{\cM}(n) \le f_{p,|\Gamma|,t}(n) + \alpha n$$ for all sufficiently large $n$. Moreover, either
(a) \[ma0\] $\alpha = 0$ and every extremal matroid $N$ in $\cM$ of sufficiently large rank is isomorphic to $\DG(r(N),\Gamma)^t$,
(b) \[ma1\] $\Gamma = \bF^*$ and ${\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}} \subseteq \cM$, or
(c) \[ma2\] $\Gamma \ne \bF^*$ and ${\cG(\Gamma)^{t}_{(x)}} \subseteq \cM$ for some $x \in \bF^* - \Gamma$.
For each integer $s$, let $\cM_s$ denote the class of vertically $s$-connected matroids of rank at least $s$ in $\cM$. Since $\cM$ is a quadratic class, it contains all graphic matroids, so $\cM_s \ne \varnothing$ for all $s \in \nni$.
By Theorem \[structure\] and Corollary \[strongstructure\] there are finite sets $\bT$ and $\bT^*$ of reduced frame templates and an integer $k$ such that each simple matroid $M \in \cM_k$ is either in $\cM(\Phi)$ for some $\Phi \in \bT$ or in $\cM^*(\Psi)$ for some $\Psi \in \bT^*$, while $\cM$ contains $\cM(\Phi)$ for all $\Phi \in \bT$ and $\cM^*(\Psi)$ for all $\Psi \in \bT^*$. Since $\cM_k \ne \varnothing$ we know that $\bT \cup \bT^* \ne \varnothing$.
Note that the class $\cG(\{1\})^0$ of graphic matroids is contained in $\cM$. If $\cG(\{1\})^t \subseteq \cM$ for all $t \in \nni$ then $\cM$ contains all projective geometries over $\bF$ and is thus not a quadratic class; we may therefore let $t \in \nni$ and $\Gamma \le \bF^*$ be such that $\cG(\Gamma)^t \subseteq \cM$ and $p^t|\Gamma|$ is as large as possible. Let $c = \max_{\Phi \in \bT \cup \bT^*}c(\Phi)$. Note that the function $f_{p,g,t'}(x)$ is quadratic in $x$ with leading term $\tfrac{1}{2}gp^{t'}x^2$; let $n_0$ be an integer for which every $x \ge n_0$ satisfies $p^c(3x + 6c + 1) < f_{p,|\Gamma|,t}(x)-x$, and let $n_1$ be an integer so that every $x \ge n_1$ satisfies $f_{p,g,t'}(x) + 2p^{t'+1}cx < f_{p,|\Gamma|,t}(x)-x$ for all $t',g \in \nni$ such that $p^{t'}g < p^t|\Gamma|$. Let $k_0 = \max(t,k,c+1,n_0,n_1)$.
If $M \in \cM_{k_0}$ is simple and $\elem(M) \ge f_{p,|\Gamma|,t}(r(M))-r(M)$, then $M$ conforms to a template $\Phi' = (\Gamma',C',X',Y_0',Y_1',A_1',\Delta',\Lambda') \in \bT$ for which $\Gamma' = \Gamma$ and $\dim(\Lambda') = t$.
Let $r = r(M)$. Since $M \in \cM_{k_0}$ and $k_0 \ge k$, we have $M \in \cM^*(\Psi)$ for some $\Psi \in \bT^*$ or $M \in \cM(\Phi)$ for some $\Phi \in \bT$. In the former case, since $r \ge k_0 \ge n_0$ we have $|M| < p^c(3r + 6c + 1) < f_{p,|\Gamma|,t}(r)-r$ by Lemma \[dualdensity\], a contradiction. Therefore $M \in \cM(\Phi')$ for some $\Phi' \in \bT$; let $\Phi' = (\Gamma',C',X',Y_0',Y_1',A_1',\Delta',\Lambda')$ and $t' = \dim(\Lambda')$.
Lemma \[subclass\] implies that $\cG(\Gamma')^{t'} \subseteq {\overline{\cM(\Phi')}} \subseteq \cM$, so $p^{t'}|\Gamma'| \le p^t|\Gamma|$ by our choice of $\Gamma$ and $t$. Using Lemma \[templatedensity\], we have $$f_{p,|\Gamma|,t}(r)-r < |M| < f_{p,|\Gamma'|,t'}(r) + p^{t'+1}c(r+c) < f_{p,|\Gamma'|,t'}(r) + 2p^{t'+1}cr .$$ If $p^{t'}|\Gamma'| < p^t|\Gamma|$ then the above and the fact that $r \ge k_0 \ge n_1$ give a contradiction. Thus $p^{t'}|\Gamma'| = p^t|\Gamma|$, which implies that $t' = t$ and $|\Gamma'| = |\Gamma|$; since $\bF^*$ is cyclic this gives $\Gamma' = \Gamma$.
Since $\cG(\Gamma)^t \subseteq \cM$, we have $h_{\cM}(n) \ge f_{p,|\Gamma|,t}(n)$ for all $n \ge t$. Let $d \in \{-1,0,1, \dotsc, 2p^{t+1}c\}$ be maximal such that $h_{\cM}(n) > f_{p,|\Gamma|,t}(n) + dn$ for infinitely many $n$. By Theorem \[connreduction\], applied with $f(x) = f_{p,|\Gamma|,t}(x) + dx$, there is a simple rank-$r$ matroid $M \in \cM_{k_0}$ for which $|M| > f_{p,|\Gamma|,t}(r) + d r$. By the claim, there is a template $\Phi' = (\Gamma,C',X',Y_0',Y_1',A_1',\Delta',\Lambda') \in \bT$ for which $M \in \cM(\Phi')$ while $\dim(\Lambda') = t$. By Lemma \[templatedensity\] we have $|M| < f_{p,|\Gamma|,t}(r) + 2p^{t+1}cr$, from which we see that $d < 2p^{t+1}c$; by the maximality of $d$ this gives the required upper bound on $h_{\cM}(n)$ with $\alpha = 2p^{t+1}c$.
We now apply Lemma \[templatetech\] to $\Phi'$. Since $M \in \cM(\Phi')$ is vertically $(c+1)$-connected and has rank at least $c+1$, we know that \[templatetech\](\[m0\]) does not hold. Outcomes \[templatetech\](\[m2\]) and \[templatetech\](\[m3\]) imply (\[ma1\]) and (\[ma2\]) respectively, so we may assume that \[templatetech\](\[m1\]) holds. Thus $M \in \cG(\Gamma)^t$ and so $\elem(M) \le f_{p,|\Gamma|,t}(r)$; if $d \ne -1$ this is a contradiction, so $d = -1$. By the choice of $d$ it follows that $h_{\cM}(n) = f_{p,|\Gamma|,t}(n)$ for all sufficiently large $n$.
By Lemma \[equalityhc\], there is some $k_1 \in \nni$ such that every simple $N \in \cM$ with $|N| = f_{p,|\Gamma|,t}(r(N))$ and $r(N) \ge k_1$ is in $\cM_{k_0}$. Consider such an $N$; by the claim there is a template $\Phi'' = (\Gamma,C'',X'',Y_0'',Y_1'',A_1'',\Delta'',\Lambda'')$ in $\bT$ with $\dim(\Lambda'') = t$ and $N \in \cM(\Phi'')$. Again, we may assume that outcomes (\[m0\]),(\[m2\]),(\[m3\]) of Lemma \[templatetech\] do not hold for $\Phi''$ and so (\[m1\]) does; thus $N \in \cG(\Gamma)^t$. Since $\DG(r(N),\Gamma)^t$ is the unique simple rank-$r(N)$ matroid in $\cG(\Gamma)^t$ with $f_{p,|\Gamma|,t}(r(N))$ elements, we must have $N \cong \DG(r(N),\Gamma)^t$ as required.
Finally we prove the corollary that will give the extremal function for excluding geometries. Note that the class $\cM$ in the theorem statement is quadratic as it contains the class $\cG(\{1\})^0$ of graphic matroids.
\[excludeN\] Let $\bF = \GF(p)$ be a prime field and $N$ be a nongraphic $\bF$-represented matroid. Let $\cM$ be the class of $\bF$-represented matroids with no $N$-minor. Let $\Gamma \le \bF^*$ and $t \in \nni$ be such that $p^t|\Gamma|$ is as large as possible subject to $N \notin \cG(\Gamma)^t$. If
- $\Gamma = \bF^*$ and $N \in {\cG(\Gamma)^{t}_{{\scalebox{0.4}{$\square$}}}}$ or
- $\Gamma \ne \bF^*$ and $N \in {\cG(\Gamma)^{t}_{(x)}}$ for all $x \in \bF^* - \Gamma$,
then for all sufficiently large $n$ we have $h_{\cM}(n) = f_{p,|\Gamma|,t}(n)$ and every rank-$n$ extremal matroid in $\cM$ is isomorphic to $\DG(n,\Gamma)^t$.
Clearly $\cG(\Gamma)^t \subseteq \cM$. Let $\Gamma' \le \bF^*$ and $t',\alpha \in \nni$ be given by Theorem \[bigmain\] for $\cM$. If $|\Gamma'|p^{t'} < |\Gamma|p^t$ then, since $f_{p,|\Gamma'|,t'}(n) \approx \tfrac{1}{2}p^{t'}|\Gamma'|n^2$, we have $h_{\cM}(n) \le f_{p,|\Gamma'|,t'}(n) + \alpha n < f_{p,|\Gamma|,t}(n) \le h_{\cM}(n)$ for all large $n$, a contradiction. So $|\Gamma'|p^{t'} = |\Gamma|p^t$; since $|\Gamma| < p$ and $\bF^*$ is cyclic this gives $\Gamma = \Gamma'$ and $t = t'$. By hypothesis, we see that \[bigmain\](\[ma1\]) and \[bigmain\](\[ma2\]) cannot hold for $\cM$, so we have \[bigmain\](\[ma0\]) which implies the result.
Theorems \[maintwo\], \[mainthree\] and \[mainodd\] follow from the above result and Lemmas \[techtwo\], \[techthree\] and \[techodd\] respectively.
References {#references .unnumbered}
==========
\[ggwstructure\] J. Geelen, B. Gerards, G. Whittle, The highly connected matroids in minor-closed classes, Ann. Comb. 19 (2015), 107–123.
\[gkw09\] J. Geelen, J.P.S. Kung, G. Whittle, Growth rates of minor-closed classes of matroids, J. Combin. Theory Ser. B 99 (2009), 420–427.
\[gvz\] K. Grace, S.H.M. van Zwam, Templates for binary matroids, SIAM J. Discrete Math. 31 (2017), 254–282.
\[heller\] I. Heller, On linear systems with integral valued solutions, Pacific J. Math. 7 (1957), 1351–1364.
\[kmpr\] J.P.S. Kung, D. Mayhew, I. Pivotto, G.F. Royle, Maximum size binary matroids with no $\AG(3,2)$-minor are graphic, SIAM J. Discrete Math. 28 (2014), 1559–1577.
\[gn\] J. Geelen, P. Nelson, Matroids denser than a clique, J. Combin. Theory Ser. B 114 (2015), 51–69.
\[oxley\] J. G. Oxley, Matroid Theory (2nd edition), Oxford University Press, New York, 2011.
\[walsh\] Z. Walsh, On the density of binary matroids without a given minor, MMath Thesis, University of Waterloo, 2016.
\[zaslav\] T. Zaslavsky, Signed graphs, Discrete Appl. Math. 4 (1982), 47–74.
---
abstract: |
It is important to have fast and effective methods for simplifying 3-manifold triangulations without losing any topological information. In theory this is difficult: we might need to make a triangulation super-exponentially more complex before we can make it smaller than its original size. Here we present experimental work suggesting that for 3-sphere triangulations the reality is far different: we never need to add more than two tetrahedra, and we never need more than a handful of local modifications. If true in general, these extremely surprising results would have significant implications for decision algorithms and the study of triangulations in 3-manifold topology.
The algorithms behind these experiments are interesting in their own right. Key techniques include the isomorph-free generation of all 3-manifold triangulations of a given size, polynomial-time computable signatures that identify triangulations uniquely up to isomorphism, and parallel algorithms for studying finite level sets in the infinite Pachner graph.
**ACM classes** F.2.2; G.2.1; G.2.2; D.1.3
**Keywords** Triangulations, 3-manifolds, Pachner moves, 3-sphere recognition, isomorph-free enumeration
author:
- 'Benjamin A. Burton'
bibliography:
- 'pure.bib'
date: 'February 23, 2011'
title: |
The Pachner graph and the simplification\
of 3-sphere triangulations
---
Introduction
============
Triangulations of 3-manifolds are ubiquitous in computational knot theory and low-dimensional topology. They are easily obtained and offer a natural setting for many important algorithms.
Computational topologists typically allow triangulations in which the constituent tetrahedra may be “bent” or “twisted”, and where distinct edges or vertices of the same tetrahedron may even be joined together. Such triangulations (sometimes called *semi-simplicial* or *pseudo-triangulations*) can describe rich topological structures using remarkably few tetrahedra. For example, the 3-dimensional sphere can be built from just one tetrahedron, and more complex spaces such as non-trivial surface bundles can be built from as few as six [@matveev90-complexity].
An important class of triangulations is the *one-vertex triangulations*, in which all vertices of all tetrahedra are joined together as a single point. These are simple to obtain [@jaco03-0-efficiency; @matveev03-algms], and they are often easier to deal with both theoretically and computationally [@burton10-dd; @jaco02-algorithms-essential; @matveev03-algms].
Keeping the number of tetrahedra small is crucial in computational topology, since many important algorithms are exponential (or even super-exponential) in the number of tetrahedra [@burton10-complexity; @burton10-dd]. To this end, topologists have developed a rich suite of local simplification moves that allow us to reduce the number of tetrahedra without losing any topological information [@burton04-facegraphs; @matveev98-recognition].
The most basic of these are the four *Pachner moves* (also known as *bistellar moves*). These include the 3-2 move (which reduces the number of tetrahedra but preserves the number of vertices), the 4-1 move (which reduces both numbers), and also their inverses, the 2-3 and 1-4 moves. It is known that any two triangulations of the same closed 3-manifold are related by a sequence of Pachner moves [@pachner91-moves]. Moreover, if both are one-vertex triangulations then we can relate them using 2-3 and 3-2 moves alone [@matveev03-algms].
However, little is known about how *difficult* it is to relate two triangulations by a sequence of Pachner moves. In a series of papers, Mijatovi[ć]{} develops upper bounds on the number of moves required for various classes of 3-manifolds [@mijatovic03-simplifying; @mijatovic04-sfs; @mijatovic05-knot; @mijatovic05-haken]. All of these bounds are super-exponential in the number of tetrahedra, and some even involve exponential towers of exponential functions. For relating one-vertex triangulations using only 2-3 and 3-2 moves, no explicit bounds are known at all.
In this paper we focus on one-vertex triangulations of the 3-sphere. Here simplification is tightly linked to the important problem of *3-sphere recognition*, where we are given an input triangulation ${\mathcal{T}}$ and asked whether ${\mathcal{T}}$ represents the 3-sphere. This problem plays a key role in other important topological algorithms, such as connected sum decomposition [@jaco03-0-efficiency; @jaco95-algorithms-decomposition] and unknot recognition [@hara05-unknotting], and it is now becoming important in computational *4-manifold* topology. We can use Pachner moves for 3-sphere recognition in two ways:
- They give us a *direct* 3-sphere recognition algorithm: try all possible sequences of Pachner moves on ${\mathcal{T}}$ up to Mijatovi[ć]{}’s upper bound, and return “true” if and only if we reach one of the well-known “canonical” 3-sphere triangulations with one or two tetrahedra.
- They also allow a *hybrid* recognition algorithm: begin with a fast and/or greedy procedure to simplify ${\mathcal{T}}$ as far as possible within a limited number of moves. If we reach a canonical 3-sphere triangulation then return “true”; otherwise run a more traditional 3-sphere recognition algorithm on our new (and hopefully simpler) triangulation.
The direct algorithm lies well outside the realm of feasibility: Mijatovi[ć]{}’s bound is super-exponential in the number of tetrahedra, and the running time is at least exponential in Mijatovi[ć]{}’s bound. Current implementations [@burton04-regina] use the hybrid method, which is extremely effective in practice. Experience suggests that when ${\mathcal{T}}$ *is* the 3-sphere, the greedy simplification almost always gives a canonical triangulation. If simplification fails, we revert to the traditional algorithm of Rubinstein [@rubinstein95-3sphere]; although this runs in exponential time, recent improvements by several authors have made it extremely effective for moderate-sized problems [@burton10-dd; @burton10-quadoct; @jaco03-0-efficiency; @thompson94-thinposition].[^1]
Our aims in this paper are:
- to measure how easy or difficult it is *in practice* to relate two triangulations of the 3-sphere using Pachner moves, or to simplify a 3-sphere triangulation to use fewer tetrahedra;
- to understand why greedy simplification techniques work so well in practice, despite the prohibitive theoretical bounds of Mijatovi[ć]{};
- to investigate the possibility that Pachner moves could be used as the basis for a direct 3-sphere recognition algorithm that runs in sub-exponential or even polynomial time.
Fundamentally this is an experimental paper (though the theoretical underpinnings are interesting in their own right). Based on an exhaustive study of $\sim 150$ million triangulations (including $\sim 31$ million one-vertex triangulations of the 3-sphere), the answers appear to be:
- we can relate and simplify one-vertex triangulations of the 3-sphere using remarkably few Pachner moves;
- both procedures require us to add *at most two* extra tetrahedra, which explains why greedy simplification works so well;
- the number of moves required to simplify such a triangulation could also be bounded by a constant, which means polynomial-time 3-sphere recognition may indeed be possible.
These observations are extremely surprising, especially in light of Mijatovi[ć]{}’s bounds. If they can be proven in general—yielding a polynomial-time 3-sphere recognition algorithm—this would be a significant breakthrough in computational topology.
In Section \[s-prelim\] we outline preliminary concepts and introduce the *Pachner graph*, an infinite graph whose nodes represent triangulations and whose arcs represent Pachner moves. This graph is the framework on which we build the rest of the paper. We define *simplification paths* through the graph, as well as the key quantities of *length* and *excess height* that we seek to measure.
We follow in Section \[s-tools\] with two key tools for studying the Pachner graph: an isomorph-free census of all closed 3-manifold triangulations with $\leq 9$ tetrahedra (which gives us the nodes of the graph), and *isomorphism signatures* of triangulations that can be computed in polynomial time (which allow us to construct the arcs of the graph).
Section \[s-analysis\] describes parallel algorithms for bounding both the length and excess height of simplification paths, and presents the highly unexpected experimental results outlined above. We finish in Section \[s-conc\] with a discussion of the implications and consequences of these results.
Triangulations and the Pachner graph {#s-prelim}
====================================
A *3-manifold triangulation of size $n$* is a collection of $n$ tetrahedra whose $4n$ faces are affinely identified (or “glued together”) in $2n$ pairs so that the resulting topological space is a closed 3-manifold.[^2] We are not interested in the shapes or sizes of tetrahedra (since these do not affect the topology), but merely the combinatorics of how the faces are glued together. Throughout this paper, all triangulations and 3-manifolds are assumed to be connected.
We do allow two faces of the same tetrahedron to be identified, and we also note that distinct edges or vertices of the same tetrahedron might become identified as a by-product of the face gluings. A set of tetrahedron vertices that are identified together is collectively referred to as a *vertex of the triangulation*; we define an *edge* or *face of the triangulation* in a similar fashion.
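The face-gluing data described above has a simple combinatorial encoding. As a minimal illustration in Python (the representation and helper below are our own, not taken from any particular software package), we can store the gluings as a pairing of the $4n$ faces and check the basic structural condition that faces are identified in $2n$ pairs:

```python
def is_valid_pairing(n, gluings):
    """Check that the 4n faces of n tetrahedra are identified in
    2n pairs.  Faces are encoded as (tetrahedron, face) with
    tetrahedra numbered 0..n-1 and faces 0..3; the vertex maps of
    each gluing are omitted in this sketch."""
    faces = {(t, f) for t in range(n) for f in range(4)}
    if set(gluings) != faces:
        return False          # every face must be glued to something
    for face, partner in gluings.items():
        if partner == face:
            return False      # a face cannot be paired with itself
        if gluings.get(partner) != face:
            return False      # pairings must be mutual
    return True

# A size-1 example: glue face 0 to face 1 and face 2 to face 3 of
# the same tetrahedron (two faces of one tetrahedron may be
# identified, as noted above).
example = {(0, 0): (0, 1), (0, 1): (0, 0),
           (0, 2): (0, 3), (0, 3): (0, 2)}
```

Of course this only checks the pairing structure; whether the resulting space is a closed 3-manifold depends also on the vertex maps and the resulting edge and vertex links.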
![A 3-manifold triangulation of size $n=2$[]{data-label="fig-rp3"}](rp3)
Figure \[fig-rp3\] illustrates a 3-manifold triangulation of size $n=2$. Here the back two faces of the first tetrahedron are identified with a twist, the front faces of the first tetrahedron are identified with the front faces of the second using more twists, and the back faces of the second tetrahedron are identified together by directly “folding” one onto the other. This is a *one-vertex triangulation* since all eight tetrahedron vertices become identified together. The triangulation has three distinct edges, indicated in the diagram by three distinct arrowheads.
Mijatovi[ć]{} [@mijatovic03-simplifying] describes a *canonical triangulation* of the 3-sphere of size $n=2$, formed by a direct identification of the boundaries of two tetrahedra. In other words, given two tetrahedra $\mathit{ABCD}$ and $A'B'C'D'$, we directly identify face $\mathit{ABC}$ with $A'B'C'$, $ABD$ with $A'B'D'$, and so on. The resulting triangulation has four faces, six edges, and four vertices.
The four *Pachner moves* describe local modifications to a triangulation. These include:
- the *2-3 move*, where we replace two distinct tetrahedra joined along a common face with three distinct tetrahedra joined along a common edge;
- the *1-4 move*, where we replace a single tetrahedron with four distinct tetrahedra meeting at a common internal vertex;
- the *3-2* and *4-1 moves*, which are inverse to the 2-3 and 1-4 moves.
These four moves are illustrated in Figure \[fig-pachner\]. Essentially, the 1-4 and 4-1 moves retriangulate the interior of a pyramid, and the 2-3 and 3-2 moves retriangulate the interior of a bipyramid. It is clear that Pachner moves do not change the topology of the triangulation (i.e., the underlying 3-manifold remains the same). Another important observation is that the 2-3 and 3-2 moves do not change the number of vertices in the triangulation.
Two triangulations are *isomorphic* if they are identical up to a relabelling of tetrahedra and a reordering of the four vertices of each tetrahedron (that is, isomorphic in the usual combinatorial sense). Up to isomorphism, there are finitely many distinct triangulations of any given size.
Pachner originally showed that any two triangulations of the same closed 3-manifold can be made isomorphic by performing a sequence of Pachner moves [@pachner91-moves].[^3] Matveev later strengthened this result to show that any two *one-vertex* triangulations of the same closed 3-manifold with at least two tetrahedra can be made isomorphic through a sequence of 2-3 and/or 3-2 moves [@matveev03-algms]. The two-tetrahedron condition is required because it is impossible to perform a 2-3 or 3-2 move upon a one-tetrahedron triangulation (each move requires two or three distinct tetrahedra).
In this paper we introduce the *Pachner graph*, which describes *how* distinct triangulations of a closed 3-manifold can be related via Pachner moves. We define this graph in terms of *nodes* and *arcs*, to avoid confusion with the *vertices* and *edges* that appear in 3-manifold triangulations.
Let $M$ be any closed 3-manifold. The *Pachner graph* of $M$, denoted ${\mathscr{P}(M)}$, is an infinite graph constructed as follows. The nodes of ${\mathscr{P}(M)}$ correspond to isomorphism classes of triangulations of $M$. Two nodes of ${\mathscr{P}(M)}$ are joined by an arc if and only if there is some Pachner move that converts one class of triangulations into the other.
The *restricted Pachner graph* of $M$, denoted ${\mathscr{P}_1(M)}$, is the subgraph of ${\mathscr{P}(M)}$ defined by only those nodes corresponding to one-vertex triangulations. The nodes of ${\mathscr{P}(M)}$ and ${\mathscr{P}_1(M)}$ are partitioned into finite *levels* $1,2,3,\ldots$, where each level $n$ contains the nodes corresponding to $n$-tetrahedron triangulations.
![Levels 1–3 of the restricted Pachner graph of the 3-sphere[]{data-label="fig-rpg-s3"}](rpg-s3)
It is clear that the arcs are well-defined (since Pachner moves are preserved under isomorphism), and that arcs do not need to be directed (since each 2-3 or 1-4 move has a corresponding inverse 3-2 or 4-1 move). In the full Pachner graph ${\mathscr{P}(M)}$, each arc runs from some level $i$ to a nearby level $i\pm1$ or $i\pm3$. In the restricted Pachner graph ${\mathscr{P}_1(M)}$, each arc must describe a 2-3 or 3-2 move, and must run from some level $i$ to an adjacent level $i\pm1$. Figure \[fig-rpg-s3\] shows the first few levels of the restricted Pachner graph of the 3-sphere.
We can now reformulate the results of Pachner and Matveev as follows:
\[t-connected\] The Pachner graph of any closed 3-manifold is connected. If we delete level 1, the restricted Pachner graph of any closed 3-manifold is also connected.
To simplify a triangulation we essentially follow a path through ${\mathscr{P}(M)}$ or ${\mathscr{P}_1(M)}$ from a higher level to a lower level, which motivates the following definition.
A *simplification path* is a directed path through either ${\mathscr{P}(M)}$ or ${\mathscr{P}_1(M)}$ from a node at some level $i$ to a node at some lower level $<i$. The *length* of a simplification path is the number of arcs it contains. The *excess height* of a simplification path is the smallest $h \geq 0$ for which the entire path stays in or below level $i+h$.
Both the length and excess height measure how difficult it is to simplify a triangulation: the length measures the number of Pachner moves, and the excess height measures the number of extra tetrahedra required. For the 3-sphere, the only known bounds on these quantities are the following:
\[t-mij\] Any triangulation of the 3-sphere of size $n$ can be converted into the canonical triangulation using fewer than $6 \cdot 10^6 n^2 2^{2 \cdot 10^4 n^2}$ Pachner moves.
In the Pachner graph of the 3-sphere, from any node at level $n>2$ there is a simplification path of length less than $6 \cdot 10^6 n^2 2^{2 \cdot 10^4 n^2}$ and excess height less than $3 \cdot 10^6 n^2 2^{2 \cdot 10^4 n^2}$.
In the *restricted* Pachner graph, no explicit bounds on these quantities are known at all.
Key tools {#s-tools}
=========
Experimental studies of the Pachner graph are difficult: the graph itself is infinite, and even the finite level sets grow super-exponentially in size. By working with isomorphism classes of triangulations, we keep the level sets considerably smaller than if we had used labelled triangulations instead. However, the trade-off is that both the nodes and the arcs of the graph are more difficult to construct.
In this section we outline two key algorithmic tools for studying the Pachner graph: a *census of triangulations* (which enumerates the nodes at each level), and polynomial-time computable *isomorphism signatures* (which allow us to construct the arcs).
A census of triangulations {#s-tools-census}
--------------------------
To enumerate the nodes of Pachner graphs, we build a census of all 3-manifold triangulations of size $n \leq 9$, with each triangulation included precisely once up to isomorphism. Because we are particularly interested in one-vertex triangulations as well as triangulations of the 3-sphere, we extract such triangulations into separate censuses with the help of the highly optimised 3-sphere recognition algorithm described in [@burton10-quadoct]. The final counts are summarised in Table \[tab-census\].
  ------------ ---------------- ---------------- ---------------- ----------------
  Number of    All              One-vertex       3-sphere         One-vertex
  tetrahedra   triangulations   triangulations   triangulations   3-spheres
  1            4                3                2                1
  2            17               12               6                3
  3            81               63               32               20
  4            577              433              198              128
  5            5184             3961             1903             1297
  6            57753            43584            19935            13660
  7            722765           538409           247644           169077
  8            9787509          7148483          3185275          2142197
  9            139103032        99450500         43461431         28691150
  Total        149676922        107185448        46916426         31017533
  ------------ ---------------- ---------------- ---------------- ----------------
: Counts for 3-manifold triangulations of various types in the census[]{data-label="tab-census"}
The algorithms behind this census are sophisticated; see [@burton07-nor10] for some of the techniques involved. The constraint that the triangulation must represent a 3-manifold is critical: if we just enumerate all pairwise identifications of faces up to isomorphism, there are at least $$\frac{[(4n-1)\times(4n-3)\times\cdots\times3\times1]\cdot6^{2n}}
{n! \cdot 24^n} \quad \simeq \quad 2.35 \times 10^{16}$$ possibilities for $n=9$. To enforce the 3-manifold constraint we use a modified union-find algorithm that tracks partially-constructed edge links and vertex links; see [@burton07-nor10] for details.
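The figure of $2.35 \times 10^{16}$ can be checked directly from this formula; a quick computation (independent of the census software):

```python
from math import factorial, prod

def raw_gluing_count(n):
    """Lower bound on the number of pairwise face identifications
    of n tetrahedra up to isomorphism:
    [(4n-1)(4n-3)...3*1] * 6^(2n) / (n! * 24^n).
    Floor division is used; the formula is a lower bound anyway."""
    double_fact = prod(range(4 * n - 1, 0, -2))   # (4n-1)!!
    return double_fact * 6 ** (2 * n) // (factorial(n) * 24 ** n)
```

Evaluating `raw_gluing_count(9)` gives roughly $2.35 \times 10^{16}$, matching the figure quoted above.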
Even with this constraint, we can prove that the census grows at a super-exponential rate:
\[t-numvert\] The number of distinct isomorphism classes of 3-manifold triangulations of size $n$ grows at an asymptotic rate of $\exp(\Theta(n\log n))$.
The proof is detailed, and is given in the appendix.
For the largest case $n=9$, the enumeration of all 3-manifold triangulations up to isomorphism required $\sim 85$ days of CPU time as measured on a single 1.7 GHz IBM Power5 processor (though in reality this was reduced to 2–3 days of wall time using 32 CPUs in parallel). The time required to extract all 3-sphere triangulations from this census was negligible in comparison.
Isomorphism signatures
----------------------
To construct arcs of the Pachner graph, we begin at a node—that is, a 3-manifold triangulation ${\mathcal{T}}$—and perform Pachner moves. Each Pachner move results in a new triangulation ${\mathcal{T}}'$, and our main difficulty is in deciding which node of the Pachner graph represents ${\mathcal{T}}'$.
A naïve approach might be to search through nodes at the appropriate level of the Pachner graph and test each corresponding triangulation for isomorphism with ${\mathcal{T}}'$. However, this approach is infeasible: although isomorphism testing is fast (as we prove below), the sheer number of nodes at level $n$ of the graph is too large (see Theorem \[t-numvert\]).
What we need is a property of the triangulation ${\mathcal{T}}'$ that is easy to compute, and that uniquely defines the isomorphism class of ${\mathcal{T}}'$. This property could be used as the key in a data structure with fast insertion and fast lookup (such as a hash table or a red-black tree), and by computing this property we could quickly jump to the relevant node of the Pachner graph.
Here we define such a property, which we call the *isomorphism signature* of a triangulation. In Theorem \[t-sig-unique\] we show that isomorphism signatures do indeed uniquely define isomorphism classes, and in Theorem \[t-sig-fast\] we show that they are small to store and fast to compute.
A *labelling* of a triangulation of size $n$ involves: (i) numbering its tetrahedra from 1 to $n$ inclusive, and (ii) numbering the four vertices of each tetrahedron from 1 to 4 inclusive. We also label the four faces of each tetrahedron from 1 to 4 inclusive so that face $i$ is opposite vertex $i$. A key ingredient of isomorphism signatures is *canonical labellings*, which we define as follows.
Given a labelling of a triangulation of size $n$, let $A_{t,f}$ denote the tetrahedron which is glued to face $f$ of tetrahedron $t$ (so that $A_{t,f} \in \{1,\ldots,n\}$ for all $t=1,\ldots,n$ and $f=1,\ldots,4$). The labelling is *canonical* if, when we write out the sequence $A_{1,1},A_{1,2},A_{1,3},A_{1,4},\allowbreak A_{2,1},\ldots,A_{n,4}$, the following properties hold:
(i) For each $2 \leq i < j$, tetrahedron $i$ first appears before tetrahedron $j$ first appears.
(ii) For each $i \geq 2$, suppose tetrahedron $i$ first appears as the entry $A_{t,f}$. Then the corresponding gluing uses the *identity map*: face $f$ of tetrahedron $t$ is glued to face $f$ of tetrahedron $i$ so that vertex $v$ of tetrahedron $t$ maps to vertex $v$ of tetrahedron $i$ for each $v \neq f$.
As an example, consider the triangulation of size $n=3$ described by Table \[tab-gluings\]. This table lists the precise gluings of tetrahedron faces. For instance, the second cell in the bottom row indicates that face 2 of tetrahedron 3 is glued to tetrahedron 2, in such a way that vertices $1,3,4$ of tetrahedron 3 map to vertices $4,2,3$ of tetrahedron 2 respectively. This same gluing can be seen from the other direction by examining the first cell in the middle row.
------------ -------------- -------------- -------------- --------------
Vertices 234 Vertices 134 Vertices 124 Vertices 123
[Tet. 1]{} Tet. 1:231 Tet. 2:134 Tet. 3:124 Tet. 1:423
[Tet. 2]{} Tet. 1:134 Tet. 2:123 Tet. 2:124
[Tet. 3]{} Tet. 3:123 Tet. 1:124 Tet. 3:234
------------ -------------- -------------- -------------- --------------
: The tetrahedron face gluings for an example 3-tetrahedron triangulation[]{data-label="tab-gluings"}
It is simple to see that the labelling for this triangulation is canonical. The sequence $A_{1,1},\ldots,A_{n,4}$ is $1,2,3,1,\allowbreak 3,1,2,2,\allowbreak 3,2,1,3$ (reading tetrahedron numbers from left to right and then top to bottom in the table), and tetrahedron 2 first appears before tetrahedron 3 as required. Looking closer, the first appearance of tetrahedron 2 is in the second cell of the top row where vertices $1,3,4$ map to $1,3,4$, and the first appearance of tetrahedron 3 is in the following cell where vertices $1,2,4$ map to $1,2,4$. In both cases the gluings use the identity map.
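Condition (i) is straightforward to check mechanically; a small sketch (the function name is ours), applied to the sequence above:

```python
def condition_i_holds(seq):
    """Condition (i): for each 2 <= i < j, tetrahedron i first
    appears in the sequence before tetrahedron j first appears.
    Tetrahedron 1 is unconstrained by this condition."""
    debut_order = []
    for t in seq:
        if t >= 2 and t not in debut_order:
            debut_order.append(t)
    return debut_order == sorted(debut_order)

# The sequence A_{1,1},...,A_{3,4} from Table [tab-gluings]:
seq = [1, 2, 3, 1, 3, 1, 2, 2, 3, 2, 1, 3]
```

Here `condition_i_holds(seq)` is true: tetrahedron 2 debuts (in position 2) before tetrahedron 3 (position 3).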
\[l-can-fast\] For any triangulation ${\mathcal{T}}$ of size $n$, there are precisely $24n$ canonical labellings of ${\mathcal{T}}$, and these can be enumerated in $O(n^2\log n)$ time.
In summary, we can choose any of the $n$ tetrahedra to label as tetrahedron 1, and we can choose any of the $4!=24$ labellings of its four vertices. From here the remaining labels are forced, and can be deduced in $O(n\log n)$ time. The full proof is given in the appendix.
For any triangulation ${\mathcal{T}}$ of size $n$, enumerate all $24n$ canonical labellings of ${\mathcal{T}}$, and for each canonical labelling encode the full set of face gluings as a sequence of bits. We define the *isomorphism signature* to be the lexicographically smallest of these $24n$ bit sequences, and we denote this by ${\sigma}({\mathcal{T}})$.
To encode the full set of face gluings for a canonical labelling, we could simply convert a table of gluing data (such as Table \[tab-gluings\]) into a sequence of bits. For practical implementations we use a more compact representation, which will be described in the full version of this paper.
\[t-sig-unique\] Given two 3-manifold triangulations ${\mathcal{T}}$ and ${\mathcal{T}}'$, we have ${\sigma}({\mathcal{T}}) = {\sigma}({\mathcal{T}}')$ if and only if ${\mathcal{T}}$ and ${\mathcal{T}}'$ are isomorphic.
It is clear that ${\sigma}({\mathcal{T}}) = {\sigma}({\mathcal{T}}')$ implies that ${\mathcal{T}}$ and ${\mathcal{T}}'$ are isomorphic, since both signatures encode the same gluing data. Conversely, if ${\mathcal{T}}$ and ${\mathcal{T}}'$ are isomorphic then their $24n$ canonical labellings are the same (though they might be enumerated in a different order). In particular, the lexicographically smallest canonical labellings will be identical; that is, ${\sigma}({\mathcal{T}})={\sigma}({\mathcal{T}}')$.
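The principle behind isomorphism signatures—encode the structure under every admissible labelling and keep the lexicographically smallest encoding—is not specific to triangulations. As a simplified analogue (labelled graphs instead of triangulations, and brute force over all $n!$ relabellings instead of the $24n$ canonical ones), assuming nothing beyond the Python standard library:

```python
from itertools import permutations

def graph_signature(n, edges):
    """Canonical form of a graph on vertices 0..n-1: the
    lexicographically smallest sorted edge list over all vertex
    relabellings.  Brute force over n! permutations -- purely for
    illustration on tiny graphs."""
    best = None
    for perm in permutations(range(n)):
        relabelled = sorted(tuple(sorted((perm[u], perm[v])))
                            for u, v in edges)
        if best is None or relabelled < best:
            best = relabelled
    return tuple(best)

# Two isomorphic labellings of a path with two edges:
g1 = [(0, 1), (1, 2)]   # centre vertex 1
g2 = [(2, 0), (0, 1)]   # centre vertex 0
```

Two graphs are isomorphic if and only if their signatures agree. For triangulations, Lemma \[l-can-fast\] is what rescues this idea from brute force: only $24n$ canonical labellings need to be tried, not all $n! \cdot 24^n$ labellings.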
\[t-sig-fast\] Given a 3-manifold triangulation ${\mathcal{T}}$ of size $n$, the isomorphism signature ${\sigma}({\mathcal{T}})$ has $O(n\log n)$ size and can be generated in $O(n^2\log n)$ time.
To encode a full set of face gluings, at worst we require a table of gluing data such as Table \[tab-gluings\], with $4n$ cells each containing four integers. Because some of these integers require $O(\log n)$ bits (the tetrahedron labels), it follows that the total size of ${\sigma}({\mathcal{T}})$ is $O(n \log n)$.
The algorithm to generate ${\sigma}({\mathcal{T}})$ is spelled out explicitly in its definition. The $24n$ canonical labellings of ${\mathcal{T}}$ can be enumerated in $O(n^2\log n)$ time (Lemma \[l-can-fast\]). Because a full set of face gluings has size $O(n\log n)$, we can encode the $24n$ bit sequences and select the lexicographically smallest in $O(n^2\log n)$ time, giving a time complexity of $O(n^2\log n)$ overall.
This space complexity of $O(n\log n)$ is the best we can hope for, since Theorem \[t-numvert\] shows that the number of distinct isomorphism signatures for size $n$ triangulations grows like $\exp(\Theta(n \log n))$.
It follows from Theorems \[t-sig-unique\] and \[t-sig-fast\] that isomorphism signatures are ideal tools for constructing arcs in the Pachner graph, as explained at the beginning of this section. Moreover, the relevant definitions and results are easily extended to bounded and ideal triangulations (which are beyond the scope of this paper). We finish with a simple but important consequence of our results:
Given two 3-manifold triangulations ${\mathcal{T}}$ and ${\mathcal{T}}'$ each of size $n$, we can test whether ${\mathcal{T}}$ and ${\mathcal{T}}'$ are isomorphic in $O(n^2\log n)$ time.
Analysing the Pachner graph {#s-analysis}
===========================
As discussed in the introduction, our focus is on one-vertex triangulations of the 3-sphere. We therefore direct our attention to ${\mathscr{P}_1(S^3)}$, the restricted Pachner graph of the 3-sphere.
In this section we develop algorithms to bound the shortest length and smallest excess height of any simplification path from a given node at level $n$ of ${\mathscr{P}_1(S^3)}$. By running these algorithms over the full census of $31\,017\,533$ one-vertex triangulations of the 3-sphere (as described in Section \[s-tools-census\]), we obtain a computer proof of the following results:
\[t-results\] From any node at level $n$ of the graph ${\mathscr{P}_1(S^3)}$ where $3 \leq n \leq 9$, there is a simplification path of length $\leq 13$, and there is a simplification path of excess height $\leq 2$.
The bound $3 \leq n$ is required because there are no simplification paths in ${\mathscr{P}_1(S^3)}$ starting at level 2 or below (see Figure \[fig-rpg-s3\]). For $n > 9$ a computer proof becomes computationally infeasible.
The results of Theorem \[t-results\] are astonishing, especially in light of Mijatovi[ć]{}’s super-exponential bounds. Furthermore, whilst it can be shown that the excess height bound of $\leq 2$ is tight, the length estimate of $\leq 13$ is extremely rough: the precise figures could be much smaller still. These results have important implications, which we discuss later in Section \[s-conc\].
In this section we describe the algorithms behind Theorem \[t-results\], and we present the experimental results in more detail. Our algorithms are constrained by the following factors:
- Their time and space complexities must be close to linear in the number of nodes that they examine, due to the sheer size of the census.
- They cannot loop through all nodes in ${\mathscr{P}_1(S^3)}$, since the graph is infinite. They cannot even loop through all nodes at level $n \geq 10$, since there are too many to enumerate.
- They cannot follow arbitrary breadth-first or depth-first searches through ${\mathscr{P}_1(S^3)}$, since the graph is infinite and can branch heavily in the upward direction.[^4]
Because of these limiting factors, we cannot run through the census and directly measure the shortest length or smallest excess height of any simplification path from each node. Instead we develop fast, localised algorithms that allow us to bound these quantities from above. To our delight, these bounds turn out to be extremely effective in practice. The details are as follows.
Bounding excess heights {#s-analysis-height}
-----------------------
In this section we compute bounds $H_n$ so that, from every node at level $n$ of the graph ${\mathscr{P}_1(S^3)}$, there is some simplification path of excess height $\leq H_n$. As in Theorem \[t-results\], we compute these bounds for each $n$ in the range $3 \leq n \leq 9$.
\[a-height\] This algorithm runs by progressively building a subgraph $G \subset {\mathscr{P}_1(S^3)}$. At all times we keep track of the number of distinct components of $G$ (which we denote by $c$) and the maximum level of any node in $G$ (which we denote by $\ell$).
1. Initialise $G$ to all of level $n$ of ${\mathscr{P}_1(S^3)}$. This means that $G$ has no arcs, the number of components $c$ is just the number of nodes at level $n$, and the maximum level is $\ell = n$.
2. While $c > 1$, expand the graph as follows:
(a) Construct all arcs from nodes in $G$ at level $\ell$ to (possibly new) nodes in ${\mathscr{P}_1(S^3)}$ at level $\ell+1$. Insert these arcs and their endpoints into $G$.
(b) Update the number of components $c$, and increment $\ell$ by one.
3. Once we have $c=1$, output the final bound $H_n = \ell - n$ and terminate.
In step 2(a) we construct arcs by performing 2-3 moves. We only construct arcs from nodes *already* in $G$, which means we only work with a small portion of level $\ell$ for each $\ell > n$. In step 2(b) we use a union-find structure to update the number of components, with near-constant amortised cost per update.
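The union-find structure used in step 2(b) can be implemented along standard lines; a generic sketch (not Regina's actual implementation) with path compression and union by rank:

```python
class UnionFind:
    """Disjoint-set structure with path compression and union by
    rank, giving near-constant amortised time per operation."""

    def __init__(self, size):
        self.parent = list(range(size))
        self.rank = [0] * size
        self.components = size

    def find(self, x):
        # Path compression: hop grandparents on the way to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False        # already in the same component
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx     # attach the shallower tree below
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        self.components -= 1
        return True
```

In Algorithm \[a-height\], each new arc triggers one `union` call, and the loop in step 2 terminates once `components` reaches 1.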
It is clear that Algorithm \[a-height\] is correct for any $n \geq 3$: once we have $c=1$ the subgraph $G$ is connected, which means there is a path from any node at level $n$ to any other node at level $n$ that never rises above level $\ell$. By Theorem \[t-connected\] at least one such node allows a 3-2 move, and so any node at level $n$ has a simplification path of excess height $\leq \ell - n$.
However, it is not clear that Algorithm \[a-height\] terminates: it might be that *every* simplification path from some node at level $n$ passes through nodes that we never construct at higher levels $\ell > n$. Happily it does terminate for all $3 \leq n \leq 9$, giving an output of $H_n = 2$ each time. Table \[tab-height\] shows how the number of components $c$ changes throughout the algorithm in each case.
$$\small \begin{array}{l|r|r|r|r|r|r|r}
\mbox{Input level $n$} & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
\mbox{Value of $c$ when $\ell = n$} &
20 & 128 & 1\,297 & 13\,660 & 169\,077 & 2\,142\,197 & 28\,691\,150 \\
\mbox{Value of $c$ when $\ell = n+1$} &
8 & 50 & 196 & 1\,074 & 7\,784 & 64\,528 & 557\,428 \\
\mbox{Value of $c$ when $\ell = n+2$} &
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\hline
\mbox{Final bound $H_n$} &
\mathbf{2} & \mathbf{2} & \mathbf{2} & \mathbf{2} & \mathbf{2} &
\mathbf{2} & \mathbf{2}
\end{array}$$

  : The number of components $c$ at each stage of Algorithm \[a-height\][]{data-label="tab-height"}
It is straightforward to show that the space and time complexities of Algorithm \[a-height\] are linear and log-linear respectively in the number of nodes in $G$ (other small polynomial factors in $n$ and $\ell$ also appear). Nevertheless, the memory requirements for $n=8$ were found to be extremely large in practice ($\sim$29GB), and for $n=9$ they were too large for the algorithm to run (estimated at 400–500GB). In the case of $n=9$ a *two-phase* approach was necessary:
1. Use Algorithm \[a-height\] for the transition from level $n$ to level $n+1$, and terminate if $H_n = 1$.
2. From each node $v$ at level $n+1$, try all possible *combinations* of a 2-3 move followed by a 3-2 move. Let $w$ be the endpoint of such a combination (so $w$ is also a node at level $n+1$). If $w \in G$ then merge the components and decrement $c$ if necessary. Otherwise do nothing (since $w$ would never have been constructed in the original algorithm).
3. If $c=1$ after this procedure then output $H_n=2$; otherwise terminate with no result.
It is important to note that, if this two-phase approach *does* output a result, it will always be the same result as Algorithm \[a-height\]. Essentially Step 2 simulates the transition from level $n+1$ to $n+2$ in the original algorithm, with the advantage of a much smaller memory footprint (since it does not store any nodes at level $n+2$), but with the disadvantage that it cannot move on to level $n+3$ if required (and so it cannot output any result if $H_n > 2$).
Of course by the time we reach $n=9$ there are reasons to suspect that $H_n=2$ (following the pattern for $3 \leq n \leq 8$), and so this two-phase method seems a reasonable (and ultimately successful) approach. For $n=9$ the memory consumption was $\sim$50GB, which was (just) within the capabilities of the host machine.
Bounding path lengths
---------------------
Our next task is to compute bounds $L_n$ so that, from every node at level $n$ of ${\mathscr{P}_1(S^3)}$, there is some simplification path of length $\leq L_n$. Once again we compute $L_n$ for $3 \leq n \leq 9$.
Because it is infeasible to perform arbitrary breadth-first searches through ${\mathscr{P}_1(S^3)}$, we only consider paths that can be expressed as a series of *jumps*, where each jump involves a pair of 2-3 moves followed by a pair of 3-2 moves. This keeps the search space and memory usage small: we always stay within levels $n$, $n+1$ and $n+2$, and we never need to explicitly store any nodes above level $n$. On the other hand, it means that our bounds $L_n$ are very rough—there could be much shorter simplification paths that we do not detect.
\[a-length\] First identify the set $I$ of all nodes at level $n$ of ${\mathscr{P}_1(S^3)}$ that have an arc running down to level $n-1$. Then conduct a breadth-first search across level $n$, beginning with the nodes in $I$ and using jumps as the steps in this breadth-first search. If $j$ is the maximum number of jumps required to reach any node in level $n$ from the initial set $I$, then output the final bound $L_n = 4j+1$.
To identify the initial set $I$ we simply attempt to perform 3-2 moves. When we process each node $v$, we must enumerate all jumps out from $v$; that is, all combinations of two 2-3 moves followed by two 3-2 moves. The number of such combinations is $O(n^4)$ in general.
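Algorithm \[a-length\] is a multi-source breadth-first search with jumps as the adjacency relation. A generic sketch (with the jump enumeration abstracted into a callback, since the Pachner moves themselves are beyond this sketch):

```python
from collections import deque

def max_jumps_needed(nodes, initial, jumps_from):
    """Multi-source BFS: return the largest number of jumps needed
    to reach any node from the initial set, or None if some node
    is unreachable.  jumps_from(v) yields the nodes one jump away
    from v."""
    dist = {v: 0 for v in initial}
    queue = deque(initial)
    while queue:
        v = queue.popleft()
        for w in jumps_from(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    if len(dist) < len(nodes):
        return None
    return max(dist.values())

# Toy example: nodes 0..4 on a path, with jumps one step either way.
def line(v):
    """Neighbours of v on the path graph 0-1-2-3-4."""
    return [u for u in (v - 1, v + 1) if 0 <= u <= 4]
```

With $j$ the returned maximum, the final bound would be $L_n = 4j + 1$ as in Algorithm \[a-length\].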
This time we can guarantee both correctness and termination if $3 \leq n \leq 9$. Because $n \geq 3$ the initial set $I$ is non-empty (Theorem \[t-connected\]), and from our height experiments in Section \[s-analysis-height\] we know that our search will eventually reach all of level $n$. It follows that every node at level $n$ of ${\mathscr{P}_1(S^3)}$ has a path of length $\leq 4j$ to some $v \in I$, and therefore a simplification path of length $\leq 4j+1$. Table \[tab-length\] shows how the search progresses for each $n$.
$$\small \begin{array}{l|r|r|r|r|r|r|r}
\mbox{Input level $n$} & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
\mbox{Size of $I$} &
3 & 46 & 504 & 6\,975 & 91\,283 & 1\,300\,709 & 18\,361\,866 \\
\mbox{Nodes remaining} &
17 & 82 & 793 & 6\,685 & 77\,794 & 841\,488 & 10\,329\,284 \\
\mbox{Nodes remaining after 1 jump} &
3 & 1 & 19 & 75 & 496 & 4\,222 & 31\,250 \\
\mbox{Nodes remaining after 2 jumps} &
0 & 0 & 1 & 1 & 0 & 6 & 12 \\
\mbox{Nodes remaining after 3 jumps} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline
\mbox{Final bound $L_n$} &
\mathbf{9} & \mathbf{9} & \mathbf{13} & \mathbf{13} & \mathbf{9} &
\mathbf{13} & \mathbf{13}
\end{array}$$

  : Progress of the breadth-first search in Algorithm \[a-length\][]{data-label="tab-length"}
This time the space and time complexities are linear and log-linear respectively in the number of nodes at level $n$ (again with further polynomial factors in $n$). This is considerably smaller than the number of nodes processed in Algorithm \[a-height\], and so for Algorithm \[a-length\] memory is not a problem: the case $n=9$ runs in under 4GB.
Parallelisation and performance {#s-analysis-perf}
-------------------------------
For $n=9$, both Algorithms \[a-height\] and \[a-length\] have lengthy running times: Algorithm \[a-height\] requires a very large number of nodes to be processed at levels 9, 10 and 11 of the Pachner graph, and Algorithm \[a-length\] spends significant time enumerating the $O(n^4)$ available jumps from each node.
We can parallelise both algorithms by processing nodes simultaneously (in step 2 of Algorithm \[a-height\], and during each stage of the breadth-first search in Algorithm \[a-length\]). We must be careful however to serialise any updates to the graph.
The experiments described here used an 8-core 2.93GHz Intel Xeon X5570 CPU with 72GB of RAM (using all cores in parallel). With the serialisation bottlenecks, Algorithms \[a-height\] and \[a-length\] achieved roughly $90.5\%$ and $98.5\%$ CPU utilisation for the largest case $n=9$, and ran for approximately 6 and 15 days respectively. All code was written using the topological software package [[*Regina*]{}]{} [@regina; @burton04-regina].
Discussion {#s-conc}
==========
As we have already noted, the bounds obtained in Section \[s-analysis\] are astonishingly small. Although we only consider $n \leq 9$, this is not a small sample: the census includes $\sim 150$ million triangulations including $\sim 31$ million one-vertex 3-spheres; moreover, nine tetrahedra are enough to build complex and interesting topological structures [@burton07-nor10; @martelli01-or9]. Our results lead us to the following conjectures:
\[cj-boundedheight\] From any node at any level $n \geq 3$ of the graph ${\mathscr{P}_1(S^3)}$ there is a simplification path of excess height $\leq 2$.
If true, this result (combined with Theorem \[t-numvert\]) would reduce Mijatovi[ć]{}’s bound in Theorem \[t-mij\] from $\exp(O(n^2))$ to $\exp(O(n\log n))$ for one-vertex triangulations of the 3-sphere. Furthermore, it would help explain why 3-sphere triangulations are so easy to simplify in practice.
There are reasons to believe that a proof might be possible. As a starting point, a simple Euler characteristic argument shows that every closed 3-manifold triangulation has an edge of degree $\leq 5$; using *at most two* “nearby” 2-3 moves, this edge can be made degree three (the setting for a possible 3-2 simplification). The details will appear in the full version of this paper.
\[cj-boundedmoves\] From any node at any level $n \geq 3$ of the graph ${\mathscr{P}_1(S^3)}$ there is a simplification path of length $\leq 13$.
This is a bolder conjecture, since the length experiments are less consistent in their results. However, the fact remains that every 3-sphere triangulation of size $n \leq 9$ can be simplified after just three jumps, and this number does not rise between $n=5$ and $n=9$.
If true, this second conjecture would yield an immediate polynomial-time 3-sphere recognition algorithm: for any triangulation of size $n \geq 3$ we can enumerate all $O(n^{4 \times 3})$ combinations of three jumps, and test each resulting triangulation for a 3-2 move down to $n-1$ tetrahedra. By repeating this process $n-2$ times, we will achieve either a recognisable 2-tetrahedron triangulation of the 3-sphere, or else a proof that our input is not a 3-sphere triangulation.
Even if Conjecture \[cj-boundedmoves\] is false and the length bounds do grow with $n$, this growth rate appears to be extremely slow. A growth rate of $L_n \in O(\log n)$ or even $O(\sqrt{n})$ would still yield the first known sub-exponential 3-sphere recognition algorithm (using the same procedure as above), which would be a significant theoretical breakthrough in algorithmic 3-manifold topology.
Looking forward, it is natural to ask whether this behaviour extends beyond the 3-sphere to triangulations of arbitrary 3-manifolds. Initial experiments suggest “partially”: the Pachner graphs of other 3-manifolds also appear to be remarkably well-connected, though not enough to support results as strong as Conjectures \[cj-boundedheight\] and \[cj-boundedmoves\] above. We explore these issues further in the full version of this paper.
Acknowledgements {#acknowledgements .unnumbered}
================
The author is grateful to the Australian Research Council for their support under the Discovery Projects funding scheme (project DP1094516). Computational resources used in this work were provided by the Queensland Cyber Infrastructure Foundation and the Victorian Partnership for Advanced Computing.
Benjamin A. Burton\
School of Mathematics and Physics, The University of Queensland\
Brisbane QLD 4072, Australia\
(bab@maths.uq.edu.au)
Appendix: Additional proofs {#appendix-additional-proofs .unnumbered}
===========================
Here we offer full proofs for Theorem \[t-numvert\] and Lemma \[l-can-fast\], which were omitted from the main text to simplify the exposition.
The number of distinct isomorphism classes of 3-manifold triangulations of size $n$ grows at an asymptotic rate of $\exp(\Theta(n\log n))$.
An upper bound of $\exp(O(n\log n))$ is easy to obtain. If we count all possible gluings of tetrahedron faces, without regard for isomorphism classes or other constraints (such as the need for the triangulation to represent a closed 3-manifold), we obtain an upper bound of $$\left[(4n-1)\times(4n-3)\times\cdots\times3\times1 \right]
\cdot 6^{2n} < (4n)^{2n} \cdot 6^{2n} \in \exp(O(n\log n)).$$
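As a quick sanity check on this inequality (our own sketch in Python, not part of the original proof), we can compare the exact gluing count against the stated bound for small $n$:

```python
from math import prod

def gluing_count_bound(n):
    """(4n-1)!! * 6^(2n): the number of ways to pair up the 4n
    tetrahedron faces, times the 6 possible gluing maps per pair."""
    return prod(range(1, 4 * n, 2)) * 6 ** (2 * n)

for n in range(1, 11):
    # each of the 2n odd factors of (4n-1)!! is strictly below 4n,
    # so the count is strictly below (4n)^(2n) * 6^(2n)
    assert gluing_count_bound(n) < (4 * n) ** (2 * n) * 6 ** (2 * n)
```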
Proving a lower bound of $\exp(\Omega(n\log n))$ is more difficult—the main complication is that most pairwise identifications of tetrahedron faces do not yield a 3-manifold at all [@dunfield06-random-covers]. We work around this by first counting *2-manifold* triangulations (which are much easier to obtain), and then giving a construction that “fattens” these into 3-manifold triangulations without introducing any unwanted isomorphisms.
To create a 2-manifold triangulation of size $2m$ (the size must always be even), we identify the $6m$ edges of $2m$ distinct triangles in pairs. Any such identification will always yield a closed 2-manifold (that is, nothing can “go wrong”, in contrast to the three-dimensional case).
There is, however, the issue of connectedness to deal with (recall from the beginning of Section \[s-prelim\] that all triangulations in this paper are assumed to be connected). To ensure that a labelled 2-manifold triangulation is connected, we insist that for each $k=2,3,\ldots,2m$, the first edge of the triangle labelled $k$ is identified with some edge from one of the triangles labelled $1,2,\ldots,k-1$. Of course many connected labelled 2-manifold triangulations do not have this property, but since we are proving a lower bound this does not matter.
We can now place a lower bound on the number of labelled 2-manifold triangulations. First we choose which edges to pair with the first edges from triangles $2,3,\ldots,2m$; from the property above we have $3 \times 4 \times \ldots \times 2m \times (2m+1) = \frac12 (2m+1)!$ choices. We then pair off the remaining $2m+2$ edges, with $(2m+1) \times (2m-1) \times \ldots \times 3 \times 1 =
(2m+1)!/2^m m!$ possibilities overall. Finally we note that each of the $3m$ pairs of edges can be identified using one of two possible orientations. The total number of labelled 2-manifold triangulations is therefore at least $$\frac{(2m+1)!}{2} \cdot \frac{(2m+1)!}{2^m m!} \cdot 2^{3m}
= \frac{(2m+1)! \cdot (2m+1)! \cdot 2^{2m}}{2 \cdot m!}.$$
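The factorial identities used in this count can be checked mechanically with exact integer arithmetic (our own sketch in Python):

```python
from math import factorial as f, prod

for m in range(1, 9):
    # choices for the first edges: 3 * 4 * ... * (2m+1) = (2m+1)!/2
    assert prod(range(3, 2 * m + 2)) == f(2 * m + 1) // 2
    # perfect matchings of the remaining 2m+2 edges:
    # (2m+1)!! = (2m+1)!/(2^m m!)
    assert prod(range(1, 2 * m + 2, 2)) == f(2 * m + 1) // (2 ** m * f(m))
    # the combined total stated above
    total = (f(2 * m + 1) // 2) * (f(2 * m + 1) // (2 ** m * f(m))) * 2 ** (3 * m)
    assert total == f(2 * m + 1) ** 2 * 2 ** (2 * m) // (2 * f(m))
```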
Each isomorphism class can contain at most $(2m)! \cdot 6^{2m}$ labelled triangulations, and so the number of distinct *isomorphism classes* of 2-manifold triangulations is bounded below by $$\begin{aligned}
\frac{(2m+1)! \cdot (2m+1)! \cdot 2^{2m}}
{2 \cdot m! \cdot (2m)! \cdot 6^{2m}} &=
\frac{(2m+1) \cdot (2m+1)!}{2 \cdot m! \cdot 3^{2m}} \\
&> (2m+1) \times 2m \times \cdots \times (m+2) \times (m+1) \times
\left(\tfrac{1}{9}\right)^m \\
&> (m+1)^{m+1} \cdot \left(\tfrac{1}{9}\right)^m \\
&\in \exp(\Omega(m\log m)).
\end{aligned}$$
We fatten each 2-manifold triangulation into a 3-manifold triangulation as follows. Let $F$ denote the closed 2-manifold described by the original triangulation.
1. Replace each triangle with a prism and glue the vertical faces of adjacent prisms together, as illustrated in Figure \[sub-fatten-prisms\]. This represents a *bounded* 3-manifold, which is the product space $F \times I$.
2. Cap each prism at both ends with a triangular pillow, as illustrated in Figure \[sub-fatten-pillow\]. The two faces of each pillow are glued to the top and bottom of the corresponding prism, effectively converting each prism into a solid torus. This produces the *closed* 3-manifold $F \times S^1$, and the complete construction is illustrated in Figure \[sub-fatten-all\].
3. Triangulate each pillow using two tetrahedra, which are joined along three internal faces surrounding an internal vertex. Triangulate each prism using $14$ tetrahedra, which again all meet at an internal vertex. Both triangulations are illustrated in Figure \[sub-fatten-tri\].
If the original 2-manifold triangulation uses $2m$ triangles, the resulting 3-manifold triangulation uses $n=32m$ tetrahedra. Moreover, if two 3-manifold triangulations obtained using this construction are isomorphic, the original 2-manifold triangulations must also be isomorphic. The reason for this is as follows:
- Any isomorphism between two such 3-manifold triangulations must map triangular pillows to triangular pillows. This is because the internal vertex of each triangular pillow meets only two tetrahedra, and no other vertices under our construction have this property.
- By “flattening” the triangular pillows into 2-dimensional triangles, we thereby obtain an isomorphism between the underlying 2-manifold triangulations.
It follows that, for $n=32m$, we obtain a family of $\exp(\Omega(m\log m)) = \exp(\Omega(n\log n))$ pairwise non-isomorphic 3-manifold triangulations.
This result is easily extended to $n \not\equiv 0 \bmod 32$. Let $V_n$ denote the number of distinct isomorphism classes of 3-manifold triangulations of size $n$.
- Each triangulation of size $n$ has at least $n-1$ distinct 2-3 moves available (since any face joining two distinct tetrahedra defines a 2-3 move, and there are at least $n-1$ such faces).
- On the other hand, each triangulation of size $n+1$ has at most $6(n+1)$ distinct 3-2 moves available (since each 3-2 move is defined by an edge that meets three distinct tetrahedra, and the triangulation has at most $6(n+1)$ edges in total).
It follows that $V_{n+1} \geq V_n \cdot \frac{n-1}{6(n+1)} \geq V_n/18$ for any $n > 1$. This gives $V_{32m+k} \geq V_{32m} / 18^{31}$ for sufficiently large $m$ and all $0 \leq k < 32$, and so we obtain $V_n \in \exp(\Omega(n\log n))$ with no restrictions on $n$.
Of course, we expect that $V_{n+1} \gg V_n$ (and indeed we see this in the census). The bounds that we use to show $V_{n+1} \geq V_n/18$ in the proof above are very loose, but they are sufficient for the asymptotic result that we seek.
For any triangulation ${\mathcal{T}}$ of size $n$, there are precisely $24n$ canonical labellings of ${\mathcal{T}}$, and these can be enumerated in $O(n^2\log n)$ time.
For $n=1$ the result is trivial, since all $24=4!$ possible labellings are canonical. For $n>1$ we observe that, if we choose (i) any one of the $n$ tetrahedra to label as tetrahedron 1, and (ii) any one of the $24$ possible labellings of its four vertices, then there is one and only one way to extend these choices to a canonical labelling of ${\mathcal{T}}$.
To see this, we can walk through the list of faces $F_{1,1},F_{1,2},F_{1,3},F_{1,4},F_{2,1},\ldots,F_{n,4}$, where $F_{t,i}$ represents face $i$ of tetrahedron $t$. The first face amongst $F_{1,1},\ldots,F_{1,4}$ that is joined to an unlabelled tetrahedron must in fact be joined to tetrahedron 2 using the identity map; this allows us to deduce tetrahedron 2 as well as the labels of its four vertices.
We inductively extend the labelling in this manner: once we have labelled tetrahedra $1,\ldots,k$ and their corresponding vertices, the first face amongst $F_{1,1},\ldots,F_{k,4}$ that is joined to an unlabelled tetrahedron must give us tetrahedron $k+1$ and the labels for its four vertices (again using the identity map). The resulting labelling is canonical, and all of the labels can be deduced in $O(n\log n)$ time using a single pass through the list $F_{1,1},\ldots,F_{n,4}$. The $\log n$ factor is required for manipulating tetrahedron labels, each of which requires $O(\log n)$ bits.
It follows that there are precisely $24n$ canonical labellings of ${\mathcal{T}}$, and that these can be enumerated in $O(n^2\log n)$ time using $24n$ iterations of the procedure described above.
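The labelling walk in this proof can be sketched in code. The following simplified Python version (our own illustration) tracks only the tetrahedron labels and ignores the vertex permutations; `glue` maps each (tetrahedron, face) pair to the pair it is glued to, and the triangulation is assumed to be connected:

```python
def canonical_tet_order(glue, n, start):
    """Deterministically extend the choice of a first tetrahedron to a
    labelling of all n tetrahedra, by scanning faces F_{1,1},...,F_{n,4}
    in order and labelling the first unlabelled neighbour found."""
    order = [start]            # order[i] = tetrahedron given label i+1
    labelled = {start}
    scan = 0                   # position in the flattened face list
    while len(order) < n:
        tet, face = order[scan // 4], scan % 4
        adj = glue.get((tet, face))      # None for a boundary face
        if adj is not None and adj[0] not in labelled:
            labelled.add(adj[0])
            order.append(adj[0])
        scan += 1
    return order

# three tetrahedra glued in a cycle: face 0 of each meets face 1 of the next
glue = {(0, 0): (1, 1), (1, 1): (0, 0),
        (1, 0): (2, 1), (2, 1): (1, 0),
        (2, 0): (0, 1), (0, 1): (2, 0)}
assert canonical_tet_order(glue, 3, 0) == [0, 1, 2]
assert canonical_tet_order(glue, 3, 2) == [2, 0, 1]
```

Running the walk from different starting tetrahedra yields different, but each uniquely determined, labellings; including the $24$ vertex labellings of the starting tetrahedron recovers the $24n$ count.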
[^1]: See [@burton10-quadoct] for explicit measurements of running time.
[^2]: It is sometimes useful to consider *bounded* triangulations where some faces are left unidentified, or *ideal* triangulations where the overall space only becomes a 3-manifold when we delete the vertices of each tetrahedron. Such triangulations do not concern us here.
[^3]: As Mijatovi[ć]{} notes, Pachner’s original result was proven only for true simplicial complexes, but it is easily extended to the more flexible definition of a triangulation that we use here [@mijatovic03-simplifying]. The key step is to remove irregularities by performing a second barycentric subdivision using Pachner moves.
[^4]: In general, a node at level $n$ can have up to $2n$ distinct neighbours at level $(n+1)$.
---
abstract: 'We present a universal Holevo-like upper bound on the locally accessible information for arbitrary multipartite ensembles. This bound allows us to analyze the indistinguishability of a set of orthogonal states under LOCC. We also derive the upper bound for the capacity of distributed dense coding with multipartite senders and multipartite receivers.'
author:
- Wei Song
title: Locally accessible information from multipartite ensembles
---
It is well known that any set of orthogonal states can be discriminated if there are no restrictions on the measurements that one can perform. However, discrimination with certainty is not guaranteed for multipartite orthogonal states if only local operations and classical communication (LOCC) are allowed [@Ghosh:2001; @Walgate:2000; @Chen:2003; @Fan:2004; @Watrous:2005; @Ye:2007]. For example, more than two orthogonal Bell states with a single copy cannot be distinguished by LOCC [@Ghosh:2001]. In Ref. [@Bennett:1999] Bennett *et al.* constructed a set of orthogonal bipartite pure product states that cannot be distinguished with certainty by LOCC. Another counterintuitive result was obtained in Ref. [@Horodecki:2003]: there are ensembles of locally distinguishable orthogonal states for which one can destroy local distinguishability by reducing the average entanglement of the ensemble states. To understand these interesting results more deeply, it is important to investigate the connection between classical and quantum information and the extraction of classical information about the ensemble by local operations and classical communication.
An important step was made in Ref. [@Badziag:2003], where Badziag *et al.* found a universal Holevo-like upper bound on the locally accessible information. They showed that for a bipartite ensemble $\left\{ {p_x ,\rho _x^{AB} } \right\}$, the locally accessible information is bounded by
$$\label{eq1} I^{LOCC} \le S\left( {\rho ^A} \right) + S\left( {\rho
^B} \right) - \mathop {\max }\limits_{Z = A,B} \sum\limits_x {p_x
S\left( {\rho _x^Z } \right)} ,$$
where $\rho ^A$ and $\rho ^B$ are the reductions of $\rho ^{AB} = \sum\nolimits_x {p_x \rho _x^{AB} } $, and $\rho
_x^Z $ is a reduction of $\rho _x^{AB} $.
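For illustration (our own numerical check, not from the original paper; all logarithms base 2, using Python/NumPy), applying this bound to the four equiprobable Bell states gives $I^{LOCC}\le 1$ bit, even though the ensemble carries $\log_2 4 = 2$ bits, consistent with the LOCC-indistinguishability of the Bell basis cited above:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def reduce_A(rho):   # partial trace over B for a two-qubit state
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

def reduce_B(rho):   # partial trace over A
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# the four Bell states, each with probability p_x = 1/4
kets = [np.array(v) / np.sqrt(2) for v in
        ([1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0])]
rhos = [np.outer(v, v) for v in kets]
rho = sum(rhos) / 4

bound = (entropy(reduce_A(rho)) + entropy(reduce_B(rho))
         - max(sum(entropy(r(x)) for x in rhos) / 4
               for r in (reduce_A, reduce_B)))
print(round(bound, 6))   # 1.0
```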
In this paper, we will prove a multipartite generalization of this bound. First we consider an arbitrary tripartite ensemble $R =
\left\{ {p_x ,\rho _x^{ABC} } \right\}$ as an example. The central tool we will require is the following result [@Badziag:2003], which is a generalization of the Holevo bound on mutual information.
*Lemma 1*. If a measurement on ensemble $Q = \left\{ {p_x ,\rho
_x } \right\}$ produces result $y$ and leaves a postmeasurement ensemble $Q^y = \left\{ {p_{x\vert y} ,\rho _{x\vert y} } \right\}$ with probability $p_y $, then the mutual information $I$ (between the identity of the state in the ensemble and the measurement outcome) extracted from the measurement is bounded by
$$\label{eq2} I \le \chi _Q - \bar {\chi }_{Q^y} ,$$
where $\bar {\chi }_{Q^y} $ is the average Holevo bound for the possible postmeasurement ensembles, i.e., $\sum\nolimits_y {p_y \chi _{Q^y} } $.

Suppose that Alice, Bob and Charlie are far apart and the allowed measurement strategies are limited to LOCC-based measurements. Without loss of generality, let Alice make the first measurement, and suppose that she obtains an outcome $a$ with probability $p_a $, leaving the postmeasurement ensemble $R_a = \left\{ {p_{x\vert a} ,\rho _{x\vert a}^{ABC} } \right\}$. Lemma 1 bounds the mutual information obtained from Alice's measurement as follows: $I_1^A \le \chi _{R^A} - \bar {\chi }_{R_a^A } ,$ where $\chi _{R^A} $ is the Holevo quantity of the $A$ part of the ensemble $R$ and $\chi _{R_a^A } $ is the Holevo quantity of the $A$ part of the ensemble $R_a $. After Bob has learned that Alice's result was $a$, his ensemble is denoted by $R_a^B = \left\{
{p_{x\vert a} ,\rho _{x\vert a}^B } \right\}$, with $\rho _x^B =
tr_{AC} \left( {\rho _x^{ABC} } \right)$. Suppose Bob performs the second measurement and obtains outcome $b$ with probability $p_b $; then the postmeasurement ensemble is $R_{ab} = \left\{ {p_{x\vert
ab} ,\rho _{x\vert ab}^{ABC} } \right\}$. Using Lemma 1, the mutual information obtained from Bob’s measurement has the bound: $I_2^B
\le \bar {\chi }_{R_{^a}^B } - \bar {\chi }_{R_{ab}^B } ,$ where $\bar {\chi }_{R_{^a}^B } = \sum\nolimits_a {p_a \left[ {S\left(
{\sum\nolimits_x {p_{x\vert a} \rho _{x\vert a}^B } } \right) -
\sum\nolimits_x {p_{x\vert a} S\left( {\rho _{x\vert a}^B } \right)}
} \right]} $, and $\bar {\chi }_{R_{^{ab}}^B } = \sum\nolimits_{ab}
{p_{ab} \left[ {S\left( {\sum\nolimits_x {p_{x\vert ab} \rho
_{x\vert ab}^B } } \right) - \sum\nolimits_x {p_{x\vert ab} S\left(
{\rho _{x\vert ab}^B } \right)} } \right]} $. Similarly, the information extracted from Charlie’s measurement is bounded as follows: $I_3^C \le \bar {\chi }_{R_{^{ab}}^C } - \bar {\chi
}_{R_{abc}^C } ,$ where we have assumed that Charlie obtains an outcome $c$ with probability $p_c $. This procedure continues for an arbitrary number of steps; thus the total information gathered from all steps is $I^{LOCC} = I_1^A + I_2^B + I_3^C + \cdots $, where the subscript $n$ denotes that the information is extracted from the $n$th measurement. To proceed with our derivations, we need the following facts: **(i)** concavity of the von Neumann entropy; **(ii)** a measurement on one subsystem does not change the density matrix at a distant subsystem; **(iii)** a measurement on one subsystem cannot reveal more information about a distant subsystem than about the subsystem itself. For example, after the first measurement by Alice, we have $\sum\nolimits_x {p_x S\left( {\rho _x^A } \right)} -
\sum\nolimits_a {p_a \sum\nolimits_x {p_{x\vert a} S\left( {\rho
_{x\vert a}^A } \right)} } \ge \sum\nolimits_x {p_x S\left( {\rho
_x^B } \right)} - \sum\nolimits_a {p_a \sum\nolimits_x {p_{x\vert a}
S\left( {\rho _{x\vert a}^B } \right)} } $.
Suppose that the last measurement is performed by Alice; then, after $n$ steps of measurements, we obtain the following inequality
$$\label{eq3}
\begin{array}{l}
I^{LOCC} \le S\left( {\rho ^A} \right) + S\left( {\rho ^B} \right) +
S\left( {\rho ^C} \right) - \sum\nolimits_x {p_x S\left( {\rho
_x^C }
\right)} \\
- \sum\nolimits_{a,b,\ldots ,n} {p_{a,b,\ldots ,n} S\left( {\sum\nolimits_x
{p_{x\vert a,b,\ldots ,n} \rho _{x\vert a,b,\ldots ,n}^A } } \right)} \\
\end{array},$$
where $\left\{ {p_{x\vert a,b,\ldots ,n} ,\rho _{_{x\vert
a,b,\ldots ,n} }^{ABC} } \right\}$ is the postmeasurement ensemble obtained after the measurement in the $n$th step and $p_{a,b,\ldots
,n} $ is the probability of the sequence of measurements in steps $1,2,\ldots ,n$. If the last measurement is performed by Bob, we have
$$\label{eq4}
\begin{array}{l}
I^{LOCC} \le S\left( {\rho ^A} \right) + S\left( {\rho ^B} \right) +
S\left( {\rho ^C} \right) - \sum\nolimits_x {p_x S\left( {\rho
_x^A }
\right)} \\
- \sum\nolimits_{a,b,\ldots ,\left( {n + 1} \right)} {p_{a,b,\ldots ,\left(
{n + 1} \right)} S\left( {\sum\nolimits_x {p_{x\vert a,b,\ldots
,\left( {n + 1} \right)} \rho _{x\vert a,b,\ldots ,\left( {n + 1}
\right)}^B } } \right)}
\\
\end{array}.$$
When the last measurement is performed by Charlie, the inequality takes the form
$$\label{eq5}
\begin{array}{l}
I^{LOCC} \le S\left( {\rho ^A} \right) + S\left( {\rho ^B} \right) +
S\left( {\rho ^C} \right) - \sum\nolimits_x {p_x S\left( {\rho
_x^B }
\right)} \\
- \sum\nolimits_{a,b,\ldots ,\left( {n + 2} \right)} {p_{a,b,\ldots ,\left(
{n + 2} \right)} S\left( {\sum\nolimits_x {p_{x\vert a,b,\ldots
,\left( {n + 2} \right)} \rho _{x\vert a,b,\ldots ,\left( {n + 2}
\right)}^C } } \right)}
\\
\end{array}.$$
The last terms in Eqs.(\[eq3\])-(\[eq5\]) are all nonpositive, since von Neumann entropies are nonnegative. Neglecting these terms can only weaken the bounds, so we have
$$\label{eq6} I^{LOCC} \le S\left( {\rho ^A} \right) + S\left( {\rho
^B} \right) + S\left( {\rho ^C} \right) - \mathop {\max
}\limits_{Z = A,B,C} \sum\nolimits_x {p_x S\left( {\rho _x^Z }
\right)} .$$
For multipartite ensembles with more than three components, we can prove the following lemma in the same way as the above results.
*Lemma 2*. For an arbitrary multipartite ensemble $\left\{ {p_x
,\rho _{_x }^{B_1 B_2 \cdots B_N } } \right\}$, the maximal locally accessible mutual information satisfies the inequality:
$$\label{eq7}
\begin{array}{l}
I^{LOCC} \le S\left( {\rho ^{B_1 }} \right) + S\left( {\rho ^{B_2 }}
\right) + \cdots + S\left( {\rho ^{B_N }} \right) \\
- \mathop {\max }\limits_{Z = B_1 ,B_2 ,\ldots ,B_N } \sum\nolimits_x {p_x
S\left( {\rho _x^Z } \right)} \\
\end{array},$$
where $\rho ^{B_n }$ is the reduction of $\rho ^{B_1 ,B_2
,\ldots ,B_N } = \sum\nolimits_x {p_x \rho _x^{^{B_1 ,B_2 ,\ldots
,B_N }} } $ and $\rho _x^{^{Z}} $ is a reduction of $\rho _x^{^{B_1
,B_2 ,\ldots ,B_N }} $.
When the ensemble states $\rho _{_x }^{B_1 B_2,\ldots ,B_N } $ are all pure states, it is possible to write Eq.(\[eq7\]) in terms of the average multipartite q-squashed entanglement. Notice that for the $N$-partite pure state $\left| \Gamma \right\rangle _{A_1 ,\ldots
,A_N } $, we have [@Yang:2007]$E_{sq}^q \left( {\left| \Gamma
\right\rangle _{A_1 ,\ldots ,A_N } } \right) = S\left( {\rho _{A_1 }
} \right) + \cdots + S\left( {\rho _{A_N } } \right)$, where $\rho
_{A_k } = Tr_{A_1 ,\ldots ,A_{k - 1} ,A_{k + 1} ,\ldots ,A_N }
\left( {\left| \Gamma \right\rangle \left\langle \Gamma \right|}
\right)$, then Eq.(\[eq7\]) can be rewritten as $I^{LOCC} \le
S\left( {\rho ^{B_1 }} \right) + S\left( {\rho ^{B_2 }} \right) +
\cdots + S\left( {\rho ^{B_N }} \right) - \sum\nolimits_x {p_x
\frac{E_{sq}^q \left( {\left| \psi \right\rangle _{_x }^{B_1 B_2
,\ldots ,B_N } } \right)}{N}} $, where $\left| \psi \right\rangle
_{_x }^{B_1 B_2 ,\ldots ,B_N } \left\langle \psi \right| = \rho
_x^{^{B_1 ,B_2 ,\ldots ,B_N }} $. Moreover, noting a recent inequality presented in Ref. [@Yang:2007], for an $N$-partite state $\rho _{_x }^{B_1 B_2,\ldots ,B_N } $, we have $\frac{E_{sq}^q
\left( {\rho _x^{^{B_1 ,B_2 ,\ldots ,B_N }} } \right)}{N} \ge
K_D^{\left( N \right)} \left( {\rho _x^{^{B_1 ,B_2 ,\ldots ,B_N }} }
\right)$, where $K_D^{\left( N \right)} \left( {\rho _x^{^{B_1 ,B_2
,\ldots ,B_N }} } \right)$ denotes the distillable key of the state $\rho _x^{^{B_1 ,B_2 ,\ldots ,B_N }} $. Thus Eq.(\[eq7\]) can be further written as $I^{LOCC} \le S\left( {\rho ^{B_1 }} \right) +
S\left( {\rho ^{B_2 }} \right) + \cdots + S\left( {\rho ^{B_N }}
\right) - \sum\nolimits_x {p_x K_D^{\left( N \right)} \left( {\left|
\psi \right\rangle _{_x }^{B_1 B_2 ,\ldots , B_N } } \right)} $. On the other hand, $S\left( {\rho ^{B_1 }} \right) + S\left( {\rho
^{B_2 }} \right) + \cdots + S\left( {\rho ^{B_N }} \right) \le D$, where $D = \log _2 d_1 d_2 \cdots d_N $, this gives the following complementarity relation $I^{LOCC} + \sum\nolimits_x {p_x
K_D^{\left( N \right)} \left( {\left| \psi \right\rangle _{_x }^{B_1
B_2 \cdots B_N } } \right)} \le D$. This inequality shows that the locally accessible information is closely related to the distillable key of the state for pure ensemble states. We conjecture that this relation also holds for general mixed-state ensembles; however, we were unable to verify or disprove this statement.
![(Color online). Plot of $I^{LOCC}$ for the ensemble ${\cal E}_1$.[]{data-label="fig1"}](fig1.eps)
*Example 1*. Consider a tripartite ensemble ${\cal E}_1$ consisting (with equal probabilities) of the three states
$$\label{eq8} \left| \psi \right\rangle _{1,2} = a\left| {000}
\right\rangle \pm b\left| {111} \right\rangle , \quad \left| \psi
\right\rangle _3 = c\left| {001} \right\rangle + d\left| {110}
\right\rangle ,$$
where we have assumed that $a,b$ and $c,d$ are positive real numbers with $a^2 + b^2 = c^2 + d^2 = 1$. In Fig.1, we plot the upper bound of $I^{LOCC}$ for all values of $a$ and $c$ with $0\le a,c\le1$ according to Eq.(7).
*Example 2*. Let us evaluate the upper bound of the locally accessible information for the tripartite ensemble ${\cal E}_2$ consisting (with equal probabilities) of the six states
$$\begin{aligned}
\left| \psi \right\rangle _{1,2} = a\left| {000} \right\rangle \pm
b\left| {111} \right\rangle , \notag \\
\left| \psi \right\rangle _{3,4} = a\left| {001} \right\rangle \pm
b\left| {110} \right\rangle , \notag \\
\label{eq9} \left| \psi \right\rangle _{5,6} = a\left| {010}
\right\rangle \pm b\left| {101} \right\rangle .\end{aligned}$$
![(Color online). Plots of $I^{LOCC}$ (blue line) and $I$ (red line) for the ensemble ${\cal E}_2$.[]{data-label="fig2"}](fig2.eps)
Using Lemma 2, we have $I^{LOCC} \le - \frac{2}{3}\left( {1 + a^2}
\right)\log \frac{1}{3}\left( {1 + a^2} \right) - \frac{2}{3}\left(
{2 - a^2} \right)\log \frac{1}{3}\left( {2 - a^2} \right)$. On the other hand, the ensemble ${\cal E}_2$ contains the information $I
= S\left( {\rho ^{ABC}} \right) = - a^2\log \frac{1}{3}a^2 - \left(
{1 - a^2} \right)\log \frac{1}{3}\left( {1 - a^2} \right)$. For a vivid comparison, we plot $I^{LOCC}$ and $I$ in Fig.2. It is shown that $I^{LOCC} < I$ whenever $0.222<a<0.975$. Since the locally accessible information extracted is less than the information contained in the ensemble, it follows immediately that the tripartite ensemble ${\cal E}_2$ consisting of the six states is indistinguishable under LOCC if $0.222<a<0.975$.
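The comparison above is easy to reproduce numerically (our own check, taking all logarithms base 2):

```python
from math import log2

def locc_bound(a):
    """Lemma 2 upper bound on I^LOCC for the ensemble E_2."""
    return (-(2 / 3) * (1 + a ** 2) * log2((1 + a ** 2) / 3)
            - (2 / 3) * (2 - a ** 2) * log2((2 - a ** 2) / 3))

def ensemble_info(a):
    """I = S(rho^ABC) for the equiprobable six-state ensemble E_2."""
    return (-a ** 2 * log2(a ** 2 / 3)
            - (1 - a ** 2) * log2((1 - a ** 2) / 3))

# inside the window 0.222 < a < 0.975 the bound falls below I,
# so the ensemble cannot be distinguished perfectly by LOCC there
assert locc_bound(0.5) < ensemble_info(0.5)
assert locc_bound(0.1) > ensemble_info(0.1)   # outside the window
```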
*Example 3*. Consider the following 4-partite ensemble ${\cal
E}_3$ consisting (with equal probabilities) of the nine orthogonal states
$$\begin{aligned}
\left| \psi \right\rangle _1 = \frac{1}{2}\left( {\left| {0000} \right\rangle + \left| {0011} \right\rangle + \left| {1100} \right\rangle - \left| {1111} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _2 = \frac{1}{2}\left( {\left| {0000} \right\rangle - \left| {0011} \right\rangle + \left| {1100} \right\rangle + \left| {1111} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _3 = \frac{1}{2}\left( {\left| {0001} \right\rangle + \left| {0010} \right\rangle + \left| {1101} \right\rangle - \left| {1110} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _4 = \frac{1}{2}\left( {\left| {0001} \right\rangle - \left| {0010} \right\rangle + \left| {1101} \right\rangle + \left| {1110} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _5 = \frac{1}{2}\left( {\left| {0101} \right\rangle + \left| {0110} \right\rangle + \left| {1001} \right\rangle - \left| {1010} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _6 = \frac{1}{2}\left( {\left| {0101} \right\rangle - \left| {0110} \right\rangle + \left| {1001} \right\rangle + \left| {1010} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _7 = \frac{1}{2}\left( {\left| {0111} \right\rangle + \left| {0100} \right\rangle + \left| {1011} \right\rangle - \left| {1000} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _8 = \frac{1}{2}\left( {\left| {0111} \right\rangle - \left| {0100} \right\rangle + \left| {1011} \right\rangle + \left| {1000} \right\rangle } \right), \notag \\
\left| \psi \right\rangle _9 = \frac{1}{2}\left( {\left| {0000} \right\rangle + \left| {0011} \right\rangle - \left| {1100} \right\rangle + \left| {1111} \right\rangle } \right).\label{eq10}\end{aligned}$$
In this case, it is easy to show that $I^{LOCC} \le 3$, while the ensemble ${\cal E}_3$ contains the information $I = \log 9 >
I^{LOCC}$. Thus we conclude that the ensemble ${\cal E}_3$ is indistinguishable under LOCC.
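The claim $I^{LOCC} \le 3$ can be verified directly from Lemma 2 (our own NumPy check; every single-qubit reduction below turns out to be maximally mixed):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def qubit_reduction(rho, k):
    """Trace a 4-qubit density matrix down to qubit k (0-based)."""
    t = rho.reshape([2] * 8)
    for i in reversed([i for i in range(4) if i != k]):
        t = np.trace(t, axis1=i, axis2=i + t.ndim // 2)
    return t

terms = [  # (basis state, sign); every nonzero amplitude is +-1/2
    [("0000", 1), ("0011", 1), ("1100", 1), ("1111", -1)],
    [("0000", 1), ("0011", -1), ("1100", 1), ("1111", 1)],
    [("0001", 1), ("0010", 1), ("1101", 1), ("1110", -1)],
    [("0001", 1), ("0010", -1), ("1101", 1), ("1110", 1)],
    [("0101", 1), ("0110", 1), ("1001", 1), ("1010", -1)],
    [("0101", 1), ("0110", -1), ("1001", 1), ("1010", 1)],
    [("0111", 1), ("0100", 1), ("1011", 1), ("1000", -1)],
    [("0111", 1), ("0100", -1), ("1011", 1), ("1000", 1)],
    [("0000", 1), ("0011", 1), ("1100", -1), ("1111", 1)],
]
rhos = []
for state in terms:
    v = np.zeros(16)
    for bits, sign in state:
        v[int(bits, 2)] = sign / 2
    rhos.append(np.outer(v, v))

rho = sum(rhos) / 9
bound = (sum(entropy(qubit_reduction(rho, k)) for k in range(4))
         - max(sum(entropy(qubit_reduction(x, k)) for x in rhos) / 9
               for k in range(4)))
print(round(bound, 6))   # 3.0, strictly below I = log2(9) ~ 3.17
```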
As another application of Lemma 2, we can derive an upper bound for the capacity of a scheme of quantum dense coding for multipartite states. Suppose now there are $N$ Alices, say $A_1 ,A_2 ,\ldots ,A_N
$, who want to send information to $M$ receivers, Bobs, $B_1 ,B_2
,\ldots ,B_M $. They share the quantum state $\rho ^{A_1 ,A_2
,\ldots ,A_N B_1 ,B_2 ,\ldots ,B_M }$. Using the same techniques as in Ref. [@Bruss:2004], we can show that the capacity of distributed dense coding is bounded by the following quantity:
$$\label{eq11}
\begin{array}{l}
C\left( \rho \right) \le \log _2 d_{A_1 } + \cdots + \log _2 d_{A_N } +
S\left( {\rho ^{B_1 }} \right) + S\left( {\rho ^{B_2 }} \right) \\
+ \cdots + S\left( {\rho ^{B_M }} \right) - \mathop {\max }\limits_{Z = B_1
,B_2 ,\ldots ,B_M } \sum\nolimits_x {p_x S\left( {\rho _x^Z } \right)} . \\
\end{array}$$
Eq.(11) can be regarded as a generalization of the result of Ref.[@Bruss:2004] to the case with multipartite senders and multipartite receivers.
In summary, we have proposed a universal Holevo-like upper bound on the locally accessible information for arbitrary multipartite ensembles. This bound not only allows us to prove the indistinguishability of some multipartite ensembles but also enables us to obtain an upper bound for the capacity of distributed dense coding with multipartite senders and multipartite receivers.
This work is supported by the NNSF of China, the CAS, and the National Fundamental Research Program (under Grant No. 2006CB921900).
[99]{}
S. Ghosh, G. Kar, A. Roy, A. Sen(De), and U. Sen, Phys. Rev. Lett. **87**, 277902 (2001).
J.Walgate, A. J. Short, L. Hardy, and V.Vedral, Phys. Rev. Lett. **85**, 4972 (2000).
P. X. Chen and C. Z. Li, Phys. Rev. A **68**, 062107 (2003).
H. Fan, Phys. Rev. Lett. **92**, 177905 (2004).
J. Watrous, Phys. Rev. Lett. **95**, 080505 (2005).
M.-Y. Ye, W. Jiang, P.-X. Chen, Y.-S. Zhang, Z.-W. Zhou, and G.-C. Guo, Phys. Rev. A **76**, 032329 (2007).
C. H. Bennett, D. P. DiVincenzo, C. A. Fuchs, T. Mor, E. Rains, P. W. Shor, J. A. Smolin, and W. K. Wootters, Phys. Rev. A **59**, 1070 (1999).
M. Horodecki, A. Sen(De), U. Sen, and K. Horodecki, Phys.Rev. Lett. **90**, 047902 (2003).
P. Badziag, M. Horodecki, A. Sen(De), and U. Sen, Phys. Rev. Lett. **91**, 117901 (2003).
D. Yang, K. Horodecki, M. Horodecki, P. Horodecki, J. Oppenheim, and W. Song, e-print arXiv:quant-ph/0704.22369 (final version submitted to IEEE Trans. Inf. Theory).
D. Bruss, G. M. D'Ariano, M. Lewenstein, C. Macchiavello, A. Sen(De), and U. Sen, Phys. Rev. Lett. **93**, 210501 (2004).
Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration
===============================================================================================
Sudip Poddar, Robert Wille, Hafizur Rahaman, and Bhargab B. Bhattacharya. S. Poddar is with the Advanced Computing and Microelectronics Unit, Indian Statistical Institute, Kolkata, India 700108. Email: sudippoddar2006@gmail.com. Bhargab B. Bhattacharya is with the Department of Computer Science & Engineering, Indian Institute of Technology Kharagpur, India 721 302; this work was done while he was with the Indian Statistical Institute, Kolkata, India 700108. Email: bhargab.bhatta@gmail.com. H. Rahaman is with the School of VLSI Technology, Indian Institute of Engineering Science and Technology, Shibpur, India 711103. E-mail: hafizur@vlsi.iiests.ac.in. R. Wille is with the Institute for Integrated Circuits, Johannes Kepler University Linz, Austria. E-mail: robert.wille@jku.at.
###### Abstract
Sample preparation is an indispensable component of almost all biochemical protocols, and it involves, among other tasks, making dilutions and mixtures of fluids in certain ratios. Recent microfluidic technologies offer suitable platforms for automating dilutions on-chip, and typically on a digital microfluidic biochip (DMFB), a sequence of (1:1) mix-split operations is performed on fluid droplets to achieve the target concentration factor (*CF*) of a sample. A (1:1) mixing model ideally comprises the mixing of two unit-volume droplets followed by a (balanced) split into two unit-volume daughter-droplets. However, a major source of error in fluidic operations is unbalanced splitting, where two unequal-volume droplets are produced following a split. Such volumetric split-errors occurring in different mix-split steps of the reaction path often cause a significant drift in the target-CF of the sample, the precision of which cannot be compromised in life-critical assays. In order to circumvent this problem, several error-recovery or error-tolerant techniques have been proposed recently for DMFBs. Unfortunately, the impact of such fluidic errors on a target-CF and the dynamics of their behavior have not yet been rigorously analyzed. In this work, we investigate the effect of multiple volumetric split-errors on various target-CFs during sample preparation. We also perform a detailed analysis of the worst-case scenario, i.e., the condition when the error in a target-CF is maximized. This analysis may lead to the development of new techniques for error-tolerant sample preparation with DMFBs without using any sensing operation.
###### Keywords
Algorithmic microfluidics, embedded systems, fault-tolerance, healthcare devices, lab-on-chip.
1 Introduction
--------------
A digital microfluidic biochip (DMFB) is capable of executing multiple tasks of biochemical laboratory protocols in an efficient manner. DMFBs support droplet-based operations on a single chip with high sensitivity and reconfigurability. Discrete-volume (nanoliter/picoliter) droplets are manipulated on DMFBs through electrical actuation on an electrode array \[[1](#bib.bib1)\]. Various fluid-handling operations such as dispensing, transport, mixing, splitting, and dilution can be performed on these tiny chips with high speed and reliability. Owing to their versatile properties, these programmable chips are used in many applications such as in-vitro diagnostics (point-of-care, self-testing), drug discovery (high-throughput screening), biotechnology (process monitoring, process development), ecology (agriculture, environment, homeland security), and sample preparation \[[2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4), [5](#bib.bib5), [6](#bib.bib6)\].
Sample preparation has a significant impact on accuracy, assay-completion time, and cost, and plays a pivotal role in biomedical engineering and the life sciences \[[7](#bib.bib7)\]. It involves dilution or mixture preparation and comprises a sequence of mixing steps necessary to produce a mixture of input reagents in a desired ratio of the constituents. Note that sample collection, transportation, and preparation consume up to 90% of the cost and 95% of the time \[[8](#bib.bib8)\]. In the last few years, a large number of sample-preparation algorithms have been developed for reducing assay-completion time and cost \[[6](#bib.bib6), [9](#bib.bib9), [10](#bib.bib10), [11](#bib.bib11), [12](#bib.bib12), [13](#bib.bib13), [14](#bib.bib14), [15](#bib.bib15)\], based on the (1:1) mixing model and its variants. In the conventional (1:1) mixing model, two unit-volume droplets are mixed together and then split into two equal-sized daughter-droplets. These algorithms output a particular sequence of mix-split operations (represented as a sequencing graph) that dilutes the target-droplet to a desired target-concentration. For the convenience of dilution algorithms, a concentration factor (CF) is approximated by a binary fraction that is reachable and satisfies a user-defined error-tolerance limit. A detailed description of sample preparation can be found elsewhere \[[16](#bib.bib16)\].
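The bit-scan idea behind such (1:1) dilution algorithms can be illustrated with a short sketch. The function name and the buffer-first scheduling below are our reconstruction, not taken from any cited tool: scanning the numerator's bits LSB-first, each (1:1) mix halves the carried CF and adds half of the bit just consumed, so after n steps the CF equals num / 2^n exactly.

```python
from fractions import Fraction

def mix_split_path(num, n):
    """Error-free CF after each of n (1:1) mix-split steps targeting num / 2**n.
    Step i mixes the carried droplet with a fresh sample droplet (bit = 1)
    or buffer droplet (bit = 0), where the bits are scanned LSB-first."""
    bits = [(num >> i) & 1 for i in range(n)]  # reagent schedule, LSB first
    cf, path = Fraction(0), []                 # start from a buffer droplet
    for b in bits:                             # each (1:1) mix: cf -> (cf + b) / 2
        cf = (cf + b) / 2
        path.append(cf)
    return path

path = mix_split_path(87, 7)   # the paper's running example
print(path[-1])                # Fraction(87, 128)
```

After the 7 mix-split operations the chain passes through 1/2, 3/4, 7/8, 7/16, 23/32, 23/64, and finally 87/128, which is exactly the accuracy-level-7 approximation used throughout this paper.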
Although droplet-based microfluidic biochips enable the integration of fluid-handling operations and outcome sensing on a single biochip, errors are likely to occur during fluidic operations due to various permanent faults (e.g., dielectric breakdown or charge trapping) or transient faults (e.g., unbalanced split due to imperfect actuation). For example, the two daughter-droplets may be of different volumes after the split phase of a mix-split step executed on a DMFB platform. The unequal-volume droplets produced by an erroneous mix-split step, when used later, negatively impact the correctness of the desired target-CF. Unbalanced-split errors therefore pose a significant threat to sample preparation, and it is essential to introduce some error-management scheme to handle such faults.
Figure 1: Initial sequencing graph.
In this paper, we focus especially on volumetric split-errors and investigate their effects on the target-CF during sample preparation. Split-errors may unexpectedly occur in any mix-split step of the mixing path, thus affecting the concentration factor (CF) of the target-droplet \[[8](#bib.bib8)\]. Moreover, owing to the unpredictable behavior of fluidic droplets, either the larger or the smaller daughter-droplet may be used in the mixing path following an erroneous split operation, depending on which erroneous droplet is selected for the subsequent step. Although a number of cyber-physical approaches have been proposed for error-recovery \[[17](#bib.bib17), [18](#bib.bib18), [19](#bib.bib19), [20](#bib.bib20), [7](#bib.bib7)\], they do not provide any guarantee on the number of rollback iterations needed to rectify the error. Thus, most prior approaches to error-recovery in biochips are non-deterministic in nature. On the other hand, the approach proposed in \[[8](#bib.bib8)\] performs error-correction in a deterministic sense; however, it assumes only single split-errors while classifying them as critical or non-critical. A split-error occurring at a particular step is called critical (non-critical) if a single split-error inserted at that step causes the target-CF to exceed (remain within) the allowable error-tolerance range. This approach does not consider the possibility of multiple split-errors during classification. Furthermore, in a cyber-physical setting, it requires additional time for sensing the occurrence of a critical error, if any, at every such step. Hence, when the number of critical errors becomes large, the sensing time may outweigh the gain obtained by roll-forwarding, and the overall execution time may become longer.
In this paper, we present a thorough analysis of the impact of multiple split-errors on a given target-CF. Based on these observations, methods for sample preparation that can deal with split errors even without any sensors and/or rollback can be derived. In fact, the findings discussed in this paper yield a method (described in \[[16](#bib.bib16)\]) that produces a target-CF within the allowable error-tolerance limit without using any sensor.
The remainder of the paper is organized as follows. Section [2](#S2 "2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") introduces the basic principle of earlier error-recovery approaches. We describe the effect of one or more volumetric split-errors on the target-CF, in Section [3](#S3 "3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). Section [4](#S4 "4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") presents the worst-case scenario, i.e., when CF-error in the target-droplet becomes maximum. A justification behind the maximum CF-error is then reported in Section [5](#S5 "5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). Finally, we draw our conclusions in Section [6](#S6 "6 Conclusion ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration").
2 Error-recovery approaches: prior art
--------------------------------------
Earlier approaches performed error-recovery by repeating the concerned operations of the bioassay \[[21](#bib.bib21)\] to produce the target concentration factor within the allowable error-range. For example, all mix-split and dispensing operations of the initial sequencing graph (shown in Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")) are re-executed when an error is detected at the end (after execution of the bioassay). However, such repetition wastes precious reagents and hard-to-obtain samples, and results in a longer assay completion-time.
### 2.1 Cyber-physical technique for error-recovery
In order to avoid such repetitive execution of on-chip biochemical experiments, cyber-physical DMFBs were recently proposed for obtaining the desired outcome \[[18](#bib.bib18)\]. A diagram of a cyber-physical biochip is shown in Fig. [2](#S2.F2 "Figure 2 ‣ 2.1 Cyber-physical technique for error-recovery ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") for illustration.
Figure 2: Schematic of a cyber-physical error-recovery system.
It consists of the following components: a computer, a single-board microcontroller or an FPGA, a peripheral circuit, and the concerned biochip. Two interfaces are required for establishing the connection between control software and hardware of the microfluidic system. The first interface is required for converting the output signal of the sensor to an input signal that feeds the control software installed on the computer. The second interface transforms the output of the control software into a sequence of voltage-actuation maps that activate the electrodes of the biochip. The error-recovery operation is executed by the control software running in the back-end.
Figure 3: Generation of a target-droplet by cyber-physical error-recovery approaches.
### 2.2 Compilation for error-recovery
Note that cyber-physical DMFBs need to constantly monitor the outputs of intermediate mix-split operations at designated checkpoints using on-chip sensors (integrated with the biochip). The original actuation sequences are interrupted when an error is detected during the execution of a bioassay. At the same time, recovery actions, e.g., the re-execution of the corresponding dispensing and mixing operations, are initiated to remedy the error. However, these error-recovery actions have to be translated into electrode-actuation sequences in real time.
The compilation of error-recovery actions can either be performed before actual execution of the bio-assay or during the execution of the bio-assay. So, depending on the compilation-time of operations, error-recovery approaches can be divided into two categories: i) offline (at design time), and ii) online (at run time).
In the offline approach, all possible errors of interest that might occur (under the assumed model) during the execution of a bioassay are identified, and compilation is performed to pre-compute the corresponding error-recovery actuation sequences, which provide alternative schedules stored in memory. When an error is detected during actual execution of the bioassay, the cyber-physical biochip executes the error-recovery actions by loading the corresponding schedule from memory. However, this approach can rectify only a limited number of errors (≤ 2), since a very large controller memory would be required to store the recovery sequences for all possible consequences of errors \[[19](#bib.bib19)\].
On the other hand, in the online approach, appropriate actions are carried out depending on the feedback given by the sensor. Compilation of error-recovery actions into electrode-actuation sequences is performed only at run-time.
### 2.3 Working principle of cyber-physical based DMFBs
Irrespective of this difference, cyber-physical DMFBs perform error-recovery as follows. During actual execution of the bioassay, the biochip receives control signals from the software running on the computer system. At the same time, the sensing system of the biochip sends a feedback signal to the software after processing it with a field-programmable gate array (FPGA) or an ASIC. If an error is detected by a sensor, the control software immediately discards the erroneous droplet to prevent error-propagation, and performs the necessary error-recovery operations (i.e., the corresponding actuation sequences are determined online/offline) to generate the correct output.
In order to produce the correct output, the outcomes of the intermediate mix-split operations are verified using on-chip sensors suitably placed at designated checkpoints. For example, in Fig. [3](#S2.F3 "Figure 3 ‣ 2.1 Cyber-physical technique for error-recovery ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"), the outcomes of Mix-split 4 and Mix-split 7 are checked by the sensor. When an error is detected, a portion of the bioassay is re-executed. For instance, the operations shown within the blue box in Fig. [3](#S2.F3 "Figure 3 ‣ 2.1 Cyber-physical technique for error-recovery ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") are re-executed when an error is detected at the last checkpoint. Note that the accuracy of a cyber-physical system also depends on the sensitivity of its sensors. Unfortunately, due to cost constraints, only a limited number of sensors can be integrated into a DMFB \[[18](#bib.bib18)\]. Additionally, in order to check the status of intermediate droplets, they need to be routed to a designated sensor location on the chip. This may introduce a significant latency into the overall assay-completion time (Fig. [4](#S2.F4 "Figure 4 ‣ 2.3 Working principle of cyber-physical based DMFBs ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")). As a result, prior cyber-physical error-recovery methods for sample preparation become expensive in terms of assay-completion time and reagent cost.
Figure 4: Routing of droplets for sensing operation in a cyber-physical biochip.
Figure 5: Sequence of mix-split operations for the target-CF = $\\frac{87}{128}$.
Figure 6: Effect of choosing the larger-/smaller-volume erroneous droplet on the target-CF = $\\frac{87}{128}$.
To summarize, cyber-physical error-recovery methods suffer from the following shortcomings:

- They are expensive in terms of assay-completion time and reagent-cost. Hence, they are unsuitable for field deployment and point-of-care testing in resource-constrained areas.
- Prior cyber-physical solutions fail to provide any guarantee on the number of rollback attempts, i.e., how many iterations will be required to correct the error. Hence, error-recovery becomes non-deterministic.
- Each component used in the design of the cyber-physical coupling may become a possible source of failure, which ultimately reduces the reliability of the biochip.
We now present a detailed analysis of multiple volumetric split-errors and their effects on a target-CF.
3 Effect of split-errors on the target concentration
----------------------------------------------------
Generally, in the (1:1) mixing model (where two 1X-volume droplets are used in each mixing operation), two 1X-volume daughter-droplets are produced after each mix-split operation. One of them is used in the subsequent mix-split operation, and the other is discarded as waste or stored for later use \[[6](#bib.bib6)\] (see Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")). An erroneous mix-split operation may produce two unequal-volume droplets. Unless an elaborate sensing mechanism is used, it is not possible to predict which of the resulting droplets (smaller or larger) will be used in the subsequent mix-split operation. Moreover, the effect on the target-CF becomes more complex when multiple volumetric split-errors occur along the mix-split path.
### 3.1 Single volumetric split-error
In order to analyze the effect of a single volumetric split-error on the target-CF, we performed experiments with different erroneous droplets; the results are presented in this section. We assume an example target-CF = $\\frac{87}{128}$ with accuracy level = 7. The mix-split sequence produced by the twoWayMix algorithm \[[6](#bib.bib6)\] for this target-CF is shown in Fig. [5](#S2.F5 "Figure 5 ‣ 2.3 Working principle of cyber-physical based DMFBs ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration").
Let us consider the scenario of injecting a volumetric split-error at Mix-Split Step 4. When a split-error occurs, two unequal-volume daughter-droplets are produced after this step. As stated earlier, it may not be possible to predict which droplet (smaller or larger) will be used in the mixing operation of the next step, and the effect on the target-CF depends on this choice. For example, the effect of a 3% volumetric split-error (at Step 4) on the target-CF = $\\frac{87}{128}$ is shown in Fig. [6](#S2.F6 "Figure 6 ‣ 2.3 Working principle of cyber-physical based DMFBs ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"), for both choices of the droplet used after Mix-Split Step 4. The blue (green) box represents the scenario in which the next operation is executed with the larger (smaller) erroneous droplet. It can be seen from Fig. [6](#S2.F6 "Figure 6 ‣ 2.3 Working principle of cyber-physical based DMFBs ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") that the CF-error in the target is larger when the smaller erroneous droplet is used in the mixing path than when the larger one is used.
Table 1: Impact on target-CF = $\\frac{87}{128}$ for different volumetric split-errors.

Volumetric split-error = 3%:

| Erroneous mix-split step | Selected droplet | Target-CF×128\* | Within error-tolerance limit? (CF-error×128 < 0.5?) |
|--------------------------|------------------|-----------------|-----------------------------------------------------|
| 1 | Larger | 86.98 | Yes |
| 1 | Smaller | 87.01 | Yes |
| 2 | Larger | 87.01 | Yes |
| 2 | Smaller | 86.99 | Yes |
| 3 | Larger | 87.04 | Yes |
| 3 | Smaller | 86.95 | Yes |
| 4 | Larger | 86.88 | Yes |
| 4 | Smaller | 87.12 | Yes |
| 5 | Larger | 87.04 | Yes |
| 5 | Smaller | 86.96 | Yes |
| 6 | Larger | 86.39 | No |
| 6 | Smaller | 87.62 | No |

Volumetric split-error = 5%:

| Erroneous mix-split step | Selected droplet | Target-CF×128\* | Within error-tolerance limit? (CF-error×128 < 0.5?) |
|--------------------------|------------------|-----------------|-----------------------------------------------------|
| 1 | Larger | 86.98 | Yes |
| 1 | Smaller | 87.02 | Yes |
| 2 | Larger | 87.01 | Yes |
| 2 | Smaller | 86.98 | Yes |
| 3 | Larger | 87.08 | Yes |
| 3 | Smaller | 86.92 | Yes |
| 4 | Larger | 86.81 | Yes |
| 4 | Smaller | 87.19 | Yes |
| 5 | Larger | 87.06 | Yes |
| 5 | Smaller | 86.94 | Yes |
| 6 | Larger | 86.00 | No |
| 6 | Smaller | 88.05 | No |

Volumetric split-error = 7%:

| Erroneous mix-split step | Selected droplet | Target-CF×128\* | Within error-tolerance limit? (CF-error×128 < 0.5?) |
|--------------------------|------------------|-----------------|-----------------------------------------------------|
| 1 | Larger | 86.97 | Yes |
| 1 | Smaller | 87.03 | Yes |
| 2 | Larger | 87.02 | Yes |
| 2 | Smaller | 86.98 | Yes |
| 3 | Larger | 87.11 | Yes |
| 3 | Smaller | 86.89 | Yes |
| 4 | Larger | 86.73 | Yes |
| 4 | Smaller | 87.27 | Yes |
| 5 | Larger | 87.09 | Yes |
| 5 | Smaller | 86.91 | Yes |
| 6 | Larger | 85.61 | No |
| 6 | Smaller | 88.49 | No |

\*: Results are shown up to two decimal places.
Figure 7: Mix-split operations for generating the target-CF = $C\_{t}$ with accuracy level $n$ = 7.
Similarly, we performed further experiments to determine the effect of erroneous droplets on the target-CF. Table [1](#S3.T1 "Table 1 ‣ 3.1 Single volumetric split-error ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") reports the results for volumetric split-errors of 3%, 5%, and 7% occurring along the mixing path. We observe that the CF-error in the target-droplet exceeds the error-tolerance limit in all cases in which the volumetric split-error occurs in the last-but-one step. Moreover, the CF-error in the target grows as the magnitude of the volumetric split-error increases.
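The entries of Table 1 can be reproduced with a small volume-tracking simulation of the model described above. This is a sketch; the function name and the buffer-first reagent scheduling are our reconstruction. The carried droplet's sample content and volume are updated at each (1:1) mix, and the daughter kept after an erroneous split has volume (1 + e) instead of 1 unit.

```python
def simulate_dilution(num, n, split_err=None):
    """CF after n (1:1) mix-split steps targeting num / 2**n.
    split_err maps a step number to a signed volumetric error e: the daughter
    droplet kept after that step has volume (1 + e) instead of 1 unit."""
    split_err = split_err or {}
    bits = [(num >> i) & 1 for i in range(n)]   # reagent per step, LSB first
    s, v = 0.0, 1.0                             # sample content / volume; buffer start
    for step, b in enumerate(bits, start=1):
        s, v = s + b, v + 1.0                   # mix with a fresh unit droplet
        keep = 0.5 * (1.0 + split_err.get(step, 0.0))
        s, v = s * keep, v * keep               # unbalanced split: keep one daughter
    return s / v

# +7% error after Mix-Split Step 4, larger daughter used next (cf. Table 1):
print(round(simulate_dilution(87, 7, {4: +0.07}) * 128, 2))   # 86.73
# the smaller daughter instead:
print(round(simulate_dilution(87, 7, {4: -0.07}) * 128, 2))   # 87.27
```

With no error the function returns exactly 87/128; skewing any single split reproduces the corresponding row of Table 1.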
### 3.2 Multiple volumetric split-errors
So far, we have analyzed the effect of a single volumetric split-error on the target-CF (with different erroneous-volume droplets) and observed that the CF-error in the target-droplet increases with the magnitude of the split-error. However, owing to the unpredictable characteristics of fluid droplets, split-errors may occur at multiple mix-split steps of the mixing path and may change the CF of the desired target-droplet significantly. Moreover, volumetric split-errors may occur in any combination of signs (use of the larger or smaller droplet following a split step) along the mixing path. We now derive expressions that capture the overall effect of such errors on the target-CF.
Let $\\epsilon\_{i}$ denote the volumetric split-error occurring at the $i$-th mix-split step. A fundamental question in this context is the following: *“How is the CF of a target-droplet affected by multiple volumetric split-errors $\\{\\epsilon\_{1}, \\epsilon\_{2}, \\ldots, \\epsilon\_{i-1}\\}$ occurring at different mix-split steps of the mixing path during sample preparation?”*
In order to answer this question, let us consider the dilution problem of generating a target-CF = $C\_{t}$ using twoWayMix \[[6](#bib.bib6)\], as shown in Fig. [7](#S3.F7 "Figure 7 ‣ 3.1 Single volumetric split-error ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). Here, $O\_{i}$ represents the $i$-th (1:1) mix-split step, $C\_{i}$ is the resulting CF after the $i$-th mix-split step, and $r\_{i}$ is the CF of the source droplet (100% for sample, 0% for buffer) used in the $i$-th mix-split operation. Without loss of generality, assume that a volumetric split-error $\\epsilon\_{i}$ occurs after the $i$-th mix-split step of the mixing path, i.e., a two-unit-volume droplet produces, after splitting, two daughter-droplets of volume $1+\\epsilon$ and $1-\\epsilon$, $\\epsilon > 0$. Initially, sample and buffer are mixed in the first mix-split step ($O\_{1}$). After this mixing operation, the CF and volume of the resulting droplet become $C\_{1} = \\frac{{P\_{0} \\times {({1 \\pm \\epsilon\_{0}})}} + {2^{- 1} \\times r\_{0}}}{{Q\_{0} \\times {({1 \\pm \\epsilon\_{0}})}} + 2^{- 1}}$ and $V\_{1} = \\frac{{Q\_{0} \\times {({1 \\pm \\epsilon\_{0}})}} + 2^{- 1}}{2^{0}}$, respectively, where $P\_{0} = Q\_{0} = \\frac{1}{2}$ and $\\epsilon\_{0} = r\_{0} = 0$. Note that $r\_{i} = 1$ ($0$) indicates that sample (buffer) is used in the $i$-th mix-split step of the mixing path. Furthermore, the sign $+$ ($-$) in the expression indicates that the larger (smaller) daughter-droplet is used in the next mix-split step following a split operation.
A volumetric split-error may occur in one or more mix-split operations of the mixing path while preparing a target-CF. For example, volumetric split-errors {*ϵ*₁, *ϵ*₂, …, *ϵ*₆} may occur, one after another, in the mix-split operations {*O*₁, *O*₂, …, *O*₆} as shown in Fig. [7](#S3.F7 "Figure 7 ‣ 3.1 Single volumetric split-error ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). In Table [2](#S3.T2 "Table 2 ‣ 3.2 Multiple volumetric split-errors ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"), we report the volume and concentration of the resulting daughter-droplets after each mix-split operation when all of the preceding steps suffer from split-errors.
Table 2: Impact of split-errors on the resulting daughter-droplets.
[TABLE]
$\\overline{CF}$: concentration of the resulting daughter-droplets after the next mix-split step; $\\overline{V}$: volume of the resulting daughter-droplets after the next mix-split step.
Figure 8: Effect of multiple volumetric split-errors on the target-CF = $\\frac{87}{128}$.
Hence, when multiple volumetric split-errors, say $\\{\\epsilon\_{1}, \\epsilon\_{2}, \\epsilon\_{3}, \\ldots, \\epsilon\_{i-2}, \\epsilon\_{i-1}\\}$, occur at the mix-split steps $\\{O\_{1}, O\_{2}, O\_{3}, \\ldots, O\_{i-2}, O\_{i-1}\\}$, the CF and volume of the generated target-droplet after the final mix-split operation can be computed using the following expressions:
$$C\_{i} = \\frac{{P\_{i - 1} \\times {({1 \\pm \\epsilon\_{i - 1}})}} + {2^{i - 2} \\times r\_{i - 1}}}{{Q\_{i - 1} \\times {({1 \\pm \\epsilon\_{i - 1}})}} + 2^{i - 2}}$$
(1)
$$V\_{i} = \\frac{{Q\_{i - 1} \\times {({1 \\pm \\epsilon\_{i - 1}})}} + 2^{i - 2}}{2^{i - 1}}$$
(2)
where $P\_{i} = P\_{i-1} \\times (1 \\pm \\epsilon\_{i-1}) + 2^{i-2} \\times r\_{i-1}$ and $Q\_{i} = Q\_{i-1} \\times (1 \\pm \\epsilon\_{i-1}) + 2^{i-2}$. In this way, the impact on the target-CF of multiple volumetric split-errors occurring at different mix-split steps of the mixing path can be precomputed.
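Equations (1) and (2) can be iterated directly. The sketch below (the function name is ours) folds the P/Q recurrence; `eps[k]` is the signed split-error after step k (with `eps[0] = 0` by the convention above), and `r` is our reconstruction of the reagent sequence that reaches 87/128 in the error-free case.

```python
def cf_recurrence(r, eps):
    """CF after len(r) mix-split steps via the P/Q recurrence of Eqs. (1)-(2).
    r[i-1] is the reagent added at step i (1 = sample, 0 = buffer);
    eps[k] is the signed split-error after step k (eps[0] = 0)."""
    P = Q = 0.5
    for i in range(1, len(r) + 1):
        w = 2.0 ** (i - 2)                  # the 2^{i-2} weight (equals 1/2 at i = 1)
        P = P * (1 + eps[i - 1]) + w * r[i - 1]
        Q = Q * (1 + eps[i - 1]) + w
    return P / Q                            # C_n; the volume V_n would be Q / 2**(n-1)

r = [0, 1, 1, 0, 1, 0, 1]       # reagent sequence reaching 87/128 (our reconstruction)
eps = [0, 0, 0, 0, 0.07, 0, 0]  # +7% split-error after mix-split step 4
print(round(cf_recurrence(r, eps) * 128, 2))   # 86.73, matching Table 1
```

With all errors zero the recurrence returns 87/128 exactly, and a single skewed entry reproduces the corresponding row of Table 1, which serves as a cross-check of the closed form.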
In order to find the effect of multiple volumetric split-errors on the target-CF, we performed several experiments. We continue with the example target-CF = $\\frac{87}{128}$ of accuracy level 7, and inject 7% volumetric split-errors simultaneously at different mix-split steps of the mixing path; the effects are shown in Fig. [8](#S3.F8 "Figure 8 ‣ 3.2 Multiple volumetric split-errors ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). During simulation, we assume that the larger erroneous droplet is always used after a split-error occurs (i.e., $\\epsilon$ is positive). For example, the effect of 7% volumetric split-errors at Mix-Split Steps 1 and 3 is shown in Fig. [8](#S3.F8 "Figure 8 ‣ 3.2 Multiple volumetric split-errors ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")(b), and the effect of three concurrent split-errors is shown in Fig. [8](#S3.F8 "Figure 8 ‣ 3.2 Multiple volumetric split-errors ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")(c). We observe that the CF-error in the target-droplet grows to $\\frac{0.08}{128}$ and $\\frac{0.17}{128}$ when two and three such split-errors, respectively, are injected into the mix-split path.
Figure 9: Effect of multiple volumetric split-errors on the target-CF = $\\frac{87}{128}$.
4 Worst-case error in the target-CF
-----------------------------------
So far, we have analyzed the effect of multiple volumetric split-errors on a target-CF when the larger erroneous droplet is selected after each mix-split step. However, in a sensor-free environment, one cannot select the larger erroneous droplet at will for the subsequent operations. In reality, multiple volumetric split-errors may involve an arbitrary combination of larger and smaller daughter-droplets. Hence, further analysis is required to reveal the role of such random occurrences of volumetric split-errors and their effects on the target-CF.
In order to facilitate the analysis, we define an “error-vector” as follows: an error-vector of length $k$ denotes the sequence of larger or smaller erroneous droplets chosen at the $k$ mix-split steps of the mixing path. For example, the error-vector \[+,*ϕ*,−,*ϕ*,*ϕ*,+\] denotes volumetric split-errors at Mix-Split Steps 1, 3, and 6, where *ϕ* denotes no error: in Step 1, the larger droplet is passed to the next step, whereas in Step 3, the smaller one is used, and so on. Since each position of a length-$k$ error-vector can independently be $+$, $-$, or *ϕ*, $3^{k}$ error-vectors are possible; if the erroneous positions are fixed, say $k'$ of them, $2^{k'}$ sign assignments remain. While executing actual mix-split operations, the target-CF can be affected by any one of them.
We performed simulated experiments to find the effect of different error-vectors on the target-CF = $\\frac{87}{128}$. Initially, we observe the effect on the target-CF of three 7% split-errors at the mix-split operations {Mix-Split 1, Mix-Split 3, Mix-Split 6}. See Fig. [9](#S3.F9 "Figure 9 ‣ 3.2 Multiple volumetric split-errors ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") for an example.
We observe that the CF-error in the target-droplet increases noticeably for the error-vectors \[+,*ϕ*,+,*ϕ*,*ϕ*,+\], \[+,*ϕ*,−,*ϕ*,*ϕ*,+\] and \[−,*ϕ*,−,*ϕ*,*ϕ*,+\], as depicted in Fig. [9](#S3.F9 "Figure 9 ‣ 3.2 Multiple volumetric split-errors ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")(a)-(c). It can be seen from Fig. [9](#S3.F9 "Figure 9 ‣ 3.2 Multiple volumetric split-errors ‣ 3 Effect of split-errors on the target concentration ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") that the CF-error exceeds the error-tolerance limit ($\\frac{0.5}{128}$) in each case; thus, the target-CF is badly affected for these error-vectors. We also performed similar experiments with a 3% volumetric split-error and found that the CF-error decreases in all cases.
Moreover, we performed simulations to reveal the effect of the remaining error-vectors on the target-CF; Table [3](#S4.T3 "Table 3 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") reports the CFs generated by all possible error-vectors (\# error-vectors = 8). We observe that the CF-error exceeds the allowable error-tolerance limit in all such cases. The maximum CF-error in the target-CF, $\\frac{1.61}{128}$ (> error-tolerance limit), occurs for the error-vector \[−,*ϕ*,+,*ϕ*,*ϕ*,−\].
Table 3: Effect of different error-vectors on the target-CF = $\\frac{87}{128}$ for split-error = 7%.
| Error-vector | Produced CF×128\* | Produced CF-error×128 | CF-error×128 < 0.5? |
|----------------------------|-------------------|-----------------------|------------------------|
| \[+, *ϕ*, +, *ϕ*, *ϕ*, +\] | 85.71 | 1.29 | No |
| \[+, *ϕ*, +, *ϕ*, *ϕ*, −\] | 88.56 | 1.56 | No |
| \[+, *ϕ*, −, *ϕ*, *ϕ*, +\] | 85.47 | 1.53 | No |
| \[+, *ϕ*, −, *ϕ*, *ϕ*, −\] | 88.36 | 1.36 | No |
| \[−, *ϕ*, +, *ϕ*, *ϕ*, +\] | 85.76 | 1.24 | No |
| \[−, *ϕ*, +, *ϕ*, *ϕ*, −\] | 88.61 | 1.61 | No |
| \[−, *ϕ*, −, *ϕ*, *ϕ*, +\] | 85.52 | 1.48 | No |
| \[−, *ϕ*, −, *ϕ*, *ϕ*, −\] | 88.41 | 1.41 | No |
\*: Results are shown up to two decimal places.
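The eight sign assignments of Table 3 (7% errors at Steps 1, 3, and 6) can be re-scanned under the same split-error model; the function name and the volume-tracking formulation below are ours.

```python
from itertools import product

def cf_with_errors(num, n, err):
    """CF (x 2**n) after an erroneous dilution targeting num / 2**n.
    err maps a step number to the signed split-error of the daughter kept next."""
    bits = [(num >> i) & 1 for i in range(n)]   # reagent per step, LSB first
    s, v = 0.0, 1.0                             # sample content / volume; buffer start
    for step, b in enumerate(bits, start=1):
        s, v = s + b, v + 1.0                   # (1:1) mix with a fresh unit droplet
        keep = 0.5 * (1.0 + err.get(step, 0.0))
        s, v = s * keep, v * keep               # unbalanced split
    return s / v * 2 ** n

# All 2**3 sign assignments for 7% errors at Steps 1, 3 and 6:
for signs in product((+1, -1), repeat=3):
    err = dict(zip((1, 3, 6), (0.07 * sg for sg in signs)))
    print(signs, round(cf_with_errors(87, 7, err), 2))   # matches the rows of Table 3
```

For instance, the all-larger assignment yields 85.71 and the assignment (smaller, larger, smaller) yields 88.61, the worst case of Table 3.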
Note that volumetric split-errors may also occur in the remaining mix-split steps, i.e., every mix-split step of the mixing path may suffer from a volumetric split-error. Therefore, it is also essential to reveal the effect on the target-CF when an error occurs in each mix-split operation.
We further performed experiments to find the effect of such volumetric split-errors on the target-CF = $\\frac{87}{128}$. The mix-split graph of this target-CF consists of 7 mix-split operations (see Fig. [5](#S2.F5 "Figure 5 ‣ 2.3 Working principle of cyber-physical based DMFBs ‣ 2 Error-recovery approaches: prior art ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")). During simulation, we inject a split-error into each mix-split step of the mixing path except the final one (Mix-Split Step 7), since a volumetric split-error in the final mix-split operation does not alter the target-CF (only the volumes of the two resulting target-droplets may change). There are thus six potential mix-split steps where a split-error can occur, and hence $2^{6} = 64$ possible error-vectors. We set the split-error to +0.07 or -0.07 in each mix-split step, depending on the sign in the corresponding position of the vector. We performed the experiments exhaustively; Table [4](#S4.T4 "Table 4 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") reports the results for some representative error-vectors for the target-CF = $\\frac{87}{128}$. We see that the CF-error exceeds the allowable error-range in every case.
Table 4: Effect of some error-vectors of length 6 on the target-CF = $\frac{87}{128}$ for split-error = 7%.
| Error-vector | Produced CF×128\* | Produced CF-error×128 | CF-error×128 < 0.5? |
|-----------------|-------------------|-----------------------|------------------------|
| \[+,+,+,+,+,+\] | 85.58 | 1.42 | No |
| \[+,−,+,+,+,+\] | 85.53 | 1.47 | No |
| \[+,−,−,+,+,+\] | 85.26 | 1.74 | No |
| \[+,−,+,+,−,+\] | 85.08 | 1.92 | No |
| \[−,+,−,−,+,−\] | 88.78 | 1.78 | No |
| \[−,+,+,−,−,−\] | 88.82 | 1.82 | No |
| \[−,+,−,−,−,−\] | 88.64 | 1.64 | No |
| \[−,−,−,−,−,−\] | 88.61 | 1.61 | No |
\*: Results are shown up to two decimal places.
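As an illustration, the exhaustive scan behind Table 4 can be sketched in a few lines of Python. This is a minimal sketch, assuming a bit-serial (LSB-first) mixing path and the single-step error recurrence derived in Section 5; the function name and the exact error bookkeeping are our own, so the printed numbers illustrate the method rather than reproduce the table.

```python
from itertools import product

def dilute(target_num, bits, error_vector=None, eps=0.07):
    """Simulate a bit-serial mix-split path toward target_num / 2**bits.

    error_vector: +1/-1/0 for each of the splits after steps 1..bits-1;
    a split-error of sign s gives the retained daughter volume (1 + s*eps).
    Returns the concentration factor (CF) of the final droplet.
    """
    if error_vector is None:
        error_vector = [0] * (bits - 1)
    # LSB-first binary digits of the target select sample (1) or buffer (0)
    digits = [(target_num >> k) & 1 for k in range(bits)]
    cf = 0.0                      # start from a pure buffer droplet (CF = 0)
    for step, b in enumerate(digits):
        if step == 0:
            cf = (cf + b) / 2.0   # first mix: two fresh unit droplets
        else:
            e = error_vector[step - 1] * eps
            # droplet from the previous split carries volume 1 + e
            cf = (cf * (1 + e) + b) / (2 + e)
    return cf

# the error-free path reproduces the target exactly
assert abs(dilute(87, 7) - 87 / 128) < 1e-12

# exhaustive scan over all 2**6 sign patterns, as in the experiment above
worst = max(abs(dilute(87, 7, v) - 87 / 128) * 128
            for v in product([+1, -1], repeat=6))
print(f"max |CF-error| x 128 = {worst:.3f}")
```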
Figure 10: Value of (CF-error×128) for all possible error-vectors with 7% split-error for the target-CF = $\frac{41}{128}$ and $\frac{87}{128}$.
Figure 11: Value of (CF-error×128) for all possible error-vectors with 3% split-error for the target-CF = $\frac{41}{128}$ and $\frac{87}{128}$.
We also show the CF-error for all possible error-vectors in Fig. [10](#S4.F10 "Figure 10 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") for the target-CF = $\frac{41}{128}$ and $\frac{87}{128}$ (the complement of $\frac{41}{128}$) for demonstration purposes. We plot the error-vectors (setting + → 0, − → 1) along the X-axis and arrange them from left to right in gray-code order, so that any two adjacent vectors are only unit Hamming distance apart. The Y-axis shows the corresponding values of CF-error×128. Based on exhaustive simulation, we observe that the CF-error in both target-CFs becomes maximum (1.977) for the error-vector \[−,+,+,−,+,−\] (at the 57th position on the X-axis). Note that for the target-CF = $\frac{41}{128}$, the CF-errors are multiplied by -1 for ease of analysis. None of these outcomes lies within the safe-zone (the error-tolerance limit). We perform a similar experiment for both target-CFs with a 3% split-error and observe that in 12 cases the errors lie within the tolerance zone, and the maximum CF-error reduces to $\frac{0.84}{128}$, again for the error-vector \[−,+,+,−,+,−\] (Fig. [11](#S4.F11 "Figure 11 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")), for both CFs. However, for the target-CF = $\frac{17}{128}$, many of the CF-errors generated by the 64 possible error-vectors of length 6, namely 29 (32), lie within the error-tolerance zone for 7% (3%) split-errors (see Fig. [12](#S4.F12 "Figure 12 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")). Note that the magnitude of the CF-error decreases in each case when the split-error reduces to 3%.
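The gray-code arrangement of the X-axis can be generated with the standard binary-reflected construction. A short sketch (the +/− encoding follows the convention + → 0, − → 1 used in the plots; the function name is ours):

```python
def gray_order(n):
    """Enumerate all n-position error-vectors so that consecutive
    vectors differ in exactly one position (binary-reflected Gray code)."""
    vectors = []
    for i in range(2 ** n):
        g = i ^ (i >> 1)                      # Gray code of index i
        # bit 0 -> '+', bit 1 -> '-', read MSB first
        vectors.append(['-' if (g >> (n - 1 - k)) & 1 else '+'
                        for k in range(n)])
    return vectors

vecs = gray_order(6)
assert len(vecs) == 64
# any two adjacent vectors are unit Hamming distance apart
for a, b in zip(vecs, vecs[1:]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```

With this ordering, position 57 (0-indexed) is exactly the vector \[−,+,+,−,+,−\] singled out in the text.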
Figure 12: Value of (CF-error×128) for all possible error-vectors for 7% and 3% split-error for the target-CF = $\frac{17}{128}$.
We further perform simulations measuring the maximum CF-error generated for every target-CF of accuracy level 7 (with 7% split-error). We plot the results in Fig. [13](#S4.F13 "Figure 13 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). We observe that the maximum CF-error is largest ($\frac{4.12}{128}$) for the target-CFs $\frac{63}{128}$ and $\frac{65}{128}$. The error-vector \[−, −, −, −, −, −\] generates the maximum CF-error for both of these target-CFs.
Figure 13: Maximum value of (CF-error×128) for all target-CFs with accuracy level = 7.
5 Maximum CF-error: A justification
-----------------------------------
Motivated by the need for a formal characterization of the maximum CF-error in the target-CF, we have performed a rigorous theoretical analysis along with further experiments to study the properties of the CF-error. The following analysis reveals how the problem of error-tolerance can be handled in a more concrete fashion.
Consider a particular target-CF $C_{t}$ and its dilution tree. Let the current mix-split step be $i$ (other than the last step, where the occurrence of a split-error does not matter), and let the intermediate-CF arriving at $i$ be $C_{i}$. If a 1X sample (buffer) droplet is added in this step, it produces CF = $\frac{C_{i} + 1}{2}$ (respectively $\frac{C_{i}}{2}$), assuming that the volume of the droplet arriving at $i$ is correct (1X).
Consider the first case, and assume that the droplet arriving at $i$ suffered a volumetric split-error of magnitude $\epsilon$ at the previous step. Hence, after mixing with a sample droplet, the intermediate-CF will become $\frac{C_{i}(1 + \epsilon) + 1}{2 + \epsilon}$; the sign of $\epsilon$ is positive (negative) when the incoming intermediate-droplet is larger (smaller) than the ideal volume 1X. Thus, the error ($E_{r}$) in the intermediate-CF becomes:
$$E_{r} = \frac{C_{i} + 1}{2} - \frac{C_{i}(1 + \epsilon) + 1}{2 + \epsilon} = \frac{\epsilon(1 - C_{i})}{4 + 2\epsilon}$$
(3)
Figure 14: CF-error at the next mix-split step (for positive and negative single split-error).
From Equation [3](#S5.E3 "(3) ‣ 5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"), it can be observed that the magnitude of $E_{r}$ becomes larger when $\epsilon$ is negative, because a negative error reduces the value of the denominator. In other words, the CF-error is larger when a smaller-volume droplet arrives at Step $i$ than when a larger-volume droplet arrives at the mixer: the effect of the error is not symmetrical. (Note that the two daughters of an erroneous split carry complementary volume deviations, so if they were mixed back together the error would cancel.) We perform an experiment assuming a 7% volumetric error, i.e., by setting $\epsilon$ = +0.07 or -0.07 in one mix-split step, for all values of the intermediate-CF. The corresponding results are shown in Fig. [14](#S5.F14 "Figure 14 ‣ 5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). It can be observed that a negative split-error always produces the larger CF-error in the target-CF for a single split-error (an error-vector of length 1). Similar effects are observed when a buffer droplet is mixed at Step $i$.
We also perform a simulation by varying $C_{i}$ from 0 to 1 and $\epsilon$ from -0.07 to 0.07 in Equation [3](#S5.E3 "(3) ‣ 5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") and calculate the CF-errors. We report the results as 3-dimensional (3D) plots (with different views) in Fig. 15 and Fig. [16](#S5.F16 "Figure 16 ‣ 5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"), respectively. We observe that the simulation results match the theoretical results (see Fig. [14](#S5.F14 "Figure 14 ‣ 5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")), i.e., a single negative split-error always produces the larger CF-error. However, the effect of the error on a target-CF becomes much more complicated when multiple split-errors are considered.
Figure 15: CF-error at the next mix-split step (for positive and negative single split-error).
Figure 16: CF-error at the next mix-split step (for positive and negative single split-error).
In order to demonstrate the intricacies, we have performed a representative analysis considering three consecutive split-errors. For simplicity, let us assume that an error of magnitude *ϵ* is injected in each of these three mix-split steps. Generalizing Equation [3](#S5.E3 "(3) ‣ 5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"), we can show that the corresponding CF-error observed after three steps will be:
$$E_{r} = \frac{\left(\left(C_{i}(1+\epsilon) + r_{1}\right)(1+\epsilon) + r_{2}\right)(1+\epsilon) + r_{3}}{\left(2 + (2+\epsilon)(1+\epsilon)\right)(1+\epsilon) + 4} - \frac{C_{i} + r_{1} + r_{2} + r_{3}}{8}$$
where $r_{i}$ = 1 (for a sample droplet) and $r_{i}$ = 0 (for a buffer droplet), for Step $i$, $i$ = 1, 2, 3.
Figure 17: CF-error for triple split-errors.
As before, we assume that $\epsilon$ = +0.07 or -0.07. Since there are three consecutive split steps, we have eight possible error-vectors \[$\phi_{\alpha}$, −, −, −, $\phi_{\beta}$\], \[$\phi_{\alpha}$, −, −, +, $\phi_{\beta}$\], …, \[$\phi_{\alpha}$, +, +, +, $\phi_{\beta}$\] for a given combination of $r_{1}$, $r_{2}$, $r_{3}$, where 0 ≤ $\alpha$ ≤ ($n$ - 4), 0 ≤ $\beta$ ≤ ($n$ - 4), and $\alpha$ + $\beta$ + 3 = $n$ - 1 ($n$ is the accuracy level; $\phi_{\alpha}$ denotes a run of $\alpha$ error-free positions). Fig. [17](#S5.F17 "Figure 17 ‣ 5 Maximum CF-error: A justification ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration") shows the errors in CF observed after three consecutive split-errors by setting $r_{1}$ = 0, $r_{2}$ = 1, $r_{3}$ = 1, for all values of the starting-CF and for all eight error-vectors. From the nature of the plot, it is apparent that it is very hard to predict for which error-vector the maximum CF-error will occur, even for a given combination of $r$-values. The maximum error depends on the CF-value at which the critical-split-section begins and also on the error-vector that is chosen (i.e., whether to proceed with the larger or the smaller daughter-droplet). Furthermore, the error-expression becomes increasingly complex as the number of split-errors grows.
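The qualitative behaviour can be explored by iterating the single-step recurrence three times instead of using the closed form. An illustrative sketch (the bookkeeping of which split feeds which mix is simplified here, so this is a variant of the expression above, not a reproduction):

```python
from itertools import product

def triple_error(c_start, signs, r=(0, 1, 1), eps=0.07):
    """CF-error after three consecutive erroneous mix-split steps,
    obtained by iterating the single-step recurrence of Equation (3)."""
    ideal, actual = c_start, c_start
    for s, ri in zip(signs, r):
        e = s * eps
        ideal = (ideal + ri) / 2                    # error-free path
        actual = (actual * (1 + e) + ri) / (2 + e)  # erroneous path
    return ideal - actual

# the eight sign combinations of Fig. 17, for one starting CF
for signs in product([+1, -1], repeat=3):
    err = triple_error(0.5, signs)
    assert abs(err) < 0.1      # bounded, but sign and magnitude vary

# no injected error -> no CF-error
assert triple_error(0.5, (0, 0, 0)) == 0.0
```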
As an example, we perform experiments to study the fluctuations of the error in a particular target-CF for all combinations of error-vectors and show the plot in Fig. [10](#S4.F10 "Figure 10 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"). Note that there are several peaks in Fig. [10](#S4.F10 "Figure 10 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration"), and based on exhaustive simulation, the value of (maximum-error × 128) is observed to be 1.977, which occurs for the error-vector \[−, +, +, −, +, −\] (at the 57th position on the X-axis in Fig. [10](#S4.F10 "Figure 10 ‣ 4 Worst-case error in the target- C F ‣ Dilution with Digital Microfluidic Biochips: How Unbalanced Splits Corrupt Target-Concentration")).
From the above analysis and experimental results, we conclude that it is hard to formulate a mechanism that identifies the exact “maximum-error-vector” without exhaustive simulation.
6 Conclusion
------------
In this paper, we have first analyzed the effect of single volumetric split-errors (using the larger- or smaller-volume erroneous daughter-droplet) and found, both theoretically and experimentally, that the CF-error in the target-droplet becomes larger when the smaller-volume daughter-droplet is used in the assay, i.e., when $\epsilon$ is negative. We also observed that the CF-error in a target-droplet increases with the magnitude of the split-error. Next, we performed various experiments to observe the effect of multiple split-errors on the target-CF and noticed that it may be affected by any combination of erroneous (smaller/larger) droplets during the execution of mix-split operations, and that the CF-error grows when the target-CF is affected by a large number of split-errors. We performed a rigorous analysis to identify the error-vector that causes the maximum CF-error in the target-droplet. Unfortunately, it appears very difficult to devise an algorithmic solution that identifies such a maximum-error-vector under multiple split-errors without exhaustive search. Nevertheless, the observations and findings summarized in this paper provide useful inputs to the development of sample-preparation methods that can cope with split-errors even without any sensors and/or rollback (such as those recently presented in \[[16](#bib.bib16), [22](#bib.bib22)\]).
References
----------
-
1 F. Mugele and J.-C. Baret, “Electrowetting: from basics to applications,” *Journal of Physics: Condensed Matter*, vol. 17, no. 28, pp. 705–774, 2005.
-
2 K. Chakrabarty and F. Su, *Digital Microfluidic Biochips - Synthesis, Testing, and Reconfiguration Techniques*. CRC Press, 2007.
-
3 V. Srinivasan, V. K. Pamula, and R. B. Fair, “An integrated digital microfluidic lab-on-a-chip for clinical diagnostics on human physiological fluids,” *Lab Chip*, vol. 4, pp. 310–315, 2004.
-
4 K. Chakrabarty, R. B. Fair, and J. Zeng, “Design tools for digital microfluidic biochips: Toward functional diversification and more than Moore,” *IEEE Trans. on CAD*, vol. 29, no. 7, pp. 1001–1017, 2010.
-
5 M. Alistar, P. Pop, and J. Madsen, “Redundancy Optimization for Error Recovery in Digital Microfluidic Biochips,” *Design Automation for Embedded Systems*, vol. 19, no. 1-2, pp. 129–159, 2015.
-
6 W. Thies, J. P. Urbanski, T. Thorsen, and S. P. Amarasinghe, “Abstraction layers for scalable microfluidic biocomputing,” *Natural Computing*, vol. 7, no. 2, pp. 255–275, 2008.
-
7 Y.-L. Hsieh, T.-Y. Ho, and K. Chakrabarty, “Biochip Synthesis and Dynamic Error Recovery for Sample Preparation Using Digital Microfluidics,” *IEEE Trans. on CAD*, vol. 33, no. 2, pp. 183–196, 2014.
-
8 S. Poddar, S. Ghoshal, K. Chakrabarty, and B. B. Bhattacharya, “Error-correcting sample preparation with cyberphysical digital microfluidic lab-on-chip,” *ACM TODAES*, vol. 22, no. 1, pp. 2:1–2:29, 2016.
-
9 S. Roy, B. B. Bhattacharya, and K. Chakrabarty, “Optimization of dilution and mixing of biochemical samples using digital microfluidic biochips,” *IEEE Trans. on CAD*, vol. 29, pp. 1696–1708, 2010.
-
10 J.-D. Huang, C.-H. Liu, and T.-W. Chiang, “Reactant minimization during sample preparation on digital microfluidic biochips using skewed mixing trees,” in *Proc. of ICCAD*, 2012, pp. 377–384.
-
11 C.-H. Liu, T.-W. Chiang, and J.-D. Huang, “Reactant Minimization in Sample Preparation on Digital Microfluidic Biochips,” *IEEE Trans. on CAD*, vol. 34, no. 9, pp. 1429–1440, 2015.
-
12 D. Mitra, S. Roy, S. Bhattacharjee, K. Chakrabarty, and B. B. Bhattacharya, “On-Chip Sample Preparation for Multiple Targets Using Digital Microfluidics,” *IEEE Trans. on CAD*, vol. 33, no. 8, pp. 1131–1144, 2014.
-
13 S. Bhattacharjee, S. Poddar, S. Roy, J.-D. Huang, and B. B. Bhattacharya, “Dilution and mixing algorithms for flow-based microfluidic biochips,” *IEEE Trans. on CAD*, vol. 36, no. 4, pp. 614–627, 2017.
-
14 Y.-L. Hsieh, T.-Y. Ho, and K. Chakrabarty, “A Reagent-Saving Mixing Algorithm for Preparing Multiple-Target Biochemical Samples Using Digital Microfluidics,” *IEEE Trans. on CAD*, vol. 31, no. 11, pp. 1656–1669, 2012.
-
15 S. Bhattacharjee, R. Wille, J.-D. Huang, and B. Bhattacharya, “Storage-aware sample preparation using flow-based microfluidic lab-on-chip,” in *Proc. of DATE*, 2018, pp. 1399–1404.
-
16 S. Poddar, R. Wille, H. Rahaman, and B. B. Bhattacharya, “Error-Oblivious Sample Preparation with Digital Microfluidic Lab-on-Chip,” *IEEE Trans. on CAD*, 2018, doi: [{10.1109/TCAD.2018.2864263}](%7B10.1109/TCAD.2018.2864263%7D).
-
17 Y. Zhao, T. Xu, and K. Chakrabarty, “Integrated control-path design and error recovery in the synthesis of digital microfluidic lab-on-chip.” *ACM JETC*, vol. 6, no. 3, pp. 11:1–11:28, 2010.
-
18 Y. Luo, K. Chakrabarty, and T.-Y. Ho, “Error Recovery in Cyberphysical Digital Microfluidic Biochips,” *IEEE Trans. on CAD*, vol. 32, no. 1, pp. 59–72, 2013.
-
19 ——, “Real-time error recovery in cyberphysical digital-microfluidic biochips using a compact dictionary,” *IEEE Trans. on CAD*, vol. 32, no. 12, pp. 1839–1852, 2013.
-
20 ——, “Biochemistry Synthesis on a Cyberphysical Digital Microfluidics Platform Under Completion-Time Uncertainties in Fluidic Operations,” *IEEE Trans. on CAD*, vol. 33, no. 6, pp. 903–916, 2014.
-
21 C. A. Mein, B. J. Barratt, M. G. Dunn, T. Siegmund, A. N. Smith, L. Esposito, S. Nutland, H. E. Stevens, A. J. Wilson, M. S. Phillips, N. Jarvis, S. Law, M. D. Arruda, and J. A. Todd, “Evaluation of single nucleotide polymorphism typing with invader on pcr amplicons and its automation.” *Genome Research*, vol. 10, no. 3, pp. 330–343, 2000.
-
22 Z. Zhong, R. Wille, and K. Chakrabarty, “Robust sample preparation on low-cost digital microfluidic biochips,” in *Proc. of ASP-DAC*, 2019.
---
abstract: 'Extreme-order statistics is applied to the branches of an observer in a many-worlds framework. A unitary evolution operator for a step of time is constructed, generating pseudostochastic behaviour with a power-law distribution when applied repeatedly to a particular initial state. The operator models the generation of records, their dating, the splitting of the wavefunction at quantum events, and the recalling of records by the observer. Due to the huge ensemble near an observer’s end, the branch with the largest number of records recalled contains almost all “conscious dimension”.'
author:
- |
L. Polley\
Institute of Physics\
Oldenburg University\
26111 Oldenburg, FRG
title: 'Modelling an observer’s branch of extremal consciousness'
---
Introduction
============
Extreme-order statistics, dealing with distributions of largest, second-largest values, etc., in a random sample [@Embrechts2003], has not received much attention in quantum theory. However, statistics of outliers can be striking in a many-worlds scenario, due to the huge number of branches, providing the statistics is of the power-law type. If a random draw of some information-related quantity is made in each branch, the excess $l$ of the largest over the second-largest draw would be huge. That excess exponentiates to $2^l$ if information is processed in qubits, each of which has two Hilbert-space dimensions. Thus, using the dimension as a weight of a branch [@Isham1994], the weight of the extremal branch may exceed by far the total weight of all other branches. This might be a realisation, for an entire history of an observer rather than for a single measurement, of the idea that “massive redundancy can cause certain information to become objective, at the expense of other information” [@Zurek2005].
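The size of this excess can be illustrated with a small numerical experiment. This is a sketch, not part of the model: the Pareto exponent and sample size are arbitrary illustration parameters.

```python
import random

def extreme_gap(n_branches, alpha=1.5, seed=1):
    """Draw one power-law (Pareto-tail) sample per 'branch' and return
    the largest and second-largest values."""
    rng = random.Random(seed)
    # inverse-CDF sampling: X = U**(-1/alpha) has a Pareto tail
    draws = sorted(rng.random() ** (-1.0 / alpha)
                   for _ in range(n_branches))
    return draws[-1], draws[-2]

m1, m2 = extreme_gap(10 ** 6)
print(f"largest = {m1:.1f}, runner-up = {m2:.1f}, excess l = {m1 - m2:.1f}")
# with qubit weighting, the extremal branch would carry a relative
# weight of order 2**(m1 - m2) over the runner-up
```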
The many-worlds scenario [@Everett1973] is implicit in decoherence theory [@Zurek1981] which has been able to explain why macroscopic superpositions evolve, rapidly, into states with the properties of classical statistical ensembles. There remains, however, the “problem of outcomes” [@SchlosshauerBuch2007]: Why is it that an observer of a superposition always finds himself ending up in a pure state instead of a mixture? The problem is not with objective facts, but with the consciousness of the observer. For, if any statement is derived from observations alone, it can only involve observations from one branch of the world because of vanishing matrix elements between different branches. Nowhere on the branching tree an objective contradiction arises. As to a conscious observer, however, we cannot tell by present knowledge whether he needs to “observe” his various branches in order to become aware of them, or whether it suffices for him to “be” the branches, whatever that means in physical terms [@Penrose1997]. In the model constructed below, the “observer” could in principle be aware of all of his branches, but, due to statistics of extremes under a power law, this amounts to being aware of one extremal branch plus fringe effects.
We shall be modelling, rather abstractly, an observer “as a finite information-processing structure” [@Donald1999], with an emphasis on “processing”, because it means nothing to the observer if a bit of information “has” a certain value unless the value is revealed by interaction. We shall take a history-based approach mainly for reasons of extreme-value statistics under a power-law: outliers are most pronounced in large random samples, suggesting to use as a sample the entire branching tree. In addition, “the history of a brain’s functioning is an essential part of its nature as an object on which a mind supervenes” [@Donald1997].
Ideally, a model of quantum measurement would be based on some Hamiltonian without explicit time dependence; in particular, without an external random process in time. Such an approach via the Schrödinger equation, however, would be hindered by difficulties in solving the equation. We shall greatly simplify our task by considering time evolution only in discrete steps, and by constructing a unitary operator directly for a time step. It would be possible, though of little use, to identify a Hamiltonian generating the evolution operator. A further simplification will be to consider evolution only during an observer’s lifetime.
If we describe measurements in the aforementioned way, we must show how stochastic behaviour can emerge *in the observation* of quantum systems. It will be sufficient to use random numbers in the construction of the time-step operator, in presumed approximation to real-world dynamical complexity. Under repeated application of the (constant) operator, evolution is deterministic in principle. When the application is to a particular initial state, however, the built-in randomness becomes effective.
The evolution operator is constructed as a product of unitaries. This facilitates evaluations; in particular, it enables a straightforward definition of “conscious dimension”.
The scenario of the model is as follows. A quantum system is composed of a record-generating part, like some kind of moving body; of records keeping track of the motion; and of subsystems associated with the records, allowing for demolition-free reading. The observer appears through the subsystems and through part of the evolution operator. At his “birth”, all records are in blank states, while the record-generating body is in some quasiclassical state which *determines* the subsequent evolution. The evolution operator provides four kinds of event: Quasiclassical motion for most of the time, accompanied by the writing of records; dating of records by conservative ageing; splitting of the motion into a superposition of two equal-amplitude branches at certain points of the body’s orbit; and the reading of records by a “scattering” interaction with the subsystems of the records.
Random elements in the evolution are: Duration of quasiclassical sections; the states of the body at which evolution continues after a split; and most crucially, the number of records being recalled within a timestep. For the latter number, a power-law distribution is assumed[^1]. No attempt is made here to justify the distribution—it should be regarded as a working hypothesis for the purpose of demonstrating the potential relevance of power-law statistics for quantum measurement.
The state of superposition, emerging by repeated steps of evolution from an initial state of the chosen variety, can be made explicit to a sufficient degree. It can be put in correspondence with a branching tree of the general statistical theory of branching processes. Statistical independence as required by that theory is exactly satisfied by the model evolution, due to random draws employed in constructing the evolution operator.
Consciousness is assumed to reside in unitary rotations of the recalling subsystems of the records, triggered when a record of a special class is encountered. The trigger is associated with a random draw determining the number of “redundant” records to be processed. Somewhere on the branching tree (that is, in some factor of the tensor products superposed) that number takes the extremal value. Because of the branching structure, the probability is large for that value to occur near the end of an observer’s lifetime. Hence, it singles out (almost) an entire history. A study into the probability distribution for the difference between the largest and the second-largest draw finally shows that the dimension of the subspace affected by conscious rotations in all branches of the superposition is almost certainly exhausted by the dimension of conscious rotations in the extremal branch.
The “objective” factors of the evolution operator are constructed in sections \[secQclStates\] to \[secDating\]. Their effect on the initial state is evaluated in section \[secTheSuperposition\]. Conscious processing of records is modelled in section \[secRR\], while its statistics is analysed in section \[secExtremalBranch\]. Section \[Born\] shows how the model would generalise to branching with non-equal amplitudes or into more than two branches. Conclusions are given in section \[Conclusion\].
Construction of the evolution operator\[secConstruction\]
=========================================================
Record-generating system\[secQclStates\]
----------------------------------------
A basic assumption of the model is that for all but a sparse subset of time steps, evolution is quasi-classical, like a moving body represented by a coherent state. Evolution on such a section is determined by a small set of dynamical variables, while a large number of “redundant” records is written along the path. In the model’s approximation, instantaneous dynamical variables are represented by one number from the orbital set $$\label{TriggerIndicesSet}
\{ 1,\ldots,K \} = : {\cal O}$$ The ordering is such that quasi-classical evolution takes the record-generating system from index $k$ to index $k+1$ within a step of time. The corresponding basis vectors of the record-generating system are denoted by $$\label{BasisRecordGeneratingSystem}
\psi_k \qquad k \in {\cal O}$$ They are assumed to be orthonormal.
![\[RecordAgeing\]States of a record represented by $N$ dots on a circle, ageing (without loss of information) under repeated writing operations.](RecordAgeing)
Structure of records and the writing operation
----------------------------------------------
The focus of the model is on an observer’s interaction with records generated during his personal history. The term “record” will refer to observer’s individual memories as well as to more objective forms of recording. Moreover, it will be convenient to use the term “record” as an abbreviation for “recording unit”. Thus, records can be in “blank” or “written” states.
Each recording unit $r_i$ decomposes into a subsystem holding the information, and another subsystem allowing the observer to interactively read the information without destroying it. The information consists, firstly, of some quality represented by the index of the record, and secondly, in the time of recording, or rather the age of the record. The age is encoded in canonical basis vectors as follows. $$\label{AgeBasis}
\begin{array}{l} e_0 = \mbox{blank state} \\
e_j = \mbox{written since $j$ steps of time} \qquad j = 1,\ldots,N-1
\end{array}$$ This is illustrated in figure \[RecordAgeing\]. The information about age will be crucial for composing an observer’s conscious history by one extremal reading. Only ages up to $N-1$ steps of time are possible, which is sufficient if we are dealing with a single observer.
Both the generation of a record and its ageing can be described by a writing operation $W$. Acting on an indicated record, it acts on the first factor of (\[DefRecordState\]) according to $$\label{DefWrite}
\begin{array}{l}
W e_i = e_{i+1} \qquad i=0,1,\ldots,N-2 \\
W e_{N-1} = e_0
\end{array}$$ The second of these equalities is unwarranted, expressing an erasure of the record and thus limiting the model to less than $N$ steps of time, but there does not seem to be any better choice consistent with unitarity. Obviously, writing operations on different records commute, $$\label{Wcommuting}
W_i W_j = W_j W_i \qquad \mbox{for all }i,j$$
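On the canonical age basis, $W$ is simply a cyclic permutation of indices, which makes its unitarity and the commutation relation easy to verify. An illustrative sketch in Python (the dictionary stands in for two tensor factors):

```python
def W(age, N):
    """Writing operation on a record's age register: a blank record
    (age 0) becomes freshly written (age 1), written records age by one
    step, and the cyclic wrap-around keeps the map a permutation."""
    return (age + 1) % N

N = 8
# W permutes the N basis states, hence it is unitary
assert [W(i, N) for i in range(N)] == [1, 2, 3, 4, 5, 6, 7, 0]

# W acting on different records (different tensor factors) commutes:
# each copy touches only its own age register, so order is irrelevant
r = {"r1": 0, "r2": 5}
r_a = dict(r); r_a["r1"] = W(r_a["r1"], N); r_a["r2"] = W(r_a["r2"], N)
r_b = dict(r); r_b["r2"] = W(r_b["r2"], N); r_b["r1"] = W(r_b["r1"], N)
assert r_a == r_b
```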
To allow for the observer’s conscious interaction with a record, a two-dimensional factor space is provided, vaguely representing the firing and resting states of a neuron. Reading is modelled as a scattering of some unit vector $s$ into some other unit vector $s'$, in a way that could, in an extended model, depend on the index and the age of the record. In the present model, only dimensions will be counted, so no further specification of $s$ and $s'$ is required. The Hilbert space of a single record is thus spanned by product vectors of the form $$\label{DefRecordState}
r_i = e_{n_i} \otimes s_i \qquad \mbox{with }~
\left\{\begin{array}{l} \displaystyle e_{n_i}\mbox{ an $N$-dimensional canonical unit vector}\\
s_i \mbox{ a $2$-dimensional unit vector} \end{array}\right.$$ The Hilbert space of all records possible is spanned by product vectors $$r_1 \otimes r_2 \otimes \cdots \otimes r_I$$ where $I$, in view of the redundancy required, is much larger than $K$. The set of all indices of records will be denoted by $\cal I$. It will be convenient to use the following abbreviation. $$\label{RecordStatesShorthand}
V({\cal A}) ~ = ~ \parbox{60mm}{any tensor product of records in which all $r_i$
with $i\in{\cal A}$ are blank}$$
Initial state\[secInitialState\]
--------------------------------
We assume that when the observer is “born” the record-generating system is in some quasiclassical state $\psi_{k_\mathrm{in}}$ in which also the observer’s identity is encoded. All records of the personal history are initially blank. Using abbreviation (\[RecordStatesShorthand\]), the assumed initial state can be written as $$\label{InitialState}
|\mathrm{in}\rangle = \psi_{k_\mathrm{in}} \otimes V({\cal I})$$ This choice of an initial state will imply that, in the terms of [@Zurek2005], we are restricting to a “branching-state ensemble”. Such a restriction is necessary for pseudorandom behaviour to emerge under evolution by repeated action of a unitary time-step operator. By contrast, eigenstates of that operator would evolve without any randomness.
$k_\mathrm{in}$ is located on some string of quasiclassical events, as defined in section \[secRS\]. It is this string, chosen out of many similar ones, that acts as a “seed” which determines the observer’s pseudo-random history.
Quasiclassical evolution and quantum events\[secRS\]
----------------------------------------------------
The idea of quasiclassical evolution, assumed to prevail for most of the time, is $\psi_k\to\psi_{k+1}$ in a timestep (section \[secQclStates\]). This is to be accompanied by the writing of records. When the record-generating system is in the state $\psi_k$ we assume that writing operations $W_i$ act on all records $r_i$ whose indices, or addresses, are in a set ${\cal A}_{\mathrm{W}}(k)$. While these records are redundant, we assume that $k$ can be retrieved from each of them, which requires $$\label{AW(k)intersection}
{\cal A}_\mathrm{W}(k) \cap {\cal A}_\mathrm{W}(l) = \emptyset ~~\mbox{for}~~ k\neq l$$ When the record-generating system arrives at an index $k$ in a sparse subset ${\cal Q}\subset{\cal O}$, we assume that a superposition of two branches (“up” and “down”) is formed, with equal amplitudes in both branches. Quasiclassical evolution is assumed to jump from $k$ to $u(k)$ or $d(k)$, respectively, and continue there, as illustrated in figure \[SplittingEvolution\].
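The disjointness condition (\[AW(k)intersection\]) can be illustrated by assigning each orbital index its own block of record addresses, so that $k$ is retrievable from every record it writes. A sketch (the block layout and the redundancy parameter are our own illustration):

```python
def make_write_addresses(K, redundancy):
    """Assign each orbital index k a disjoint block of record addresses."""
    return {k: set(range((k - 1) * redundancy, k * redundancy))
            for k in range(1, K + 1)}

A_W = make_write_addresses(K=5, redundancy=3)

# disjointness: A_W(k) ∩ A_W(l) = ∅ for k ≠ l
for k in A_W:
    for l in A_W:
        if k != l:
            assert not (A_W[k] & A_W[l])

# inverse lookup: any record address identifies its writer k uniquely
writer = {i: k for k, addrs in A_W.items() for i in addrs}
assert writer[7] == 3
```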
![\[SplittingEvolution\] (a) Sections of quasi-classical evolution (thick lines) which, when an index in the set $\cal Q$ is encountered, split into superpositions of continued quasi-classical evolution. Length of lines, and addresses after splitting, are random elements of the evolution operator. When an initial state is chosen, defining an observer’s origin and the “seed” for pseudo-random evolution, a branching tree results whose first and second branches are shown in (b).](ModelEvolution)
For simplicity of evaluation, we choose $u(k)$ and $d(k)$ to coincide with starting points of quasiclassical sections. These are indices in ${\cal O}$ subsequent to an index in ${\cal Q}$. We include index $1$ as a starting point and assume $K\in{\cal Q}$ to avoid “boundary” effects. Also, we must prevent temporal loops from occurring within an observer’s lifetime. That is, the evolution operator $U_\mathrm{O}$ should be constructed such that no loops arise within $T$ repeated applications. Loops cannot be avoided entirely because for every $k\in {\cal Q}$ there are two jumping addresses, so every section of quasi-classical evolution will be targeted by two jumps on average.
To keep branches apart for a number $n$ of splittings, assume ${\cal Q}$ to be decomposable into $2^n$ subsets ${\cal J}[s]$, mutually disjoint, and each big enough to serve as an ensemble for a random draw. Let $s$ be a register of the form $$\label{Register}
s = [s_1,s_2,\ldots,s_n] ~~ \mbox{ where }~~ s_j \in \{ u,d \}$$ Then, for $k\in{\cal J}[s_1,s_2,\ldots,s_n]$, define the jumping address $u(k)$ as follows. $$\label{Defu(k)}
\begin{array}{l}
\mbox{Draw $k'$ at random from } {\cal J}[s_2,s_3,\ldots,s_n,u]. \\
\mbox{Put }u(k)\mbox{ at first point of quasiclassical section leading to }k'.
\end{array}$$ Likewise for $d(k)$. $$\label{Defv(k)}
\begin{array}{l}
\mbox{Draw $k'$ at random from } {\cal J}[s_2,s_3,\ldots,s_n,d]. \\
\mbox{Put }d(k)\mbox{ at first point of quasiclassical section leading to }k'.
\end{array}$$ The new entry, $u$ or $d$, will be in the register for $n$ subsequent splittings. Later on, the $u/d$ information is lost, allowing for inevitable loops to close. Given the observer’s lifetime $T$, and a minimal length $d_\mathrm{min}$ of quasiclassical sections, the parameter $n$ should be chosen as $n = T/d_\mathrm{min}$. The splitting addresses are collected in a set, $$\label{AkSuperposition}
{\cal A}_\mathrm{S}(k) = \{u(k),d(k)\}$$ Quasiclassical motion $k\to k+1$ and the branching $k\to(u,d)$ constitute the “orbital” factor of evolution which is to be described by a unitary operator $U_\mathrm{O}$. Under its action, images of orthonormal vectors must be orthonormal. Since loops are inevitable in the set of indices, orthogonality of images cannot be ensured by the states of the record-generating system alone, but can be accomplished by orthogonalities (blank vs. written) in the accompanying states of records[^2]. To this effect, certain records must be blank before the action of $U_\mathrm{O}$. Their addresses are $$\label{UQBlankRecords}
{\cal B}(k) = \mbox{union of all }{\cal A}_\mathrm{W}(l)\mbox{ with nonempty }
{\cal A}_\mathrm{S}(l) \cap {\cal A}_\mathrm{S}(k) \qquad k\in{\cal Q}$$ Using this, definitions (\[Defu(k)\]), (\[Defv(k)\]), and abbreviation (\[RecordStatesShorthand\]), we define $$\label{DefUO}
\begin{array}{ll}
\displaystyle
U_\mathrm{O}~\psi_k \otimes V({\cal B}(k)) =
\frac{ \psi_{u(k)} + \psi_{d(k)} }{\sqrt2}
\otimes \left(\prod_{i\in{\cal A}_\mathrm{W}(k)} W_i\right) V({\cal B}(k))
& \quad k\in{\cal Q} \\
\displaystyle
U_\mathrm{O}~\psi_k \otimes V({\cal A}_{\mathrm{W}}(k)) =
\psi_{k+1} \otimes
\left(\prod_{i\in{\cal A}_\mathrm{W}(k)} W_i\right) V({\cal A}_{\mathrm{W}}(k)) & \quad k\notin{\cal Q}
\end{array}$$ This is a mapping of orthonormal vectors onto orthonormal image vectors, so it can be extended to a definition of a unitary operator $U_\mathrm{O}$ by choosing any unitary mapping between the orthogonal complements of the originals and images. However, due to the loop-avoiding construction, only properties (\[DefUO\]) are required for the evolution of the initial states of section \[secInitialState\].
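The register mechanism that keeps branches apart can be made concrete in a small numerical sketch. The code below is purely illustrative (plain Python stands in for the abstract index sets, and all parameters are hypothetical): it partitions a splitting set into $2^n$ disjoint subsets ${\cal J}[s]$ keyed by register values, and draws jump targets as in (\[Defu(k)\]); two branches whose registers differ draw from disjoint subsets, so they cannot collide within $n$ splittings.

```python
import random
from itertools import product

def make_subsets(Q_indices, n, rng):
    # Partition the splitting set Q into 2**n disjoint subsets J[s],
    # one for each register value s in {u, d}**n (cf. the register s).
    registers = list(product("ud", repeat=n))
    Q_indices = list(Q_indices)
    rng.shuffle(Q_indices)
    size = len(Q_indices) // len(registers)
    return {s: Q_indices[i * size:(i + 1) * size]
            for i, s in enumerate(registers)}

def jump(s, choice, J, rng):
    # Shift the new branch label into the register and draw the next
    # splitting index at random from the matching subset J[s'].
    s_new = s[1:] + (choice,)
    return s_new, rng.choice(J[s_new])

rng = random.Random(0)
n = 3
J = make_subsets(range(8000), n, rng)

# Follow two branches that differ in their first choice: for the next
# n splittings their registers differ, so they draw from disjoint
# subsets and cannot rejoin (no loops within n splittings).
s_up = s_down = ("u",) * n
for choice_up, choice_down in [("u", "d"), ("u", "u"), ("d", "d")]:
    s_up, k_up = jump(s_up, choice_up, J, rng)
    s_down, k_down = jump(s_down, choice_down, J, rng)
    assert set(J[s_up]).isdisjoint(J[s_down])
print("branches stay in disjoint subsets")
```

Once the differing symbol has been shifted out of both registers, the subsets may coincide again, which is how the inevitable loops eventually close.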
Dating of records\[secDating\]
------------------------------
The time at which a record was written can be retrieved from information about its age. An ageing operator for the $i$th record, $U_\mathrm{A}(i)$, is defined by its action on the first tensorial factor of (\[DefRecordState\]) as follows. $$\label{DefUA(i)}
\begin{array}{l}
U_\mathrm{A}(i) e_0 = e_0 \\
U_\mathrm{A}(i) e_j = e_{j+1} \qquad j=1,\ldots,N-2 \\
U_\mathrm{A}(i) e_{N-1} = e_1
\end{array}$$ In particular, a record in blank state $e_0$ remains unchanged. Also, tensorial factors with indices different from $i$ are unaffected by $U_\mathrm{A}(i)$. At the limiting age, corresponding to $N$ steps of time, the age wraps around, so the dating provided by $U_\mathrm{A}(i)$ ceases to be unambiguous. The ageing operator for the entire system of recording units is $$\label{DefUA}
U_\mathrm{A} = \prod_{i=1}^I U_\mathrm{A}(i)$$ The ageing of records is conservative, without loss of information.
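On a single record’s age label, the action (\[DefUA(i)\]) is a permutation that leaves the blank state fixed and cycles the written ages. A minimal sketch (with an illustrative $N=5$) checks that the map is a bijection, which is what “conservative” means here:

```python
def age(state, N=5):
    # Ageing map (DefUA(i)) on the age label of one record:
    # blank (0) is fixed, ages 1..N-2 advance, N-1 wraps around to 1.
    if state == 0:
        return 0
    return state + 1 if state < N - 1 else 1

# The map is a bijection on {0, 1, ..., N-1}: no information is lost.
images = sorted(age(s) for s in range(5))
print(images)  # → [0, 1, 2, 3, 4]
```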
Explicit form of superposition \[secTheSuperposition\]
------------------------------------------------------
The operators constructed in (\[DefUO\]) and (\[DefUA\]) define the “objective” part of time evolution. They do not act on the second tensorial factor of a record (equation (\[DefRecordState\])). They are assumed to multiply in the order $$\label{Uobj}
U_\mathrm{O} U_\mathrm{A} = U_\mathrm{obj}$$ Starting from our preferred initial product state $|\mathrm{in}\rangle$ (section \[secInitialState\]), let us formulate the sequences $\{b_n\}$ of indices to which evolution branches under the action of $U_\mathrm{obj}$. If quasiclassical evolution has promoted the record-generating system to an index $k\in{\cal Q}$, branching to addresses $j\in{\cal A}_\mathrm{S}(k)$ occurs in the next step of time. A choice of $j$, here renamed $b_n$, characterises the branch. From $b_n$, quasiclassical evolution proceeds through numerically increasing indices until the next index in ${\cal Q}$ is reached, and branching to $b_{n+1}$ occurs. This takes a number of steps, $d_n$. The possible sequences of branching addresses $b_n$ and intervals $d_n$ of quasiclassical evolution must satisfy the following recursion relations. $$\label{BranchingSequence}
\begin{array}{rcl}
b_0 &=& k_\mathrm{in} \\
q_n & = & \min \big\{ q \in {\cal Q} ~ | ~ q > b_n \big\} \quad
\mbox{(auxiliary)} \\
b_{n+1} & \in & {\cal A}_\mathrm{S}(q_n) \\
d_n &=& q_n - b_n + 1
\end{array}$$ The time $t_n(b)$ at which a branching index $b_n$ is reached, depending on the branch considered, is $$\label{DefTau}
t_n(b) = \sum_{m=0}^{n-1} d_m(b)$$ For a convenient representation of the stages of evolution in various branches, let us use the following abbreviation. $$\label{integerTheta}
[t] = \left\{\begin{array}{ll} 0 & \quad t \leq 0 \\
t & \quad t > 0
\end{array} \right\} = t\,\Theta(t-\epsilon)$$ Moreover, let $p(b,t)$ be the number of branching points passed by time $t$. Referring to definitions (\[BranchingSequence\]) and (\[DefTau\]), the record-generating system then is in the state $$\label{psi(b,t)}
\psi_{k(b,t)} ~~ \mbox{with}~~ k(b,t) = b_{p(b,t)} + t - t_{p(b,t)}$$ Denoting by $V({\cal I})$ the state in which all records are blank, the evolved state after $t$ steps of time may be expressed as $$\label{UrwsPsi}
\left( U_\mathrm{obj}\right)^t |\mathrm{in}\rangle = \sum_b \left(\frac1{\sqrt2}\right)^{p(b,t)}
\psi_{k(b,t)} \otimes
\left( \prod_{n=0}^\infty \prod_{l=0}^{d_n-1}
~ \prod_{i\in{\cal A}_{\mathrm{W}}(b_n+l)} W_i^{[t - t_n - l]} \right) V({\cal I})$$ To see this, first note that once a record is written, its ageing is the same as repeated writing by (\[DefWrite\]) and (\[DefUA(i)\]). Writing operations can be assembled to powers because they commute (equation (\[Wcommuting\])). Thus, the linear rise of the powers with $t$ is the result of the ageing operator $U_\mathrm{A}$. It remains to consider the time of the first writing of a record. At time $t_n+l$, the record-generating system is in the state with index $b_n+l$. Corresponding records, with indices in ${\cal A}_{\mathrm{W}}(b_n+l)$, are written at the next step of time, that is, when the exponent $[t - t_n - l]$ of the writing operator is nonzero for the first time.
For later reference we note that the product vectors constituting different branches are orthogonal. This is because two branches differ by at least one record, so that there is at least one tensorial factor $r_i$ which is in the blank state in one branch and written, hence orthogonal, in the other.
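The recursion (\[BranchingSequence\]) can be mimicked numerically. The sketch below uses illustrative, hypothetical choices for $K$, ${\cal Q}$, and the jump addresses standing in for ${\cal A}_\mathrm{S}$ (which in the model are starting points of sections); it follows a single branch from a seed $k_\mathrm{in}$, recording the jump addresses passed:

```python
import random

rng = random.Random(1)
K = 10_000
Q = sorted(rng.sample(range(100, K - 100), 200))   # sparse splitting set (illustrative)
# Two jump addresses per splitting index, standing in for {u(q), d(q)};
# drawn from the lower half of the range so that a later splitting
# index always exists in this toy version.
A_S = {q: (rng.randrange(K // 2), rng.randrange(K // 2)) for q in Q}

def branch_history(k_in, steps):
    # Recursion (BranchingSequence): q_n is the next splitting index
    # after b_n, d_n the length of the quasiclassical section, and the
    # next b_{n+1} is a random choice between the two jump addresses.
    b, t, history = k_in, 0, [k_in]
    while t < steps:
        q_n = min(q for q in Q if q > b)
        d_n = q_n - b + 1
        t += d_n
        b = rng.choice(A_S[q_n])
        history.append(b)
    return history

hist = branch_history(k_in=50, steps=2_000)
print(f"{len(hist) - 1} splittings passed")
```

Averaged over many branches, the number of splittings passed per unit time is $Q/K$, the rate that reappears as the branching probability $\sigma$ later on.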
Consciousness modelled as triggered recall\[secRR\]
---------------------------------------------------
A third factor of the evolution operator is supposed to model reflections in the observer’s mind. Neurophysical detail is beyond the scope of this paper, but we follow the view that consciousness “supervenes” on neural *dynamics* [@Donald1999]. The other ingredient of the present model is power-law statistics, which appears to be common in neurophysics but is usually discussed in highly specialised contexts. Here it is essentially a working hypothesis.
As this part of the evolution operator is going to make its dominant impact near the end of an observer’s histories, it must be prevented from writing records of an objective sort, that is, from writing any records at all in the model’s terms. Otherwise, a scenario would result in which the facts constituting an observer’s histories would be generated within a step of time. Thus the factor $U_\mathrm{C}$, defined below, should be only reflective, like reading a record by elastic scattering. Activities like writing this article would be regarded as “subconscious”, that is, rather a matter of $U_\mathrm{O} U_\mathrm{A}$ within the scope of the model.
### Triggering records
Conscious reflection is assumed to be triggered by the reading of a record $r_m$ in a sparse index set ${\cal M} \subset {\cal I}$. Moreover, it is assumed for simplicity that $$\label{OneMemoryindexAW}
\mbox{for each }k\in{\cal O}\backslash{\cal Q}
\mbox{ there is exactly one such $m$ in } {\cal A}_\mathrm{W}(k)$$ If $r_m$ is blank, no reflection occurs. If $r_m$ is in a written state, a “scattering” operation $S_l$ will be triggered on all $r_l$ with indices in a set ${\cal A}_{\mathrm{R}}(m)$ specified in equation (\[DefAR\]) below.
Let $P^0_m$ denote the projection on the blank state of $r_m$, and $P^\perp_m$ the projection on all written states of $r_m$. We define the reflection triggered by $r_m$ as $$\label{DefUrr}
U_\mathrm{C}(m) = P^0_m + P^\perp_m \prod_{l\in{\cal A}_{\mathrm{R}}(m)} S_l
\qquad m \notin{\cal A}_{\mathrm{R}}(m)$$ with implicit unities for all tensor factors whose indices do not appear. A scattering operation $S$, in the indicated space, is assumed to modify the second factor of (\[DefRecordState\]) in a way dependent on the first factor, $$\label{DefRecall}
\begin{array}{l}
S_l ~ e_0 \otimes s = e_0 \otimes s \\
S_l ~ e_i \otimes s = e_{i} \otimes u_{li} s \qquad u_{li} \neq 1 \qquad i=1,\ldots,N-1
\end{array}$$ For records in a written state, all we assume about the unitary $2\times 2$ matrices $u_{li}$ is that they be different from $\bf 1$ so as to make “something” go on in the observer’s mind.
A crucial assumption concerns the statistics of the lengths $L({\cal A}_{\mathrm{R}})$ of the address sets ${\cal A}_{\mathrm{R}}$ in the random draws used for the construction. Let $\overline{F}(L)$ be the complementary cumulative distribution function, that is, the fraction of sets whose length is greater than $L$. We assume a capped Pareto distribution $$\label{AkPowerLaw}
\overline{F}_1(L) = \left\{\begin{array}{cr} (L_0/L)^\alpha & L_0 \leq L \leq I \\
0 & L > I \end{array} \right\} \qquad 1 < \alpha < 2$$ where the cap is assumed to be practically irrelevant due to the size of the index set $\cal I$. To ensure the statistical independence required for the theorems of order statistics to apply, let us construct the index sets by explicit use of independent random draws. In a first step, the lengths of sets are determined. $$\label{RandomDrawSize}
\mbox{For all $m\in{\cal M}$,} ~ L(m) = \mbox{random draw from distribution (\ref{AkPowerLaw})}$$ In a second step, $L(m)$ indices are selected by another random procedure, and collected into ${\cal A}_{\mathrm{R}}(m)$. The procedure is as follows.
### Searching for potential records \[RetrieveRecords\]
Operator $U_\mathrm{C}$ must select $L(m)$ records that *may* have been written during time evolution. In fact, if recall operations were searching for records irrespective of causal relations, the scenario envisioned would not work statistically. The search would be based on mere chance—on a *probability* proportional to $L(m)$, which would either have to be very small, or could not be power-law distributed, since probabilities are bounded above.
Tracing back histories that may have led to a memory index $m$, there emerges a backward-branching structure because there are, on average, two indices of ${\cal Q}$ from which quantum jumps are directed to a given section of quasiclassical evolution; see section \[secRS\]. Starting from the memory-triggering index $m$, all sequences $\{c^m_n\}_{n=0,1,\ldots}$ of branching points that may have led to the writing at $m$ must satisfy the following relations. $$\label{BackwardBranchingSequence}
\begin{array}{rcl}
c^m_0 &=& \{ k\in{\cal O} ~|~ m \in {\cal A}_\mathrm{W}(k)\} \\
j_n & = & 1 + \max \big\{ q \in {\cal Q} ~ | ~ q < c^m_n \big\} \quad
\mbox{(auxiliary\footnotemark)} \\
d^m_n &=& c^m_n - j_n \quad \mbox{(length of quasiclassical section)} \\
j_n & \in & {\cal A}_\mathrm{S}(c^m_{n+1}) \quad \mbox{(preceding points of branching)}
\end{array}$$ Indices $c^m_n,c^m_n-1,\ldots,c^m_n-d^m_n$ constitute the $n$th section on a branch of *possible* evolution. The average length of such a section is $K/Q$. We wish to distribute $L(m)$ conscious recalls equally over a lifetime. Hence there are $$\begin{aligned}
L(m)/T && \mbox{ recalls per time} \label{RecallsPerTime} \\
l(m) = K L(m)/ TQ && \mbox{ recalls per section} \label{effectiveL}\end{aligned}$$ Thus, for every sequence $c$ branching backward from $m$ and for every section number $n$ let us define $$\begin{array}{rl}
{\cal C}(m,c,n) ~ = & \mbox{set of $l(m)$ randomly chosen elements, not equal to $m$,} \\ & \mbox{of }
{\cal A}_\mathrm{W}(c^m_n) \cup{\cal A}_\mathrm{W}(c^m_n-1) \cup \cdots
\cup {\cal A}_\mathrm{W}(c^m_n-d^m_n)
\end{array}$$ In terms of ${\cal C}(m,c,n)$ we can specify the index sets already used in (\[DefUrr\]). $$\label{DefAR}
{\cal A}_\mathrm{R}(m) = \bigcup_{n=0}^{TQ/K} ~ \bigcup_{\{c\}} ~ {\cal C}(m,c,n)$$ The full consciousness-generating part of the evolution operator is, referring to (\[DefUrr\]) again, $$\label{Urecall}
U_\mathrm{C} = \prod\limits_{m\in{\cal M}} U_\mathrm{C}(m)$$
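The random draw (\[RandomDrawSize\]) from the capped Pareto law (\[AkPowerLaw\]) can be implemented by inverse-transform sampling; the sketch below (with illustrative values of $L_0$, $\alpha$, and the cap) checks the empirical tail fraction against the prediction $(L_0/L)^\alpha$:

```python
import random

def draw_length(L0, alpha, cap, rng):
    # Inverse-transform sampling of the capped Pareto distribution
    # (AkPowerLaw): P{L > x} = (L0/x)**alpha for L0 <= x <= cap.
    # As in the text, the cap is assumed to be practically irrelevant.
    u = 1.0 - rng.random()            # uniform in (0, 1]
    return min(L0 * u ** (-1.0 / alpha), cap)

rng = random.Random(2)
alpha, L0, cap = 1.5, 10.0, 1e12      # illustrative parameters
draws = [draw_length(L0, alpha, cap, rng) for _ in range(200_000)]

# Empirical check of the tail: the fraction of draws exceeding 10*L0
# should be close to (1/10)**alpha ≈ 0.0316.
frac = sum(d > 10 * L0 for d in draws) / len(draws)
print(f"empirical tail fraction {frac:.4f}, predicted {0.1 ** alpha:.4f}")
```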
Branch of extremal consciousness\[secExtremalBranch\]
=====================================================
By $T$ steps of evolution, a superposition of product states builds up, which in equation (\[UrwsPsi\]) was expressed as a sum over branches, each branch being generated by a product of writing operations. A one-to-one correspondence to a branching tree can be seen by factoring out the $W$ operations of the parts common to several branches. The loop-avoiding construction of section \[secRS\] is important here.
On the branching tree, certain memory-triggering records are in a written state. One of those records will trigger the maximal number of recalls, whose excess we wish to quantify statistically. It would be straightforward to estimate the excess on the basis of mean values alone, similar to the argument given in [@Polley2008], but fluctuations in branching processes are as big as the mean values [@Harris1963], so analysis in terms of probability distributions is required.
Statistics of branching and recall-triggering
---------------------------------------------
The general theory of Galton-Watson processes [@Harris1963] deals with family trees whose members are grouped in generations $n=1,2,3,\ldots$. Each member generates a number $j=0,1,\ldots$ of members of the next generation with probability $p_j$. In our model, a new generation occurs at each step of time. The number of members in a generation, $Z_t$, is the number of product states superposed at time $t$. The probability $p_0$, corresponding to an end of a branch of the observer’s history, is zero within the lifetime $T$ considered. The probability $p_1$, corresponding to a product state continuing as a product state after a step of time, is close to one. The probability $p_2$, corresponding to the splitting of a branch into a superposition of two product states, is small but nonzero. Probabilities $p_3,p_4,\ldots$ are zero by the model assumptions.
In our model, splitting in two branches occurs at $Q$ randomly distributed points of $K$, so the parameters for the branching process here are $$\label{BranchingParameters}
p_2 = \frac{Q}{K} = : \sigma ~~~~~~~~~ p_1 = 1 - p_2 ~~~~~~~~~~
p_j = 0 \mbox{ for }j = 0,3,4,5,\ldots$$ The mean number of offspring generated by a member thus is $$\label{DefBranchingMeanValue}
\mu = 1 + \sigma > 1$$ Because of $p_0=0$, we are dealing with zero “probability of extinction”.
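A quick Monte-Carlo check of this branching law (with illustrative values $\sigma = 0.01$, $T = 200$) confirms that the mean generation size grows as $\mu^T$, while individual runs fluctuate on the scale of the mean:

```python
import random

def simulate_branching(sigma, T, rng):
    # Galton-Watson process with offspring probabilities p1 = 1 - sigma,
    # p2 = sigma (BranchingParameters): every branch continues, and with
    # probability sigma splits in two, at each step of time.
    Z, Y = 1, 1          # generation size Z_t; total progeny incl. initial member
    for _ in range(T):
        Z += sum(rng.random() < sigma for _ in range(Z))
        Y += Z
    return Z, Y

rng = random.Random(3)
sigma, T = 0.01, 200                  # illustrative parameters
trials = [simulate_branching(sigma, T, rng) for _ in range(500)]
mean_Z = sum(Z for Z, _ in trials) / len(trials)
print(f"mean Z_T = {mean_Z:.2f}, predicted mu**T = {(1 + sigma) ** T:.2f}")
```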
For the statistics of the extremes, we need to know the total number of recall-triggering factors on the tree. By assumption (\[OneMemoryindexAW\]) that number equals the “total progeny” $Y_t = \sum_{\tau=1}^t Z_\tau$. By Theorem 6 of [@Pakes1971], the probability distribution for the values of $Y_t$ has an asymptotic form which can be described as follows. There exists a sequence of positive constants $C_t$, $t=1,2,\ldots$, with $C_{t+1}/C_t \to \mu $ for $t\to\infty$ such that $$\label{YtProbability}
\lim_{t\to\infty} P\{Y_t \leq x C_t\} = P\{W\leq x\sigma/\mu\}$$ where $W$ is a non-degenerate random variable which has a continuous distribution on the set of positive real numbers. Let $w$ be the probability density for $W$. We shall treat $Y$ as continuous, too, and assume that by an observer’s lifetime $T$ the limiting form of (\[YtProbability\]) already applies. Then the probability of $Y$ is, by differentiating (\[YtProbability\]) and using $\mu\approx 1$, $$\label{DensityMemorisingIndices}
w\left(\frac{\sigma Y}{C_T}\right)\,\frac{\sigma}{C_T} \, \mathrm{d}Y$$ Each occurrence of a memory-triggering index $m$ is characterised by the location on the tree, in particular the time $t$, and by the length $L(m)$ of the recalling sequence according to (\[RandomDrawSize\]). Since the location results from random draws in $U_\mathrm{O}$, and the length from a random draw in $U_\mathrm{C}$, they are statistically independent, so their joint probability is the product of the separate probabilities. The time $t$ of occurrence, that is the generation number in the general theory, has a probability $Z_t/Y_t$ whose asymptotic form, under the same conditions as for (\[YtProbability\]), is given by Lemma 2.2 of [@Pakes1998]. If $j = 0,1,2,\ldots$ denotes the distance from the latest time on the tree, the probability is $$P_j = (1-\mu^{-1}) \mu^{-j}$$ Taking the latest time on the branching tree to be $T$, and treating $L$ as continuous, $P_j$ and the Pareto distribution (\[AkPowerLaw\]) give the joint probability of $t$ and $L$, $$\label{JointProbabilityLt}
P(t,L)\, \mathrm{d}L = P_{T-t} \,\alpha L_0^\alpha L^{-\alpha - 1} \, \mathrm{d}L
\qquad 0\leq t \leq T, ~ L \geq L_0$$ If the memory-triggering $m$ occurs at $t$, then by (\[RecallsPerTime\]) the number of records recalled is $$\label{defR}
R = \frac{t}{T}\,L(m)$$ It is the extreme-order statistics of this quantity that matters. The density of $R$ is obtained by taking the expectation of $\delta(R - tL/T)$ with the probability distribution (\[JointProbabilityLt\]). In the range $R>L_0$ this gives another Pareto distribution with complementary cumulative distribution function $$\label{DensityIndicesRecalled}
\overline{F}_2(R) = (R_0/R)^\alpha ~ \mbox{ where } ~ R_0^\alpha =
L_0^\alpha \sum_{t=0}^T P_{T-t} \left(\frac{t}{T}\right)^\alpha$$ Thus, with probability given by (\[DensityMemorisingIndices\]), we have a number $Y$ of memory-triggering indices on the branching tree of a lifetime, each of which with a probability given by (\[DensityIndicesRecalled\]) induces $R$ recalls along its branch.
We now use a result of order statistics, conveniently formulated for our purposes in [@Embrechts2003], table 3.4.2 and corollary 4.2.13, which relates the number of random draws, here $Y$ (different letters used in [@Embrechts2003]), to the spacing $D$ between the largest and the second-largest draw of $R$ from an ensemble given by (\[DensityIndicesRecalled\]). $$\label{FrechetSeparation}
D = R_{\mbox{\small largest}} - R_{\mbox{\small second-largest}} = R_0 \, Y^{1/\alpha} \, X$$ where $X$ is a random variable independent of $Y$. The probability density $g(x)$ of $X$, given in integral representation, can be seen to be uniformly bounded. The cumulative distribution function for $D$ is, for a given value of $Y$, of the form $G\left(Y^{-1/\alpha}D/R_0\right)$ where $G'(x)=g(x)$. Hence, the joint probability of $Y$ and $D$, expressed by density (\[DensityMemorisingIndices\]) for $Y$ and the cumulative distribution function for $D$, is $$\label{JointProbabilityYD}
G\left(Y^{-1/\alpha}D/R_0\right)
w\left(\frac{\sigma Y}{C_T}\right)\,\frac{\sigma}{C_T}\,\mathrm{d}Y$$
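The $Y^{1/\alpha}$ scaling of the spacing $D$ in (\[FrechetSeparation\]) can itself be checked by simulation. The sketch below (illustrative $R_0$ and $\alpha$) compares the median spacing between the two largest of $Y$ Pareto draws for two ensemble sizes; the prediction is growth by a factor $(8000/1000)^{1/\alpha} = 4$:

```python
import random

rng = random.Random(4)
R0, alpha = 1.0, 1.5                  # illustrative parameters

def pareto_draw():
    # P{R > x} = (R0/x)**alpha, drawn by inverse transform.
    return R0 * (1.0 - rng.random()) ** (-1.0 / alpha)

def spacing(Y):
    # D = R_largest - R_second_largest among Y draws.
    draws = sorted(pareto_draw() for _ in range(Y))
    return draws[-1] - draws[-2]

# Median spacing for two ensemble sizes; (FrechetSeparation) predicts
# the ratio of medians to be (8000/1000)**(1/alpha) = 4.
medians = {}
for Y in (1_000, 8_000):
    samples = sorted(spacing(Y) for _ in range(400))
    medians[Y] = samples[200]
print(f"ratio of median spacings ≈ {medians[8_000] / medians[1_000]:.2f}")
```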
Dimension of conscious subspace\[secConsciousDimension\]
--------------------------------------------------------
Consciousness, in the model’s approximation, is assumed to reside in unitary rotations $u\neq 1$ of the right tensor factors of (\[DefRecall\]). Transformations of the left factors, as generated by $U_\mathrm{obj}$, are assumed to be unconscious. In the superposition generated by $U_\mathrm{obj}$, equation (\[UrwsPsi\]), the branches (product vectors) are mutually orthogonal by the “objective” left factors alone, as was noted at the end of section \[secTheSuperposition\]. Hence, the unitary rotations of consciousness take place in subspaces which, for different branches, are orthogonal. Thus, a “conscious dimension” $d_\mathrm{C}$ can be assigned to each branch. $$\label{DefConsciousDimension}
d_\mathrm{C} ~ = ~ \parbox[t]{100mm}{Hilbert-space dimension of the tensor factors rotating under
$U_\mathrm{C}$ while the remainder of factors is constant.}$$ The number of tensor factors rotating in a branch is $R$, as defined in equation (\[defR\]), so the dimension of the conscious subspace in a branch is $2^R$. It should be noted that the subspace as such *depends* on the vectorial value taken by the nonrotating factors.
The proposition of the paper is that the conscious dimension in the branch with the largest $R$ exceeds, by a huge factor $E$, the sum of conscious dimensions in all other branches. The latter sum can be estimated, denoting by $Z_T$ the number of branches (terms of superposition) at time $T$, as $$< ~ 2^{R_{\mbox{\small second-largest}}} Z_T$$ Evaluating this would require a joint distribution of $R$, $Y$, and $Z$, so a more convenient estimate, using $Z_T < Y_T$, is $$< ~ 2^{R_{\mbox{\small second-largest}}} Y_T$$ Taking binary logarithms, the customised proposition is that the last term in the equation $$\label{PropositionCustomised}
R_{\mbox{\small largest}} = R_{\mbox{\small second-largest}} + \log_2 Y + \log_2 E$$ almost certainly takes a large value. By (\[FrechetSeparation\]), $\log_2 E = D - \log_2 Y$, so the relevant cumulative distribution function is obtained from (\[JointProbabilityYD\]) as $$\label{defF3}
F_3(x) = P\{D - \log_2 Y < x \} = \int_1^\infty \!\! G\Big(Y^{-1/\alpha}(x + \log_2 Y)/R_0\Big)
w\left(\frac{\sigma Y}{C_T}\right)\,\frac{\sigma}{C_T} \,\mathrm{d}Y$$ Substituting $$\label{DifferenceRescaled}
\frac{\sigma Y}{C_T} = y ~~~~~~~~~~~~ \left(\frac{\sigma}{C_T}\right)^{1/\alpha} \frac{x}{R_0} = z$$ and putting $\sigma/C_T\approx 0$ in the lower limit of integration, the integral becomes $$\int_0^\infty G\left(y^{-1/\alpha} z + \left(\frac{\sigma}{C_T}\right)^{1/\alpha} R_0^{-1}
y^{-1/\alpha}(\log_2 y + \log_2 C_T - \log_2\sigma)\right)
w(y) \,\mathrm{d}y$$ Asymptotically for $C_T\to\infty$, which represents an exponentially grown number of branches, the integral simplifies to $$F_3(x) = \int_0^\infty G\left(y^{-1/\alpha} z\right) w(y) \,\mathrm{d}y$$ because $G$ has the uniformly bounded derivative $g$ (see text following (\[FrechetSeparation\])) while $\int_0^\infty y^{-1/\alpha} \log_2 y \, w(y) \,\mathrm{d}y$ converges for $\alpha$ in the range given by (\[AkPowerLaw\]) and the coefficients $C_T^{-1}$ and $C_T^{-1}\log_2 C_T$ become vanishingly small. Inserting $z$ from (\[DifferenceRescaled\]), and $x=\log_2 E$ from (\[defF3\]), the cumulative distribution function for the excess factor $E$ is given by $$\label{cdfH}
F_3\left(\left(\frac{\sigma}{C_T}\right)^{1/\alpha}R_0^{-1} \log_2 E\right)$$ Because it is the logarithm of $E$ that is rescaled here, by the large factor $(C_T/\sigma)^{1/\alpha}R_0$ which greatly broadens the distribution, $E$ almost certainly takes a huge value, rather independently of the exact form of the distribution function $F_3$.
Complying with Born’s rule\[Born\]
==================================
In section \[secRS\] the wavefunction is modelled to split in two branches with equal amplitudes. Born’s rule is trivially satisfied in this case. Does the model generalise correctly to a splitting with unequal amplitudes? Technically, this is accomplished by a unitary transformation devised in [@Zurek1998; @Deutsch1999; @Zurek2002] which entangles a two-state superposition with a large number of auxiliary states so as to form another equal-amplitude superposition. It will be argued that in this way the model scenario is consistent with Born’s rule in general.
The crucial point here is that “if you believe in determinism, you have to believe it all the way” [@tHooft2011]. When an observer encounters a wavefunction for a measurement, like $$\label{SystemObserverBefore}
a |A\rangle + b |B\rangle$$ that wavefunction is given to him by the total operator of evolution. The operator is thus only required to handle wavefunctions that it provides itself. Extending the model accordingly would be based on the following considerations.
In the course of measurement, a result $A$ or $B$ is obtained, but it always comes with many irrelevant properties of the constituents, like the number of photons scattered off the apparatus, the number of observer’s neurons firing, etc. Let $n$ be the number of irrelevant properties to be taken into account. Then, after the measurement, we have a state vector of the form $$\label{Result+Irrelevants}
\sum_{k=1}^m c_k |A,k\rangle + \sum_{k=m+1}^n c_k |B,k\rangle$$ The measuring evolution should commute with the projections on the spaces defined by $A$ and $B$, so we have constraints on the absolute values, $$\label{ckConstraint}
\sum_{k=1}^m |c_k|^2 = |a|^2 \qquad \qquad \sum_{k=m+1}^n |c_k|^2 = |b|^2$$ On the other hand, state vectors differing only in the phases of the $c_k$ can be regarded as equivalent for the measurement process, as has been shown by different arguments in [@Zurek1998] and [@Deutsch1999], and elaborately in [@Zurek2002].
Since the $k$-properties in (\[Result+Irrelevants\]) are “irrelevant”, we expect the evolution to produce a state belonging to the equivalence class with the largest number of representatives. A measure of this number is given by the surface element in the space of $n$-dimensional normalised states $$\delta\left(1-\sum_{k=1}^n |c_k|^2\right) \prod_{k=1}^n \mathrm{d}^2 c_k$$ which is defined uniquely, up to a constant, by its invariance under unitary changes of basis for the span of the vectors. The number of representatives is obtained by integrating over the phases, which gives $$\label{ckSurface}
\delta\left(1-\sum_{k=1}^n |c_k|^2\right) \prod_{k=1}^n 2\pi|c_k|\mathrm{d}|c_k|$$ At the maximum, all moduli must be nonzero because of the $|c_k|$ factors. It follows by the permutation symmetry of constraints (\[ckConstraint\]) that $$|c_k| = \frac{|a|}{\sqrt{m}} \quad k=1,\ldots,m \qquad
|c_k| = \frac{|b|}{\sqrt{n-m}}\quad k=m+1,\ldots,n$$ Finally, extremising the product of moduli in (\[ckSurface\]) with respect to the parameter $m$, we find $$\label{ckSpecified}
|c_k| = \frac1{\sqrt{n}} ~~ \mbox{ for all }k$$ So the number $m$ of branches with property $A$ equals $|a|^2 n$. A similar argument, with a discussion of fluctuations about the maximum, was given for a different scenario in [@Polley2005]. Equations (\[Result+Irrelevants\]) and (\[ckSpecified\]), in conjunction with an arbitrary choice of phases, like $c_k=\sqrt{1/n}$, now specify the state vector that an evolution operator for a measurement should generate.
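The counting argument can be checked in a small numerical sketch (with an illustrative $|a|^2 = 0.3$ and $n = 1000$): with all moduli equal to $1/\sqrt{n}$ as in (\[ckSpecified\]), the constraints (\[ckConstraint\]) force $m = |a|^2 n$, and the Born weights reappear as branch-counting fractions.

```python
import math

def equal_amplitude_split(a, n):
    # Equal-moduli fine-graining (ckSpecified): all n branch amplitudes
    # are 1/sqrt(n); the constraint (ckConstraint) then fixes the number
    # of A-branches to m = |a|**2 * n.
    m = round(abs(a) ** 2 * n)
    c = [1.0 / math.sqrt(n)] * n
    return m, c

a, b, n = math.sqrt(0.3), math.sqrt(0.7), 1000
m, c = equal_amplitude_split(a, n)

# Born weights recovered as branch-counting fractions:
weight_A = sum(x ** 2 for x in c[:m])
weight_B = sum(x ** 2 for x in c[m:])
print(f"m = {m}, weight A ≈ {weight_A:.3f}, weight B ≈ {weight_B:.3f}")
```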
Instead of splitting into “up” and “down” branches we now have splitting into $n$ branches, of which $m$ correspond to result $A$ of the measurement, and $n-m$ to result $B$. Information about $A$ or $B$ can be regarded as implicit in the indices of the records, so parameter $m$ need not even appear. In the equations of section \[secConstruction\] the following replacements have to be made. In (\[DefUO\]), $\psi_u+\psi_d$ extends to a sum over $n$ equal parts. In (\[Register\]) the entries $u,d$ of the register change to $1,\ldots,n$, and the same holds for the address sets (\[AkSuperposition\]). Normalisation factors change from $\sqrt{1/2}$ to $\sqrt{1/n}$ in equations (\[DefUO\]) and (\[UrwsPsi\]). In section \[secExtremalBranch\] the nonzero branching probabilities become $p_n \ll 1$ and $p_1 = 1-p_n$, and all powers of 2 change to powers of $n$.
Conclusion\[Conclusion\]
========================
The role of the observer’s consciousness in the projection postulate has been pondered since the beginnings of quantum theory [@vNeumann1932]. The model presented here is a proposal of how the idea might be realised, albeit with modifications, in a framework of unitary quantum-mechanical evolution. In the model scenario, the projection involved is on the conscious subspace at the time of the extremal draw as defined in section \[secConsciousDimension\]. But this projection assigns an outcome, “up” or “down”, not only to a single measurement (or quantum event, as it was called here) but to all measurements of an observer’s lifetime. Moreover, an assumption on statistics in the dynamics of consciousness was crucial to the functioning of the model. It might be objected, then, that we have only replaced the projection postulate with a statistical postulate based on speculation about consciousness. But there are well-known mechanisms for generating power-law statistics, some of which may be adaptable so as to replace the postulate by the workings of an extended model. For the present model, it was also important to demonstrate that the preconditions of extreme-order theorems can be realised *exactly* in a framework of unitary evolution. This was made obvious by employing random draws in the construction of the time-step operator.
A rather complicated part in that construction, section \[RetrieveRecords\], was to identify by backward branching the records that had a chance to be written before a given record. This might suggest to alternatively *accumulate* the extremal value in the course of evolution. But “when the sum of \[…\] independent heavy-tail random variables is large, then it is very likely that only one summand is large” [@Barbe2004], so the alternative approach is very likely to reduce to the one already taken.
A scenario of consciousness, all generated by one extremal event in a short interval of time, may also contribute to improving the physical notion of the metaphysical present. In the physical approach, time is parameterised by the reading of a clock, and it is possible to quantify the time intervals of cognitive processing. But outside the science community, such an approach is often felt to represent the existential quality of a moment inadequately. The model scenario would suggest a more complex relation to physical time. In physical terms it is indeed a moment, but since it covers all individual experience, the moment appears to be lasting.
The undifferentiated usage of “consciousness” in this paper will be unsatisfactory from a biological or psychological point of view, although consciousness as a processing of memories was discussed first in those fields [@Edelman1989]. For the present purpose, the meaning of the term was defined in section \[secConsciousDimension\]. As a consequence, activities of an observer which are conscious in the usual sense had to be regarded as “unconscious”. Apparently, various levels of consciousness should be taken into account by an extended model.
[10]{}
P. Embrechts, C. Klüppelberg, and T. Mikosch. . Springer, Berlin, 4th edition, 2003.
C. J. Isham. Quantum logic and the histories approach to quantum theory. , 35:2157–2185, 1994.
R. Blume-Kohout and W. H. Zurek. Quantum [D]{}arwinism: [E]{}ntanglement, branches, and the emergent classicality of redundantly stored quantum information. , 73:062310, 2006.
H. Everett. . University Press, Princeton (N.J.), 1973. edited by B. S. DeWitt and N. Graham.
W. H. Zurek. Pointer basis of quantum apparatus: Into what mixture does the wave packet collapse? , 24:1516–1525, 1981.
M. Schlosshauer. . Springer, Berlin, 2007.
R. Penrose et al. . Cambridge University Press, Cambridge, 1997. edited by M. Longair.
M. J. Donald. Progress in a [M]{}any-[M]{}inds [I]{}nterpretation of [Q]{}uantum [T]{}heory. arXiv:quant-ph/9904001.
M. J. Donald. On [M]{}any-[M]{}inds [I]{}nterpretations of [Q]{}uantum [T]{}heory. arXiv:quant-ph/9703008.
M. V. Simkin and V. P. Roychowdhury. Re-inventing [W]{}illis. arXiv:physics/0601192.
L. Polley. Extreme-value statistics of dimensions determining an observer’s branch of the world? arXiv:0807.0121.
Th. E. Harris. . Springer, Berlin, 1963.
A. G. Pakes. Some limit theorems for the total progeny of a branching process. , 3:176–192, 1971.
A. G. Pakes. Extreme [O]{}rder [S]{}tatistics on [G]{}alton-[W]{}atson [T]{}rees. , 47:95–117, 1998.
W. H. Zurek. Decoherence, einselection and the existential interpretation (the rough guide). , 356:1793–1821, 1998.
D. Deutsch. Quantum theory of probability and decisions. , 455:3129, 1999.
W. H. Zurek. Environment-assisted invariance, causality, and probabilities in quantum physics. , 90:120404, 2003.
G. ’t Hooft. How a wave function can collapse without violating [S]{}chrödinger’s equation, and how to understand [B]{}orn’s rule. arXiv:1112.1811, 2011.
L. Polley. “[M]{}easurement” by neuronal tunneling: [I]{}mplications of [B]{}orn’s rule. arXiv:quant-ph/0504092.
J. von Neumann. . Springer, Berlin, 1932.
Ph. Barbe and M. Broniatowski. Blowing number of a distribution for a statistics and loyal estimators. , 69:465–475, 2004.
G. M. Edelman. . Basic Books, New York, 1989.
[^1]: The standard mechanism for generating such a distribution is a supercritical chain reaction stopped by an event with a constant rate of incidence [@SimkinRoychowdhury2006]. Phenomenologically, power-law distributions are not uncommon in neurophysics, but it seems they are always discussed in highly specialised contexts.
[^2]: There is a rudiment of decoherence in this model.
---
abstract: 'The interest in deep learning methods for solving traditional signal processing tasks has been steadily growing in recent years. Time delay estimation (TDE) in adverse scenarios is a challenging problem, where classical approaches based on generalized cross-correlations (GCCs) have been widely used for decades. Recently, the frequency-sliding GCC (FS-GCC) was proposed as a novel technique for TDE based on a sub-band analysis of the cross-power spectrum phase, providing a structured two-dimensional representation of the time delay information contained across different frequency bands. Inspired by deep-learning-based image denoising solutions, we propose in this paper the use of convolutional neural networks (CNNs) to learn the time-delay patterns contained in FS-GCCs extracted in adverse acoustic conditions. Our experiments confirm that the proposed approach provides excellent TDE performance while being able to generalize to different room and sensor setups.'
address: |
$^{\star}$ Dipartimento di Elettronica, Informazione e Bioingegneria - Politecnico di Milano, via Ponzio 34/5 - 20133 Milano, Italia\
$^{\dagger}$ Departament d’Informàtica, Universitat de València, 46100 Burjassot, Spain
bibliography:
- 'strings.bib'
- 'refs.bib'
title: 'Time Difference of Arrival Estimation from Frequency-Sliding Generalized Cross-Correlations Using Convolutional Neural Networks'
---
Time delay estimation, GCC, Convolutional Neural Networks, Localization, Distributed microphones
Introduction {#sec:intro}
============
The estimation of the Time Difference of Arrival (TDoA) between the signals acquired by two microphones is relevant for many applications dealing with the localization, tracking and identification of acoustic sources. The Generalized Cross-Correlation (GCC) with Phase Transform (PHAT) [@knapp1976generalized] has been widely used for this problem and is regarded as a robust method for TDoA estimation in noisy and reverberant environments. Nonetheless, several problems arise in such scenarios, leading to TDoA errors caused by spurious GCC peaks arising from reflective paths, excessive noise at some time instants, or other unexpected interferers. To mitigate these problems, the authors recently proposed the Frequency-Sliding GCC (FS-GCC), an improved GCC-based method for robust TDoA estimation [@Cobos2019FSGCC]. The FS-GCC is based on the analysis of the cross-power spectrum phase in a sliding window fashion, resulting in a set of sub-band GCCs that capture the time delay information contained in different frequency bands. Such an analysis yields a complex matrix, constructed by stacking all the sub-band GCCs, which can later be processed to obtain a reliable GCC representation, for example by means of rank-one approximations derived from the Singular Value Decomposition (SVD). This paper proposes an alternative processing scheme for FS-GCC representations based on Deep Neural Networks (DNNs).
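As a concrete point of reference, here is a minimal numpy sketch of the GCC-PHAT baseline on synthetic white-noise signals. The sampling rate, delay, and noise level are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcc_phat(x1, x2):
    """Estimate the delay of x2 relative to x1 (in samples) via GCC-PHAT."""
    n = 2 * len(x1)                        # zero-pad to avoid circular wrap
    cps = np.fft.rfft(x2, n) * np.conj(np.fft.rfft(x1, n))
    cps /= np.abs(cps) + 1e-12             # PHAT weighting: keep phase only
    cc = np.fft.irfft(cps, n)
    cc = np.concatenate((cc[-len(x1):], cc[:len(x1)]))   # center zero lag
    return int(np.argmax(cc)) - len(x1)

fs, true_delay = 16000, 23                 # hypothetical setup
s = rng.standard_normal(4096)
x1 = s + 0.05 * rng.standard_normal(4096)
x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(4096)
est = gcc_phat(x1, x2)
print(est, est / fs)                       # delay in samples and in seconds
```

The PHAT weighting whitens the cross-power spectrum, which is what makes the estimator robust to coloration but also what lets spurious peaks appear under strong reverberation, motivating the FS-GCC.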
In recent years, several approaches have been proposed for acoustic source localization using DNNs. Most published methods focus on estimating the Direction of Arrival (DoA) of one or multiple sources considering different kinds of DNN inputs, including magnitude [@NelsonYalta2017] and phase [@chakrabarty2017broadband; @adavanne2018direction] information from Short-Time Fourier Transform (STFT) coefficients, beamforming-related features [@Salvati2018ExploitingCF], MUSIC eigenvectors [@takeda2016discriminative; @takeda2016sound] and GCC-based features [@xiao2015learning; @he2018deep]. End-to-end localization approaches accepting raw audio inputs have also been proposed, using binaural signals for DoA estimation [@vecchiotti2019end] or multi-channel microphone signals for estimating the three-dimensional source coordinates [@vera2018towards]. Fewer works address directly the problem of TDoA estimation. In [@houegnigan], a multilayer perceptron operating on the raw signals from a pair of sensors was proposed. More recently, the use of recurrent neural networks to exploit the temporal structure and time-frequency masking was proposed in [@pertila2019time], accepting log-mel spectrograms and GCC-PHAT features as input.
This work combines the structured information extracted by the FS-GCC with Convolutional Neural Networks (CNNs) to address the TDoA estimation problem in adverse acoustic scenarios. The network is designed as a convolutional denoising autoencoder that acts both as a denoising and as a dereverberation system. The network is trained using two-dimensional inputs corresponding to the magnitude of the FS-GCC matrices extracted from simulated data in noisy and reverberant environments, while the target outputs are defined as the equivalent matrices under ideal anechoic conditions. We demonstrate the effectiveness of our method by showing performance results in two different rooms over a range of reverberation conditions. The paper is structured as follows. Section \[sec:signal\_model\_and\_background\] describes the signal model and the GCC and FS-GCC approaches. Section \[sec:proposed\_method\] presents the proposed method, while Section \[sec:results\] describes the results. Finally, Section \[sec:conclusions\] draws some conclusions.
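To make the FS-GCC input representation concrete, the following sketch builds the stack of sub-band PHAT GCCs by sliding a rectangular window over the cross-power spectrum phase. The window length, hop, and synthetic delay are arbitrary illustrative choices and need not match the exact construction of [@Cobos2019FSGCC].

```python
import numpy as np

rng = np.random.default_rng(1)

def fs_gcc(x1, x2, win_len=64, hop=32):
    """Stack of sub-band PHAT GCCs from a window sliding over frequency.

    Returns a complex matrix with one sub-band GCC per row; its magnitude
    is the kind of 2-D input fed to the CNN described in the text.
    """
    n = 2 * len(x1)
    cps = np.fft.rfft(x2, n) * np.conj(np.fft.rfft(x1, n))
    cps /= np.abs(cps) + 1e-12                 # PHAT: unit-modulus phase
    rows = []
    for start in range(0, len(cps) - win_len + 1, hop):
        band = np.zeros_like(cps)
        band[start:start + win_len] = cps[start:start + win_len]
        rows.append(np.fft.ifft(band, n))      # one sub-band GCC
    return np.array(rows)

d = 11                                         # synthetic integer delay
core = rng.standard_normal(2048 - d)
x1 = np.concatenate((core, np.zeros(d)))       # x2 is x1 delayed by d samples
x2 = np.concatenate((np.zeros(d), core))
M = fs_gcc(x1, x2)
print(M.shape)                                 # (number of sub-bands, lags)
```

In this noise-free example the magnitude of the matrix shows a ridge at the true lag across all sub-bands; under noise and reverberation some rows degrade, which is the pattern the CNN is trained to clean up.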
---
abstract: 'The absorption of surface acoustic waves (SAW’s) by an array of quantum dots in which the mean level spacing $\Delta$ is larger than the sound frequency $\omega$, the temperature $T$, and the phase breaking rate $\tau^{-1}_\phi$ is considered. The direct and the intra-level (Debye) contributions to the SAW attenuation coefficient $\Gamma$ are evaluated, and it is shown that the sensitivity to weak magnetic fields and spin-orbit scattering (“weak localization effects”) is dramatically enhanced as compared to the case of a continuous spectrum, $\Delta < \tau^{-1}_\phi$. It is argued that the non-invasive measurement of $\Gamma$ represents a new tool for the investigation of the temperature dependence of the energy relaxation rate, $\tau^{-1}_\epsilon$, and the phase breaking rate, $\tau^{-1}_\phi$, of isolated electronic systems.'
author:
- 'Andreas Knäbchen, Ora Entin-Wohlman, Yuri Galperin, Yehoshua Levinson'
title: |
Absorption of surface acoustic-waves by quantum dots:\
discrete spectrum limit
---
Very recently, surface acoustic waves (SAW’s) have been used to study mesoscopic systems [@Shilton96; @Tilke96; @Nash96]. In ref. [@Shilton96], the direct acousto-electric current induced by a SAW through a single quantum point contact, which can be considered as a quasi-one-dimensional channel of length 0.5 $\mu$m, has been observed. In refs. [@Tilke96] and [@Nash96], the SAW attenuation and the change of the sound velocity due to arrays of quantum wires and quantum dots patterned in a two-dimensional electron gas (2DEG) have been measured as a function of a quantizing magnetic field. The experiments reported in ref. [@Nash96] include dots with a lithographic size as small as 250 nm, their electronic size being probably much smaller. The measurements of refs. [@Tilke96] and [@Nash96] were done on isolated dots and wires, [*i.e.*]{}, no current-carrying contacts were attached to the mesoscopic systems. This preserves the phase coherence of the electronic wave functions and eliminates Coulomb blockade effects. Thus, the SAW method provides a novel tool for non-invasive investigations of nanostructures. Experimental studies of isolated mesoscopic systems may also use the polarizability of, or the microwave absorption due to small metal particles, as recently discussed in refs. [@Efetov96; @Noat96] and [@Zhou96], respectively.
In this Letter we calculate the attenuation coefficient, $\Gamma$, of SAW’s due to an array of quantum dots. We consider dots with an electronic size in the nanometer range, [*e.g.*]{} $L=300$ nm, which corresponds to the mean level spacing $\Delta \sim 1$ K. Then, at low enough temperatures, the phase breaking rate $\tau_\phi^{-1} \ll \Delta$, [*i.e.*]{}, the energy levels are only slightly broadened due to inelastic processes. Since typical SAW frequencies are in the range $\omega \sim 10^8 \div 10^9~{\hbox{\rm s}}^{-1} = 1 \div 10~{\hbox{\rm mK}} \ll \Delta$, the discreteness of the spectrum is relevant, requiring the consideration of both direct and relaxational absorption processes, to be explained below. We present the dependence of $\Gamma$ on the frequency $\omega$, the size $L$ of the dots and the temperature $T$, showing that the relaxational absorption is dominant in a wide range of parameters. $\Gamma$ also exhibits weak localization effects, [*i.e.*]{}, it is sensitive to weak magnetic fields $B$ and the spin-orbit scattering rate $\tau_{{\hbox{\rm so}}}^{-1}$. This sensitivity is dramatically enhanced as compared to a quasicontinuous spectrum ($ \tau_\phi^{-1} > \Delta$ or $\omega \gg \Delta$), because the sound absorption resolves level correlation effects appearing on a very small energy scale, $\epsilon <\Delta$. Direct and relaxational processes involve different time scales of the electronic system, namely the phase coherence time $\tau_\phi$ and the energy relaxation time $\tau_\epsilon$, suggesting that SAW attenuation measurements may provide information on their temperature dependence and magnitude in [*isolated*]{} mesoscopic systems. This could contribute to the clarification of the controversial issue related to the significant discrepancies [@Bird95; @Clarke95; @Mittal96] between the theoretical predictions for these times and recent experimental results obtained from [*transport*]{} measurements. 
This applies to both the electron-electron interaction and the interaction between electrons and thermal phonons; we comment below on the relevance of these two interactions for the present considerations.
At sufficiently low temperatures, when the inelastic level broadening and the thermal broadening of the distribution function are significantly smaller than $\Delta$, the sound absorption depends strongly on the existence of narrow pairs of levels whose separations are $\ll \Delta$. In general, two absorption mechanisms can be identified [@scatt]: (i) direct transitions between the energy levels involving absorption and emission of surface-acoustic phonons, and (ii) the absorption due to relaxation processes [@Gorter36]. The latter arise from the periodic motion of the energy levels under the influence of the external SAW field which, at finite temperatures $T>0$, leads to a non-equilibrium occupation of the levels. Energy dissipation is then due to inelastic relaxation mechanisms, which attempt to restore instantaneous equilibrium occupancies among the energy levels. Such processes were first introduced by Debye in connection with the relaxation of a vibrating dipole in a liquid, and then extensively discussed in connection with energy dissipation in various physical systems. In the following, we call the relaxational absorption simply Debye absorption. The Debye processes involve the occupation relaxation rate of the electronic system which is related to the energy relaxation rate $\tau_\epsilon$. The direct transitions depend on the phase coherence time $\tau_\phi$. In terms of the density matrix formalism, $\tau_\epsilon^{-1}$ and $\tau_\phi^{-1}$ can be identified with the diagonal and the off-diagonal relaxation rates, respectively; see the discussion in ref. [@Kamenev95]. Indeed, the former describes the approach to equilibrium, while the latter is associated with the destruction of coherence effects, [*i.e.*]{}, the suppression of the off-diagonal elements of the density matrix.
To describe the energy spectrum of the dots at the scale $\epsilon < \Delta$, we use Random Matrix Theory (RMT) [@Mehta67]. According to RMT, the statistical properties of the spectrum depend only on the global symmetries of the system, such that three pure cases can be distinguished: systems with time reversal symmetry (orthogonal ensemble, $B=0$), systems with broken time reversal symmetry (unitary ensemble, $B\ne 0$), and systems without rotational symmetry (symplectic ensemble, $\tau_{{\hbox{\rm so}}}$ small and $B=0$). The crossover between these ensembles depends on the energy scale of interest, which may make it possible to discriminate between different sound attenuation mechanisms; cf. below.
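The level repulsion that underlies the correlation function $R(\epsilon)$ used below can be illustrated numerically with $2\times 2$ random matrices, for which the nearest-neighbour spacing follows the Wigner surmise, $P(s)\propto s^{\beta}$ at small $s$. This is a generic RMT illustration, not a calculation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# 2x2 GOE (beta = 1): eigenvalue spacing s = sqrt((h11-h22)^2 + 4*h12^2)
d1 = rng.standard_normal(N) - rng.standard_normal(N)     # h11 - h22
c1 = rng.standard_normal(N) / np.sqrt(2)                 # h12 = h21
s_goe = np.sqrt(d1**2 + 4 * c1**2)
s_goe /= s_goe.mean()

# 2x2 GUE (beta = 2): complex h12 adds one more independent component
d2 = rng.standard_normal(N) - rng.standard_normal(N)
c2re = rng.standard_normal(N) / np.sqrt(2)
c2im = rng.standard_normal(N) / np.sqrt(2)
s_gue = np.sqrt(d2**2 + 4 * (c2re**2 + c2im**2))
s_gue /= s_gue.mean()

# Weight of very small spacings: suppressed like s^(beta+1) by repulsion
small = 0.1
f_goe = (s_goe < small).mean()
f_gue = (s_gue < small).mean()
f_poisson = 1 - np.exp(-small)          # uncorrelated (Poisson) levels
print(f_goe, f_gue, f_poisson)
```

The fraction of spacings below $0.1$ of the mean drops by roughly an order of magnitude from Poisson to $\beta=1$, and again to $\beta=2$, which is why the absorption terms weighted by $R(\epsilon)$ are so sensitive to the ensemble.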
In the following, we give a brief derivation of the major results. We focus on the case of diffusive dots [@ballistic], where the size of the dots exceeds the elastic mean free path, $L>\ell$, and the dimensionless conductance is large, $g\gg 1$. For $T<\Delta$, only a narrow pair of levels (say 1 and 2) with energetic separation $\epsilon_1-\epsilon_2=\epsilon<\Delta$ is of importance for the Debye processes because all other levels, up to exponentially small corrections, are completely filled or completely empty; thus a non-equilibrium occupation does not occur. The power absorbed due to these processes by a two-level system is [@Gorter36] $$\label{qdcetls}
Q_{{\hbox{\rm D}}}(\epsilon)=
\left(- \frac{\partial f(\epsilon)}{\partial \epsilon} \right)
\frac{\omega^2\tau_\epsilon(\epsilon)}{1+\omega^2 \tau_\epsilon^2(\epsilon)}
|M_{11}-M_{22}|^2.$$The matrix elements $M$ are calculated with the eigenfunctions of the states 1 and 2 and the piezoelectric field induced by the SAW. (The deformation-potential coupling is negligibly small.) The occupation probabilities of the levels 1 and 2 are given by $f(\epsilon)=[\exp{(\epsilon/T)}+1]^{-1}$ and $f(-\epsilon)$, respectively. Equation (\[qdcetls\]) has to be averaged over different realizations of disorder in different dots, being associated with statistically independent [@tls] variations of the eigenfunctions 1 and 2 and the level spacing $\epsilon$. We assume that one can average separately over the wave functions in $\tau_\epsilon$ and in the difference of matrix elements. The latter quantity is calculated below \[eq. (\[amael\])\] and found to be independent of $\epsilon$ in the range of interest. The remaining averaging over $\epsilon$ uses the level correlation function $R(\epsilon)$ [@Mehta67], that has the limits $$\label{reps}
R(\epsilon) = \frac{1}{\Delta} \left\{ \begin{array}{lll}
c_\beta |\epsilon/\Delta |^\beta
&{\hbox{\rm for}} &|\epsilon| \ll \Delta, \\
1&{\hbox{\rm for}} &|\epsilon| \gg \Delta , \end{array} \right.$$where $c_\beta \simeq 1$, and, as usual, $\beta=1,2$, and 4 for the orthogonal, unitary, and symplectic ensembles, respectively. As a result (numerical factors of order unity are skipped), $$\label{qdr}
Q_{{\hbox{\rm D}}}=
|M_{11}-M_{22}|^2
\frac{\omega^2\tau_\epsilon(T)}{1+\omega^2 \tau_\epsilon^2(T)}
R(T) ,$$where we have used that the characteristic energy scale in eq. (\[qdcetls\]) is given by $(-\partial f/\partial \epsilon)$ and, hence, is of order $T$. Indeed, the energy relaxation rate $\tau_\epsilon^{-1}(\epsilon) \propto \epsilon^p$ depends too weakly on $\epsilon$ in order to provide a cut-off on the average over $\epsilon$. In particular, one can show [@Knabchen97d] that inelastic transitions induced by the piezoelectric interaction with thermal phonons lead for $\epsilon<T$ to $\tau_\epsilon^{-1}(\epsilon) \propto \epsilon^2$, [*i.e.*]{}, $p=2$, while electron-electron processes are expected to yield $p$ in the range between 1 and 2; see [*e.g.*]{} ref. [@Blanter96c]. This is an important difference compared to the case of 3D metallic particles, where the deformation-potential coupling, associated with $p=4$, is dominant (see ref. [@Zhou96]), thus providing a cut-off depending on $\omega$ and $\tau_\epsilon$.
Let us now turn to the power absorbed due to direct transitions, $$\label{dre1}
Q_{{\hbox{\rm dir}}} =
\omega^2 \sum_{m\neq n} \frac{\tau_\phi}{1+\tau^2_\phi(\epsilon_{mn}-\omega)^2}
\frac{f_n-f_m}{\epsilon_{mn}} |M_{mn}|^2 ,$$where $\epsilon_{mn}=\epsilon_m-\epsilon_n$ and $f_n$ measures the occupation probability of level $n$. Since $Q_{{\hbox{\rm dir}}}$ is determined by transitions close to the last occupied level (at $T=0$), we neglect a possible dependence of $\tau_\phi^{-1}$ on $\epsilon_{mn}$ and explicitly indicate in the following only its temperature dependence. Two main contributions to the sum in eq. (\[dre1\]) can be identified: the first is due to dots where at $T=0$ the last occupied level is separated from the first empty one by a gap of order $\omega$, while the second arises from dots with gaps of order $\Delta$. In the latter case, transitions occur only due to the overlap of the tails of the broadened levels. This overlap is strongest for adjacent states, [*i.e.*]{}, $\epsilon_{mn}=\epsilon_{m,m\pm 1}\simeq \Delta$. Thus, the level correlation function drops out and we obtain $$\label{dre2}
Q_{{\hbox{\rm t}}} =
|M_{12}|^2 (\omega/\Delta)^2 [\tau_\phi(T) \Delta]^{-1} ,$$ where the off-diagonal matrix element $M_{12}$, that is independent of $\epsilon_{12}$, is given in eq. (\[amael\]) below. For dots with gaps as small as $\omega$, the level broadening is not relevant and, hence, the Lorentzian in eq. (\[dre1\]) can be replaced by $\pi\delta(\epsilon_{mn}-\omega)$. The occupation probabilities of these narrow pairs of levels are again given by $f(\pm \epsilon)$, cf. eq. (\[qdcetls\]). This yields the resonant contribution (cf. the calculations in refs. [@Shklovskii82] and [@Sivan86]) $$\label{dre3}
Q_{{\hbox{\rm r}}} =
|M_{12}|^2 \omega \tanh{(\omega /2T)}\, R(\omega) .$$ This result is valid for $\omega, T \ll \Delta $.
The matrix elements in eqs. (\[qdr\]), (\[dre2\]), and (\[dre3\]) are given by $M_{mn} = \left\langle m|
V_\omega \exp{(i{\mbox{\boldmath ${q}$}}{\mbox{\boldmath ${r}$}})} | n\right\rangle $, where $V_\omega=\gamma_q/({\cal L}\varepsilon)$ represents the screened piezoelectric potential arising from the SAW. The interaction vertex $\gamma_q$ was calculated in ref. [@Knabchen96], and ${\cal L}^2$ is the normalization area in the plane of the quantum dots. The screening of the SAW potential is described by the dielectric function $\varepsilon$ that is very much dependent on the fact that we are studying a 2D system: First, the piezoelectric field penetrates the quantum dot completely and, hence, $\varepsilon$ may be taken to be ${\mbox{\boldmath ${r}$}}$-independent. The sound attenuation thus clearly probes the behaviour of the electron wavefunctions within the whole dot. In 3D metallic particles only the electrons in a distance up to the screening length from the surface are subject to an external field [@Zhou96], since the field decreases exponentially towards the interior of the particle. Secondly, the Coulomb interaction between different dots (which is in leading approximation a dipole-dipole interaction decaying like $r^{-3}$) is not very effective for a 2D array of dots. Indeed, experiments have shown [@Dahl93] that the interdot coupling leads even in dense arrays to a change of the local electric field of no more than 25%. This is again very different from the case of a 3D system containing metallic grains, where their mutual coupling has to be taken into account with care; cf. the discussion in ref. [@Sivan86]. Neglecting the interdot coupling, the calculation of $\varepsilon$ coincides in the linear screening regime with the evaluation of the density response function. 
Interestingly, while the SAW attenuation probes the properties of a few levels close to the last occupied level at $T=0$, the density response function is determined by (virtual) transition processes with large energy transfers $|\epsilon_{mn}| \sim E_c=D/L^2=g\Delta \gg \Delta$, where $D$ is the diffusion coefficient of the electron gas. Consequently, the screening is neither sensitive to the symmetry of the system ($\beta=1,2,4$) nor depends on the discreteness of the spectrum. It is therefore natural that the dielectric function reduces to its familiar expression for a diffusive system, taken in the limits $\omega \rightarrow 0$ and $q \sim L^{-1}$, [*i.e.*]{}, $\varepsilon \simeq L/ a_B\gg 1$ [@Knabchen97a], where $a_B$ is the Bohr radius.
For typical sound frequencies, $qL \ll 1$, and, hence, the matrix element simplifies to $M_{mn} = i V_\omega {\mbox{\boldmath ${q}$}} \left\langle m|{\mbox{\boldmath ${r}$}}|
n\right\rangle$. \[For $m=n$ the first expansion term is canceled after taking the difference of diagonal matrix elements, cf. with eq. (\[qdr\]), [*i.e.*]{}, we have again to consider the matrix element of ${\mbox{\boldmath ${r}$}}$.\] Disorder averaged products of these matrix elements can be calculated using, [*e.g.*]{}, quasi-classical methods [@Gorkov65; @Sivan86] or wavefunction correlators derived within the supersymmetry approach [@Blanter96]. We obtain $$\label{amael}
|M_{11}-M_{22}|^2 \simeq |M_{12}|^2 \simeq (\gamma_q q a_B)^2/({\cal
L}^2 g).$$This result is valid for energy differences $|\epsilon_1-\epsilon_2| \le E_c$.
The SAW attenuation coefficient $\Gamma$ is related to the energy loss $Q$ according to $ \Gamma= {\cal N}{\cal L}^2 Q /(\hbar \omega s) $, where ${\cal N}$ is the areal density of quantum dots and $s$ the sound velocity. Collecting the above results for $Q$ and the matrix elements yields $$\begin{aligned}
\Gamma_{{\hbox{\rm D}}} & =&
A\,
\frac{\omega}{s}
\frac{\omega \tau_\epsilon(T)}{1+\omega^2\tau^2_\epsilon(T)}
\hbar \omega R(T) , \label{gamd} \\
\Gamma_{{\hbox{\rm t}}} & =&
A \,
\frac{\omega}{s}
\left( \frac{\hbar\omega}{\Delta}\right)^3 \frac{1}{\omega \tau_\phi(T)} ,
\label{gamt} \\
\Gamma_{{\hbox{\rm r}}}& =&
A\,
\frac{\omega}{s}
\tanh{(\frac{\hbar\omega}{2T})}
\hbar \omega R(\hbar\omega) , \label{gamr} \end{aligned}$$ for the Debye attenuation, the attenuation due to the tails of the broadened levels, and the resonant attenuation, respectively, and $A=({\cal N}/g) (a_B \gamma_q/\hbar s)^2 $. Let us now substitute some realistic parameters in order to estimate the relative magnitude of these terms. In GaAs/Al${}_x$Ga${}_{1-x}$As heterostructures, $\gamma_q/s \approx 1$ [@Knabchen96]. For the parameters $L=300$ nm and $\Delta=1$ K introduced above, a value of $g\sim 5$ results in $D \sim 10^{-2}$ m${}^2$s${}^{-1}$. We put $T\sim 0.1$ K, and assume $\tau_\phi^{-1}(T) \sim 10^9$ s${}^{-1}$ and $\tau^{-1}_\epsilon(T)\sim 10^8$ s${}^{-1}$. The phase breaking rate is about one order of magnitude smaller than the one recently derived from [*transport*]{} experiments [@Bird95; @Clarke95] with ballistic quantum dots of size $L\sim 2~\mu$m. This reduction of the dephasing seems to be reasonable since we are considering isolated dots without leads. The energy relaxation rate is only sensitive to inelastic scattering events with a substantial energy transfer and, hence, should be always smaller than $\tau_\phi^{-1}$ [@Altshuler82a]. Then, for typical SAW frequencies, we find the inequalities $\Delta > T > \tau_\phi^{-1}
\ge \omega \ge \tau^{-1}_\epsilon$, where $T> \omega$ is most probably always fulfilled. Based on these parameters we obtain $\Gamma_{{\hbox{\rm D}}} \gg \Gamma_{{\hbox{\rm r}}} \ge \Gamma_{{\hbox{\rm t}}}$ for $\beta=1$, and $\Gamma_{{\hbox{\rm D}}} \gg \Gamma_{{\hbox{\rm t}}} \gg \Gamma_{{\hbox{\rm r}}}$ for $\beta=2$. The Debye absorption remains dominant if $\omega\tau_\epsilon(T)$ varies in the range $10^{-3}-10^2$. Thus, the measurement of $\Gamma(T)=\Gamma_{{\hbox{\rm D}}}(T)$ \[eq. (\[gamd\])\] at a fixed SAW frequency accesses the temperature dependence of the energy relaxation time $\tau_\epsilon(T)$ in systems with or without time reversal symmetry. Very interesting is the case of broken rotational symmetry, $\beta=4$, because $\Gamma_{{\hbox{\rm D}}}$ may now decrease below $\Gamma_{{\hbox{\rm t}}}$ \[$\gg \Gamma_{{\hbox{\rm r}}}$\], permitting the measurement of both $\tau_\epsilon(T)$ and $\tau_\phi(T)$ at different frequencies. Generally, $\Gamma_{{\hbox{\rm D}}}$ is \[depending on the density of the dot lattice and other parameters\] about 2-3 orders of magnitude smaller than the maximum SAW attenuation due to an extended 2DEG, which should be within the experimental resolution.
This discussion has shown that the global symmetries of the system (described by the parameter $\beta$) determine directly which of the various contributions to the attenuation coefficient \[eqs. (\[gamd\])–(\[gamr\])\] is dominant. The parameter $\beta$ enters via the level correlation function $R(\epsilon)$, eq. (\[reps\]), if energy scales smaller than $\Delta$ are relevant. This is not the case for the tail absorption so that $\Gamma_{{\hbox{\rm t}}}$ is essentially independent of the symmetries. The crossover from the time reversal invariant case ($\beta=1$) to that with broken time reversal symmetry ($\beta=2$) is achieved by applying a magnetic field $B\gg B^*$. The very small threshold field is given by $B^*\simeq (\Phi_\circ/L^2) \sqrt{\epsilon^*/E_c}$, where $\Phi_\circ$ is the flux quantum and $\epsilon^*$ is the typical energy scale of the transitions. Since $\Gamma_{{\hbox{\rm D}}}$ and $\Gamma_{{\hbox{\rm r}}}$ are associated with $\epsilon^*=T$ and $\epsilon^*=\omega$, respectively, the threshold fields are different, $$\label{flux}
B^*_{{\hbox{\rm D}}} / B^*_{{\hbox{\rm r}}} = \sqrt{T/\omega} .$$Similarly, the rotational symmetry can be broken by increasing the spin-orbit scattering rate $\tau^{-1}_{{\hbox{\rm so}}}$ above $\epsilon^*$. One should note that the variation of $\beta$ results in a significant change of the magnitude of $\Gamma$. In this sense, the dependence of the sound absorption on weak magnetic fields and spin-orbit scattering is much more pronounced in the discrete spectrum limit than in the quasicontinuous case. In the latter, this dependence arises solely from weak localization [*corrections*]{} to the conductance $g$ (entering $A$) [@Knabchen97a]. In the present case, these corrections can safely be ignored as compared to the effects associated with $R(\epsilon)$.
In summary, we have considered the attenuation of surface-acoustic waves due to an array of quantum dots in the discrete spectrum limit, arising from both direct and Debye processes. It has been shown that non-invasive measurements of $\Gamma$ can yield the temperature dependence of both the energy relaxation rate and the phase breaking rate of an isolated electronic system. The sensitivity to weak magnetic fields or to spin-orbit scattering (“weak localization effects”) is greatly enhanced as compared to the case of a continuous spectrum. It might be possible to observe the transition from thermal-phonon to electron-electron dephasing, which is known to occur at around 1 K [@Mittal96]. We hope that these calculations will stimulate further experimental investigations in this field.
Financial support by the German-Israeli Foundation, the Fund for Basic Research administered by the Israel Academy of Sciences and Humanities, and the Deutsche Forschungsgemeinschaft (A. K.) is gratefully acknowledged. We thank Y. Gefen, Y. Imry, and A. Wixforth for valuable discussions.
[10]{}
.
.
.
.
, Report No. cond-mat/9610086.
, Report No. cond-mat/9508057.
.
.
in (Kluwer Academic Press, Dordrecht) .
In principle, the [*elastic*]{} scattering of surface phonons may show up in the experiments as an attenuation mechanism ($\Gamma_{{\hbox{\rm sc}}}$) that is not affected by the fact that $\omega \ll \Delta$. We find $\Gamma_{{\hbox{\rm sc}}} \propto \omega^5$ which is small unless $\omega$ becomes very large.
.
.
(Academic, New York) .
We believe that our results remain valid (by order of magnitude) in the ballistic regime if $E_c=D/L^2$ is replaced by the inverse time of flight $v_F/L$.
This is a general statement of RMT. Note, however, that this decoupling is not valid for a localized tunneling two-level system, where the overlap between the states 1 and 2 determines both their splitting and the transition matrix elements.
unpublished.
, Report No. cond-mat/9604101.
.
.
, Report No. cond-mat/9604137.
(VDI, Düsseldorf) .
, Report No. cond-mat/9608074.
, \[ \].
, Report No. cond-mat/9604139.
.
---
abstract: 'The bosonic representation of the half string ghost in the full string basis is examined in full. The proof that the comma 3-vertex (matter and ghost) in the bosonic representation satisfies the Ward-like identities is established, thus completing the proof of the Bose-Fermi equivalence in the half string theory.'
author:
- |
A Abdurrahman[^1] and M Gassem[^2]\
Department of Physics\
Shippensburg University of Pennsylvania\
1871 Old Main Drive\
Shippensburg, PA 17257\
USA
title: 'Bose-Fermi Equivalence in the Half String Theory'
---
Introduction
============
The work of references [@Sen-Zwiebach; @Rastelli-Sen-Zwiebach; @Gross-Taylor-I; @Gross-Taylor-II] has generated much interest in the half string formulation of Witten’s theory of interacting open bosonic strings. An important role in the formulation of the comma theory is played by the $BRST$ charge $Q$ of the first quantized theory[@A-A-B-II; @Abdu-BordesN-II]. In general, the $BRST$ invariance of the first quantized theory becomes a gauge invariance of the second quantized theory [@TBanks-Mpeskin; @E.Witten-CST; @WSiegel; @Siegl-Zwiebach]. In the interacting half string theory, the role of the ghost fields is quite subtle, and the rich structure of the ghost sector of the interacting theory deserves more consideration. It is possible to study the ghosts using either bosons or fermions. However, to relate the fermionic ghost formulation to the bosonic one, a bosonization procedure has to be carried out explicitly[@E.Witten-CST; @Siegl-Zwiebach; @Hlousek-Jevicki; @ABA]. The bosonic and the fermionic realizations of the ghost fields for the half string theory have been used before[@A-A-B-I; @A-A-B-II] to write down the ghost part of the three-comma vertex, but the proof of equivalence was only partially addressed[@ABA]. Although both formulations give a gauge invariant theory, their equivalence is not at all transparent. It is the purpose of this paper to complete the proof of equivalence for both formulations. As we saw in the first part of the proof [@ABA], the key lies in the various identities satisfied by the $G$-coefficients that define the comma interacting vertex. To complete the proof of equivalence[@ABA], we have to show that both realizations of the comma vertex satisfy the same Ward-like identities.
Writing the half string three vertex in the full string basis
=============================================================
The half string ghost coordinates $\phi _{r}^{L,R}\left( \sigma \right) $ are defined in the usual way [@A-A-B-I; @ABA] $$\phi _{r}^{L}\left( \sigma \right) =\phi _{r}(\sigma )-\phi _{r}(\frac{\pi }{2})\text{, \ \ \ }\phi _{r}^{R}\left( \sigma \right) =\phi _{r}(\pi -\sigma
)-\phi _{r}(\frac{\pi }{2})\text{ \ \ \ \ for \ }0\leq \sigma \leq \frac{\pi
}{2} \notag$$where $\phi _{r}(\sigma )$ is the full string bosonic ghost coordinate and $r=1,2,...,N$ ($N$ being the number of strings) $$\phi _{r}\left( \sigma \right) =\phi _{0}^{r}+\sqrt{2}\sum_{n=1}^{\infty
}\phi _{n}^{r}\cos n\sigma \text{, \ \ \ for \ }0\leq \sigma \leq \pi$$The mode expansion of the left and right pieces reads$$\phi _{r}^{h}\left( \sigma \right) =\sqrt{2}\sum_{n=0}^{\infty }\phi
_{2n+1}^{h,r}\cos \left( 2n+1\right) \sigma \text{, \ \ \ for \ }0\leq
\sigma \leq \frac{\pi }{2},\text{ \ \ \ }h=1,2$$where $h=1,2$ refers to the left $\left( L\right) $ and right $\left(
R\right) $ parts of the string, respectively.
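As a quick numerical sanity check of this decomposition (an illustrative Python sketch, not part of the formal construction; the sample profile and the mode cutoffs are arbitrary assumptions), one can verify that a full string profile, once its mid-point value is subtracted, is captured on $\left[ 0,\pi /2\right] $ by the odd cosines alone:

```python
import math

PI = math.pi
NQ = 40000                                  # midpoint-rule quadrature points on [0, pi/2]
h = (PI / 2) / NQ
grid = [(i + 0.5) * h for i in range(NQ)]

# A sample full-string profile phi(s) = phi_0 + sqrt(2) * sum_n phi_n cos(n s)
# (the mode values below are arbitrary test data).
phi0, fullmodes = 0.7, {1: 0.3, 2: -0.5, 3: 0.2, 4: 0.1}
def phi(s):
    return phi0 + math.sqrt(2) * sum(c * math.cos(n * s) for n, c in fullmodes.items())

mid = phi(PI / 2)
fL = [phi(s) - mid for s in grid]           # the left piece phi^L(sigma) on the grid

# The odd cosines cos((2n+1) sigma) are orthogonal on [0, pi/2] with norm pi/4.
ip = lambda f, g: h * sum(x * y for x, y in zip(f, g))
basis = {k: [math.cos(k * s) for s in grid] for k in range(1, 100, 2)}
assert abs(ip(basis[1], basis[3])) < 1e-7
assert abs(ip(basis[5], basis[5]) - PI / 4) < 1e-7

# Expand phi^L in odd cosines and reconstruct it at sample points.
coef = {k: ip(fL, b) / (PI / 4) for k, b in basis.items()}
samples = range(0, NQ, 797)
err = max(abs(fL[i] - sum(coef[k] * basis[k][i] for k in basis)) for i in samples)
assert err < 1e-2
```

The odd-cosine coefficients decay rapidly, so a modest cutoff already reconstructs $\phi ^{L}$ to the quoted tolerance.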
As in the matter sector of the half string formulation of Witten’s theory of open bosonic strings, the elements of the ghost part of the theory are defined by $\delta $-function type overlaps$$V_{3}^{HS,\phi }=\exp \left( i\sum_{r=1}^{3}Q_{r}^{\phi }\phi \left( \pi
/2\right) \right) V_{3,0}^{HS,\phi }$$where$$V_{3,0}^{HS,\phi }=\prod\limits_{r=1}^{3}\prod\limits_{\sigma =0}^{\pi
/2}\delta \left( \phi _{r}^{L}\left( \sigma \right) -\phi _{r-1}^{R}\left(
\sigma \right) \right)$$It is to be understood that $r-1=0\equiv 3$. The factor $Q_{r}^{\phi }$ is the ghost number insertion at the mid-point, which is needed for the $BRST$ invariance of the theory [@Gross-Jevicki-I; @A-A-B-I]; in this case $Q_{1}^{\phi }=Q_{2}^{\phi }=Q_{3}^{\phi }=1/2$. As we have seen before, in the Hilbert space of the theory the $\delta $-functions translate into operator overlap equations which determine the precise form of the vertex. The ghost part of the half string vertex in the full string basis has the same structure as the coordinate one apart from the mid-point insertions $$|V_{\phi }^{HS}>=e^{\frac{1}{2}i\sum\limits_{r=1}^{3}\phi ^{r}\left( \pi
/2\right) }V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag
},\alpha ^{\phi ,3\dag }\right) \left\vert 0,N_{ghost}=\frac{3}{2}\right)
_{123}^{\phi } \label{eqnGhost3vertexHS}$$where the $\alpha ^{\prime }$s are the bosonic oscillators defined by the expansion of the bosonized ghost $\left( \phi \left( \sigma \right) ,p^{\phi
}\left( \sigma \right) \right) $ fields and $V_{\phi }^{HS}\left( \alpha
^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) $ is the exponential of the quadratic form in the ghost creation operators with the same structure as the coordinate piece of the vertex
$$\begin{aligned}
V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha
^{\phi ,3\dag }\right) &=&\exp \left[ \frac{1}{2}\sum_{r,s=1}^{3}\sum_{n,m=1}^{\infty }\alpha _{-n}^{\phi ,r}G_{nm}^{rs}\alpha _{-m}^{\phi
,s}\right. + \notag \\
&&\left. \sum_{r,s=1}^{3}p_{0}^{\phi ,r}G_{0m}^{rs}\alpha _{-m}^{\phi ,s}+\frac{1}{2}\sum_{r,s=1}^{3}p^{\phi ,r}G_{00}^{rs}p_{0}^{\phi ,s}\right]\end{aligned}$$
where the matrix elements $G_{nm}^{rs}$ have been constructed in previous work [@Gassem].
In the full string, the fermionic ghost overlap equations are$$\begin{aligned}
c_{r}\left( \sigma \right) &=&c_{r-1}(\pi -\sigma )\text{\ \ \ }\sigma \in \left[ 0,\frac{\pi }{2}\right] \notag \\
c_{r}\left( \sigma \right) &=&-c_{r+1}(\pi -\sigma )\text{\ \ , \ \ \ }\sigma \in \left[ \frac{\pi }{2},\pi \right]\end{aligned}$$and$$\begin{aligned}
b_{r}\left( \sigma \right) &=&b_{r-1}(\pi -\sigma )\text{\ \ \ }\sigma \in \left[ 0,\frac{\pi }{2}\right] \notag \\
b_{r}\left( \sigma \right) &=&b_{r+1}(\pi -\sigma )\text{\ \ , \ \ \ }\sigma
\in \left[ \frac{\pi }{2},\pi \right]\end{aligned}$$The proof of the bose-fermi equivalence involves two major obstacles. The first is to show that the bosonized half string ghosts, ([eqnGhost3vertexHS]{}), satisfy the $c-$ and $b-overlap$ equations displayed above. To carry out the proof, the authors [@ABA], utilized the bosonization formulas$$c_{+}\left( \sigma \right) =:e^{i\phi _{+}\left( \sigma \right) }:\text{, \
\ \ \ }b_{+}\left( \sigma \right) =:e^{-i\phi _{-}\left( \sigma \right) }:$$where$$\phi \left( \sigma \right) =\frac{1}{2}\left( \phi _{+}\left( \sigma \right)
+\phi _{-}\left( \sigma \right) \right)$$and$$\phi _{\pm }\left( \sigma \right) =\phi _{0}\pm \sigma \left( p_{0}^{\phi }+\frac{1}{2}\right) +i\sum_{n=1}^{\infty }\frac{1}{n}\left( \alpha _{n}^{\phi
}e^{\mp in\sigma }-\alpha _{-n}^{\phi }e^{\pm in\sigma }\right)$$The fermionic ghost coordinates of the bosonic string are anticommuting fields$$\begin{aligned}
c_{\pm }\left( \sigma \right) &=&c(\sigma )\pm i\pi _{b}\left( \sigma
\right) \text{\ \ \ }\sigma \in \left[ 0,\pi \right] \notag \\
b_{\pm }\left( \sigma \right) &=&\pi _{c}\left( \sigma \right) \pm ib(\sigma )\text{\ \ , \ \ \ }\sigma \in \left[ 0,\pi \right]\end{aligned}$$The $c_{+}\left( c_{-}\right) $ are the ghosts for reparametrizations of $z=\tau +i\sigma $ $\left( \overline{z}=\tau -i\sigma \right) $, respectively, and the $b_{\pm }$ are the corresponding anti-ghosts. These obey the anticommutation relations$$\begin{aligned}
\left\{ c_{n},c_{m}\right\} &=&\left\{ b_{n},b_{m}\right\} =0 \notag \\
\left\{ c_{n},b_{m}\right\} &=&\delta _{n+m,0}\end{aligned}$$The fermionic half string ghosts are also defined in the usual way [@A-A-B-II] $$\begin{aligned}
c_{r}^{L}\left( \sigma \right) &=&c_{r}(\sigma )-c_{r}(\frac{\pi }{2})\text{, \ \ \ }\sigma \in \left[ 0,\frac{\pi }{2}\right] \notag \\
c_{r}^{R}\left( \sigma \right) &=&c_{r}(\pi -\sigma )-c_{r}(\frac{\pi }{2})\text{, \ \ \ }\sigma \in \left[ 0,\frac{\pi }{2}\right]\end{aligned}$$and similar expressions for $b^{L}(\sigma )$ and $b^{R}(\sigma )$.
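Before turning to the bosonization, the anticommutation relations above can be checked in a small toy representation. The two-mode Jordan-Wigner construction below is purely illustrative (a finite-dimensional stand-in, not the actual string Fock space or ghost oscillators):

```python
# 2x2 building blocks for a two-mode Jordan-Wigner representation.
I2 = [[1, 0], [0, 1]]
Z  = [[1, 0], [0, -1]]
SM = [[0, 1], [0, 0]]            # sigma^-
SP = [[0, 0], [1, 0]]            # sigma^+

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def anti(A, B):                  # anticommutator {A, B}
    AB, BA = mul(A, B), mul(B, A)
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(AB, BA)]

# Two fermionic modes; identify a toy pair (c_n, b_{-n}) with {c_n, b_{-n}} = 1.
c1, b1 = kron(SM, I2), kron(SP, I2)
c2, b2 = kron(Z, SM), kron(Z, SP)

ident = [[int(i == j) for j in range(4)] for i in range(4)]
zero  = [[0] * 4 for _ in range(4)]

assert anti(c1, b1) == ident and anti(c2, b2) == ident   # {c_n, b_m} = delta_{n+m,0}
assert anti(c1, c2) == zero and anti(b1, b2) == zero     # {c, c} = {b, b} = 0
assert anti(c1, c1) == zero and anti(c1, b2) == zero
```

The string-interchange sign on the $c$-overlaps plays no role in this algebra check; only the canonical anticommutators are exercised.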
In the bosonization of the fermionic coordinates, using the standard procedure (see ref. 17 in ref. 16), it is not obvious that all ingredients of the theory employing the bosonic fields $\phi ^{L}\left( \sigma \right) $ and $\phi ^{R}\left( \sigma \right) $ are equivalent to those constructed using the original Fermi fields appearing in the left hand side of the above relations. It has been shown [@ABA] that the ghost vertices in the half string operator formulation obey the same overlap equations as the fermionic vertices and are identical, so that one is free to use either formulation of the ghost sector of the theory. In fact, this statement was only partially true: the authors of [@ABA] did not establish that the half string ghost (plus matter) vertex in the bosonic realization of the ghosts satisfies the same Ward-like identities obeyed by the half string ghost (plus matter) vertex in the fermionic realization. To complete the equivalence between the two realizations of the half string ghost vertex, we need to establish the Ward-like identities utilizing the bosonic representation of the half string ghost as well.
The proof of the Ward-like identities
=====================================
The Ward-like identities for the Witten vertex (matter plus ghost) [@Hlousek-Jevicki] in the fermionic representation are given by$$W_{m}^{x+c,r}|V_{W}^{x+c}>=0\text{, \ \ }m=1,2,... \label{eqnWardIdFermi}$$where $W_{m}^{x+c,r}$ is the Ward-like operator defined by $$W_{m}^{x+c,r}=W_{m}^{x,r}+W_{m}^{c,r}=L_{m}^{x+c,r}+\sum_{s=1}^{3}\sum_{n=0}^{\infty }m\widetilde{N}_{mn}^{rs}L_{-n}^{x+c,s}$$and$$|V_{W}^{x+c}>=|V_{W}^{x}>|V_{W}^{c}>$$The Virasoro generators for both matter and ghost coordinates are given by
$$\begin{aligned}
L_{m}^{x,r} &=&\sum_{k=1}^{\infty }\alpha _{-k}^{r}\alpha _{k+m}^{r}+\frac{1}{2}\sum_{k=1}^{m-1}\alpha _{m-k}^{r}\alpha _{k}^{r}+p_{0}^{r}\alpha _{m}^{r}
\label{eqnVirosorXm} \\
L_{0}^{x,r} &=&\frac{1}{2}\left( p_{0}^{r}\right) ^{2}+\sum_{k=1}^{\infty
}\alpha _{-k}^{r}\alpha _{k}^{r}, \label{eqnVirosorX0}\end{aligned}$$
and$$\begin{aligned}
L_{m}^{c,r} &=&\sum_{k=1}^{\infty }\left[ \left( 2m+k\right)
b_{-k}^{r}c_{k+m}^{r}-\left( m-k\right) c_{-k}^{r}b_{k+m}^{r}\right] \notag
\\
&&+\sum_{k=1}^{m-1}\left( m+k\right)
b_{m-k}^{r}c_{k}^{r}+mb_{m}^{r}c_{0}^{r}-2mb_{0}^{r}c_{m}^{r} \\
L_{0}^{c,r} &=&\sum_{k=1}^{\infty }k\left(
b_{-k}^{r}c_{k}^{r}-c_{-k}^{r}b_{k}^{r}\right) ,\end{aligned}$$respectively. Here $m$ takes integral values greater than $0$. If the half string bosonic ghost version of the vertex is equivalent to the fermionic version then it must obey an identity of the form (\[eqnWardIdFermi\]) and the anomaly of the ghost part must cancel the anomaly of the coordinate. Our job is thus to show that the half string full vertex (i.e., matter plus ghost) in the bosonic representation satisfies the following identity$$W_{m}^{x+\phi ,r}|V_{H}^{x+\phi }>=0\text{, \ \ }m=1,2,... \label{eqnWardID}$$where the Ward-like operator in this case is expressed in the bosonic representation of the ghost$$W_{m}^{x+\phi ,r}=W_{m}^{x,r}+W_{m}^{\phi ,r}=L_{m}^{x+\phi
,r}+\sum_{s=1}^{3}\sum_{n=0}^{\infty }m\widetilde{N}_{mn}^{rs}L_{-n}^{x+\phi
,s}$$As before in the above expression $L_{n}^{x+\phi }=L_{n}^{x}+L_{n}^{\phi }$ and $|V_{H}^{x+\phi }>=|V_{H}^{x}>|V_{H}^{\phi }>$. The Virasoro generators for the ghost in terms of the bosonic operators are given by$$\begin{aligned}
L_{m}^{\phi ,r} &=&\sum_{k=1}^{\infty }\alpha _{-k}^{\phi ,r}\alpha
_{k+m}^{\phi ,r}+\frac{1}{2}\sum_{k=1}^{m-1}\alpha _{m-k}^{\phi ,r}\alpha
_{k}^{\phi ,r}+\left( p_{0}^{\phi ,r}-\frac{3}{2}m\right) \alpha _{m}^{\phi
,r}\text{, } \label{eqnVirosorGm} \\
L_{0}^{\phi ,r} &=&\frac{1}{2}\left( p_{0}^{\phi ,r}\right) ^{2}-\frac{1}{8}+\sum_{k=1}^{\infty }\alpha _{-k}^{\phi ,r}\alpha _{k}^{\phi ,r}\text{,}
\label{eqnVirosorG0}\end{aligned}$$where $m>0$. The extra term in (\[eqnVirosorGm\]), that is, the linear term in $\alpha _{m}^{\phi ,r}$ arises because of the presence of the $R\phi
$ term in the action [@E.Witten-CST] of the bosonized ghosts$$I_{\phi }=\frac{1}{2\pi }\int d^{2}\sigma \left( \partial _{\beta }\phi
\partial ^{\beta }\phi -3iR\phi \right)$$or alternatively because of the extra linear term in the ghost energy-momentum tensor$$T_{\phi }=\frac{1}{2\pi }\left[ \left( \partial _{\pm }\phi \right) ^{2}-\frac{3}{2}\partial _{\pm }^{2}\phi \right]$$The extra term is needed [@E.Witten-CST; @Gross-Jevicki-I; @A-A-B-I] and must have precisely the coefficient given in (\[eqnVirosorGm\]), so that $\phi $ can cancel the Virasoro anomaly of the $x^{\mu }$; then the total Fourier components of the energy momentum$$L_{m}^{r}=L_{m}^{x,r}+L_{m}^{\phi ,r}-\frac{9}{8}\delta _{m0}$$obey the Virasoro algebra$$\left[ L_{m}^{r},L_{n}^{s}\right] =\left( m-n\right) \delta ^{rs}L_{m+n}^{r}$$which is free of central charge.
We will show that the comma vertex $|V_{HS}^{x+\phi
}>=|V_{HS}^{x}>|V_{HS}^{\phi }>$ indeed satisfies the Ward-like identities stated in equation (\[eqnWardID\]). It is more convenient to recast the comma three-point vertex in the full string oscillator basis[@Gassem].
To express the comma vertex in an exponential form in the creation operators only, we need to commute the annihilation part of the ghost mid-point insertion $\frac{1}{2}i\sum \phi ^{r}\left( \pi /2\right) $ in (\[eqnGhost3vertexHS\]) through the creation part of the vertex. The normal mode expansion of the ghost field $\phi \left( \sigma \right) $ at $\tau =0$ is$$\phi ^{r}\left( \sigma \right) =\phi _{0}^{r}+\sqrt{2}\sum_{n=1}^{\infty
}\phi _{n}^{r}\cos n\sigma$$The mid-point of the ghost coordinate is obtained by substituting $\sigma
=\pi /2$ in the above expression $$\phi ^{r}\left( \frac{\pi }{2}\right) =\phi _{0}^{r}+i\sum_{n=1}^{\infty
}\lambda _{n}\left( \alpha _{n}^{\phi ,r}-\alpha _{-n}^{\phi ,r}\right)$$where $\lambda _{n}=n^{-1}\cos \left( n\pi /2\right) $, $n=1,2,3,...$. Let us first consider the first factor in (\[eqnGhost3vertexHS\]) $$\begin{aligned}
\exp \left( \frac{1}{2}i\sum\limits_{r=1}^{3}\phi ^{r}\left( \pi /2\right)
\right) &=&N_{1}\exp \left( \frac{1}{2}i\sum\limits_{r=1}^{3}\phi
_{0}^{r}\right) \exp \left( \frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=1}^{\infty }\lambda _{n}\alpha _{-n}^{\phi ,r}\right) \notag \\
&&\times \exp \left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=1}^{\infty }\lambda _{n}\alpha _{n}^{\phi ,r}\right) \label{eqnexpomdpoex}\end{aligned}$$where $N_{1}=\exp \left[ -3\cdot 2^{-3}\sum_{n=1}^{\infty }n\lambda _{n}^{2}\right] $. In obtaining this result we made use of the well-known identity$$e^{\hat{A}_{1}}e^{\hat{A}_{2}}=e^{\frac{1}{2}\left[ \hat{A}_{1},\hat{A}_{2}\right] }e^{\hat{A}_{1}+\hat{A}_{2}} \label{Eqn IDOPOP}$$which is valid when $\left[ \hat{A}_{1},\hat{A}_{2}\right] $ is a $C$ number. The next step is to bring the annihilation part of the mid-point insertions to act on the ghost vacuum. So we need to consider the operator product$$\exp \left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=0}^{\infty }\widetilde{\lambda }_{n}\alpha _{n}^{\phi ,r}\right) V_{\phi }^{HS}\left(
\alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right)
\label{eqnopproduct1}$$where $\widetilde{\lambda }_{n}\equiv \lambda _{n}$ for $n>0$ and $\widetilde{\lambda }_{0}=0$. To commute the annihilation operators $\alpha
_{n}^{\phi ,r}$ through the creation operators $a_{-n}^{\phi ,s}$, $a_{-m}^{\phi ,s}$, we note that the exponential of the quadratic form is the Gaussian$$\begin{aligned}
V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha
^{\phi ,3\dag }\right) &=&\exp \left( \frac{1}{2}\sum_{r,s=1}^{3}\sum_{n,m=0}^{\infty }a_{-n}^{\phi ,r}G_{nm}^{rs}a_{-m}^{\phi ,s}\right)
\notag \\
=\lim_{N\rightarrow \infty }\pi ^{-N/2} &&\left[ \det G\right] ^{-1/2}\int
Dx\exp \left( -\frac{1}{2}\overrightarrow{x}^{T}G^{-1}\overrightarrow{x}\right) \notag \\
&&\times \exp \left( \sum_{s=1}^{3}\sum_{m=0}^{\infty }a_{-m}^{\phi
,s}x_{m}^{s}\right) \label{eqnGaussianalal}\end{aligned}$$where $Dx=\prod_{r=1}^{3}\prod_{n=0}^{N}dx_{n}^{r}$. Thus we first need to consider$$\exp \left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=0}^{\infty }\widetilde{\lambda }_{n}\alpha _{n}^{\phi ,r}\right) \exp \left(
\sum_{s=1}^{3}\sum_{m=0}^{\infty }a_{-m}^{\phi ,s}x_{m}^{s}\right)$$Using the well known identity $$e^{\hat{A}_{1}}e^{\hat{A}_{2}}=e^{\left[ \hat{A}_{1},\hat{A}_{2}\right] }e^{\hat{A}_{2}}e^{\hat{A}_{1}}$$which is valid when $\left[ \hat{A}_{1},\hat{A}_{2}\right] $ is a $C$ number, the above product becomes$$\begin{aligned}
&&\exp \left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=0}^{\infty }\widetilde{\lambda }_{n}\alpha _{n}^{\phi ,r}\right) \exp \left(
\sum_{s=1}^{3}\sum_{m=0}^{\infty }a_{-m}^{\phi ,s}x_{m}^{s}\right)
\label{eqnidenlplp} \\
&=&\exp \left[ \sum_{s=1}^{3}\sum_{m=0}^{\infty }\left( -\frac{1}{2}m\widetilde{\lambda }_{m}+a_{-m}^{\phi ,s}\right) x_{m}^{s}\right] \exp
\left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=0}^{\infty }\widetilde{\lambda }_{n}\alpha _{n}^{\phi ,r}\right) \notag\end{aligned}$$Observe that the operator $\overrightarrow{\alpha }$ translates $\overrightarrow{\alpha }^{\dag }$ by $-m\widetilde{\lambda }_{m}/2$. With the help of the identities in (\[eqnGaussianalal\]) and (\[eqnidenlplp\]), the operator product in (\[eqnopproduct1\]) becomes$$\begin{aligned}
&&\exp \left[ \frac{1}{2}\sum_{r,s=1}^{3}\sum_{n,m=0}^{\infty }\left( -\frac{1}{2}n\widetilde{\lambda }_{n}+a_{-n}^{\phi ,r}\right) G_{nm}^{rs}\left( -\frac{1}{2}m\widetilde{\lambda }_{m}+a_{-m}^{\phi ,s}\right) \right] \notag
\\
&&\times \exp \left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=1}^{\infty }\widetilde{\lambda }_{n}\alpha _{n}^{\phi ,r}\right)
\label{eqnlamdatelonvertexbeV}\end{aligned}$$Since $\alpha _{n}^{\phi ,r}|0,0,0)=0$, the above expression when acting on the vacuum of the three strings gives$$\begin{aligned}
&&\exp \left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=0}^{\infty }\widetilde{\lambda }_{n}\alpha _{n}^{\phi ,r}\right) V_{\phi }^{HS}\left(
\alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right)
|0,0,0>_{\phi }= \notag \\
&&\exp \left[ \frac{1}{2}\sum_{r,s=1}^{3}\sum_{n,m=0}^{\infty }\left( -\frac{1}{2}n\widetilde{\lambda }_{n}+a_{-n}^{\phi ,r}\right) G_{nm}^{rs}\left( -\frac{1}{2}m\widetilde{\lambda }_{m}+a_{-m}^{\phi ,s}\right) \right] \notag
\\
|0,0,0 &>&_{\phi }\end{aligned}$$Using the identity$$\sum_{r=1}^{3}G_{nm}^{rs}=\frac{\left( -1\right) ^{n+1}}{n}\delta _{nm}$$and the definition of $\widetilde{\lambda }$, the above expression becomes$$\begin{aligned}
&&\exp \left( -\frac{1}{2}\sum\limits_{r=1}^{3}\sum\limits_{n=0}^{\infty }\widetilde{\lambda }_{n}\alpha _{n}^{\phi ,r}\right) V_{\phi }^{HS}\left(
\alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right)
|0,0,0>_{\phi } \notag \\
&=&N_{2}\exp \left[ \sum_{s=1}^{3}\sum_{m=1}^{\infty }\lambda
_{m}^{s}a_{-m}^{\phi ,s}\right] V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag
},\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) |0,0,0>_{\phi }
\label{eqnlipompaconv}\end{aligned}$$where $N_{2}=\exp \left[ 2^{-3}\sum_{r,s=1}^{3}\sum_{n,m=1}^{\infty }\cos
\left( n\pi /2\right) G_{nm}^{rs}\cos \left( m\pi /2\right) \right] $. Combining equations (\[eqnlipompaconv\]), (\[eqnexpomdpoex\]), and (\[eqnGhost3vertexHS\]), and replacing $\lambda _{n}$ by $\left( 1/n\right)
\cos \left( n\pi /2\right) $, we find$$\begin{aligned}
|V_{\phi }^{HS} &>&=N\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi
_{0}^{r}\right) \exp \left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \notag \\
&&\times V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag
},\alpha ^{\phi ,3\dag }\right) |0,0,0>_{\phi }\end{aligned}$$where $N=N_{1}N_{2}$ is a constant that can be absorbed in the overall normalization constant.
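The identity (\[Eqn IDOPOP\]) invoked above holds whenever $\left[ \hat{A}_{1},\hat{A}_{2}\right] $ is a $C$ number, and it can be checked exactly in a finite-dimensional model of the same situation: the nilpotent matrices below generate a Heisenberg algebra whose commutator is central (a toy stand-in for the oscillator modes, with arbitrary parameter values):

```python
import math

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(A, B, s=1.0):
    return [[x + s * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def expm(A, terms=30):
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    T = [row[:] for row in R]
    for k in range(1, terms):
        T = [[t / k for t in row] for row in mul(T, A)]
        R = add(R, T)
    return R

# Heisenberg-algebra model: [A1, A2] is a multiple of the central element E_13,
# so it commutes with both A1 and A2 -- exactly the hypothesis of the identity.
a, b = 0.7, -0.4
A1 = [[0, a, 0], [0, 0, 0], [0, 0, 0]]
A2 = [[0, 0, 0], [0, 0, b], [0, 0, 0]]
C  = add(mul(A1, A2), mul(A2, A1), -1.0)        # commutator [A1, A2]

lhs = mul(expm(A1), expm(A2))
rhs = mul(expm([[x / 2 for x in row] for row in C]), expm(add(A1, A2)))
err = max(abs(x - y) for rl, rr in zip(lhs, rhs) for x, y in zip(rl, rr))
assert err < 1e-12
```

Because the matrices are nilpotent, the exponential series terminate and the agreement is exact up to floating-point rounding.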
Before we can apply the Virasoro generators to the vertex, we first need to commute the annihilation operators in the Virasoro generators through the ghost insertions. Thus we need to consider$$\alpha _{n}^{\phi ,r}\exp \left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \text{, }n=1,2,3,...$$If we write $\alpha _{n}^{\phi ,r}$ as $\alpha _{n}^{\phi ,r}=\left(
\partial /\partial \rho _{n}^{r}\right) \exp \left(
\sum_{s=1}^{3}\sum_{m=0}^{\infty }\rho _{m}^{s}\alpha _{m}^{s}\right) |_{\overrightarrow{\rho }=0}$ and use the operator identity in (\[Eqn IDOPOP\]), then the above expression becomes$$\begin{aligned}
\alpha _{n}^{\phi ,r}\exp \left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) &=&\exp
\left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \notag \\
&&\times \left( \alpha _{n}^{\phi r}+\cos \frac{n\pi }{2}\right)\end{aligned}$$where $n=1,2,3,...$. For $p_{0}^{\phi ,r}$, we have $$p_{0}^{\phi ,r}\exp \left( \frac{1}{2}i\sum_{s=1}^{3}\phi _{0}^{r}\right)
=\exp \left( \frac{1}{2}i\sum_{s=1}^{3}\phi _{0}^{s}\right) \left(
p_{0}^{\phi ,r}+\frac{1}{2}\right)$$Thus we see that the effect of commuting the annihilation operators in the Virasoro generators through the ghost insertions results in a shift of the annihilation operator in the Virasoro generator by$$\alpha _{n}^{\phi ,r}\rightarrow \left( \alpha _{n}^{\phi ,r}+\cos \frac{n\pi }{2}\right) \label{eqnInserID1}$$for $n=1,2,3,...$, and$$p_{0}^{\phi ,r}\rightarrow \left( p_{0}^{\phi ,r}+\frac{1}{2}\right)
\label{eqnInserID2}$$Notice that this shift is independent of the string index $r$.
To commute the annihilation part of the $\alpha $ operators in the Virasoro generators $L$, we need to commute $\alpha _{n}^{r}$ through the creation part of the vertex; that is, we need$$\alpha _{k}^{t}V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi
,2\dag },\alpha ^{\phi ,3\dag }\right)$$The above expression may be written as$$\begin{aligned}
&&\alpha _{k}^{t}V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi
,2\dag },\alpha ^{\phi ,3\dag }\right) \\
&=&\alpha _{k}^{t}\exp \left( \frac{1}{2}\sum_{r,s=1}^{3}\sum_{n,m=0}^{\infty }a_{-n}^{\phi ,r}G_{nm}^{rs}a_{-m}^{\phi ,s}\right) \notag \\
&=&\frac{\partial }{\partial \rho _{k}^{t}}\left\{ \exp \left(
\sum\limits_{s=1}^{3}\sum\limits_{m=0}^{\infty }\rho _{m}^{s}\alpha
_{m}^{\phi ,s}\right) \exp \left( \frac{1}{2}\sum_{r,s=1}^{3}\sum_{n,m=0}^{\infty }a_{-n}^{\phi ,r}G_{nm}^{rs}a_{-m}^{\phi ,s}\right) \right\} |_{\overrightarrow{\rho }=0} \notag\end{aligned}$$The expression inside the curly brackets is identical to equation (\[eqnopproduct1\]) with $\rho _{m}^{s}$ replacing $-\frac{1}{2}\widetilde{\lambda }_{m}$; thus the result can be obtained from (\[eqnlamdatelonvertexbeV\]) with the same replacement; hence the above expression becomes $$\begin{aligned}
&&\alpha _{k}^{t}V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi
,2\dag },\alpha ^{\phi ,3\dag }\right) =\frac{\partial }{\partial \rho
_{k}^{t}}\exp \left[ \frac{1}{2}\sum_{r,s=1}^{3}\sum_{n,m=0}^{\infty }\left(
\left( n+\delta _{n0}\right) \rho _{n}^{r}+a_{-n}^{\phi ,r}\right) \right.
\notag \\
&&\left. G_{nm}^{rs}\left( \left( m+\delta _{m0}\right) \rho
_{m}^{s}+a_{-m}^{\phi ,s}\right) \right] \exp \left(
\sum\limits_{s=1}^{3}\sum\limits_{m=0}^{\infty }\rho _{m}^{s}\alpha
_{m}^{\phi ,s}\right) |_{\overrightarrow{\rho }=0}
\label{eqnIMPOIDCALFTbefore}\end{aligned}$$So we have succeeded in commuting the annihilation operator $\alpha
_{n}^{\phi ,r}$ through the creation part of the vertex. Since $\alpha
_{n}^{\phi ,r}$ annihilates the ghost part of the vacuum of the three strings, then$$\exp \left( \sum\limits_{s=1}^{3}\sum\limits_{m=0}^{\infty }\rho
_{m}^{s}\alpha _{m}^{\phi ,s}\right) |_{\overrightarrow{\rho }=0}|0>_{123}^{\phi }=|0>_{123}^{\phi } \notag$$and equation (\[eqnIMPOIDCALFTbefore\]) becomes$$\begin{aligned}
&&\alpha _{n}^{r}V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi
,2\dag },\alpha ^{\phi ,3\dag }\right) |0>_{123}^{\phi } \notag \\
&=&\left( n+\delta _{n0}\right)
\sum\limits_{s=1}^{3}\sum\limits_{m=0}^{\infty }G_{nm}^{rs}a_{-m}^{\phi
,s}V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha
^{\phi ,3\dag }\right) |0>_{123}^{\phi } \label{eqnIMPOIDCALFTVF}\end{aligned}$$where $n,m=0,1,2,3,...$. In fact this relation is equivalent to the $\phi $ and $p_{\phi }$ overlaps on $V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag
},\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) |0>_{123}^{\phi }$. This identity will be very useful, as we shall see shortly, in commuting all the annihilation operators $\alpha ^{\phi }$'s in the Virasoro generators $L_{n}$ through $V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag
},\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) $.
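The content of (\[eqnIMPOIDCALFTVF\]) is transparent in a one-mode model: for a single oscillator with $\left[ a,a^{\dag }\right] =1$ and a Gaussian state $e^{\frac{g}{2}\left( a^{\dag }\right) ^{2}}|0>$, commuting $a$ through the quadratic exponential trades it for $g\,a^{\dag }$ acting on the same state. The sketch below (the truncated Fock space and the value of $g$ are assumptions of the check, not part of the construction) verifies this componentwise:

```python
import math

N, g = 30, 0.25                       # Fock truncation and Gaussian width (toy values)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

a  = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
ad = [[math.sqrt(i) if i == j + 1 else 0.0 for j in range(N)] for i in range(N)]

# |V> = exp(g/2 (a+)^2)|0> = sum_k (g/2)^k sqrt((2k)!) / k! |2k>
v = [0.0] * N
for k in range(N // 2):
    v[2 * k] = (g / 2) ** k * math.sqrt(math.factorial(2 * k)) / math.factorial(k)

lhs = matvec(a, v)                    # a |V>
rhs = [g * x for x in matvec(ad, v)]  # g a+ |V>
# Compare away from the truncation edge, where the cutoff is felt.
err = max(abs(lhs[i] - rhs[i]) for i in range(N - 2))
assert err < 1e-12
```

In the multi-string case the single constant $g$ is replaced by the matrix $G_{nm}^{rs}$ and the mode factor $\left( n+\delta _{n0}\right) $ accounts for the normalization $\left[ \alpha _{n},\alpha _{-n}\right] =n$.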
Let us first commute $L_{m}^{\phi ,r}$ through the ghost insertions. Using the identities in (\[eqnInserID1\]) and (\[eqnInserID2\]), we find
$$\begin{aligned}
&&L_{m}^{\phi ,r}\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right)
\exp \left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \notag \\
&=&\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right) \exp \left(
\sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \left\{ \left[ L_{m}^{\phi ,r}+\frac{3}{2}m\alpha _{m}^{\phi ,r}\right] \right. + \notag \\
&&\left[ \sum_{k=1}^{\infty }\alpha _{-k}^{\phi ,r}\cos \frac{\left(
k+m\right) \pi }{2}+\sum_{k=1}^{m-1}\alpha _{k}^{r,\phi }\cos \frac{\left(
m-k\right) \pi }{2}+p_{0}^{\phi ,r}\cos \frac{m\pi }{2}+\right.
\label{eqnHasla1} \\
&&\left. \left( \frac{1-3m}{2}\right) \alpha _{m}^{r,\phi }\right] +\left. \left[ \frac{1}{2}\sum_{k=1}^{m-1}\cos \pi \frac{m-k}{2}\cos \frac{k\pi }{2}+\left( \frac{1}{2}-\frac{3}{2}m\right) \cos \frac{m\pi }{2}\right] \right\}
\notag\end{aligned}$$
where $m=1,2,...$. In obtaining the above result we made use of the fact that $\sum_{k=1}^{m-1}\alpha _{m-k}^{\phi ,r}\cos \frac{k\pi }{2}=\sum_{k=1}^{m-1}\alpha _{k}^{r,\phi }\cos \pi \frac{m-k}{2}$. Observe that the quadratic part[^3] (i.e., the expression in the first square bracket, $L_{m}^{\phi ,r}+\frac{3}{2}m\alpha _{m}^{\phi ,r}$) is identical to the Virasoro generator for the orbital part $L_{m}^{x,r}$. Thus its action on $V_{\phi }^{HS}\left( \alpha
^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) $ $|0>_{123}^{\phi }$ is identical to the action of $L_{m}^{x,r}$ on the coordinate part of the vertex because $V_{\phi }^{HS}\left( \alpha ^{\phi
,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) $ and $V_{x}^{HS}\left( \alpha ^{1\dag },\alpha ^{2\dag },\alpha ^{3\dag }\right) $ have exactly the same structure. The expression in the second square brackets is linear in oscillators, and the expression in the third square brackets has no oscillators. We still need to compute the effect of passing the expression in the second brackets through the exponential of the quadratic form in the ghost creation operators. But before we do that, let us first see the effect of passing $L_{-m}^{\phi ,r}$ through the mid-point insertions. Since $L_{-m}^{\phi ,r}\equiv L_{m}^{\phi ,r\dag }$, taking the adjoint of (\[eqnVirosorGm\]), we find $$\begin{aligned}
&&L_{-m}^{\phi ,r}\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right)
\exp \left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \notag \\
&=&\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right) \exp \left(
\sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \left\{ \left[ L_{-m}^{\phi ,r}+\frac{3}{2}m\alpha _{-m}^{\phi ,r}\right] \right. \notag \\
&&\left. +\left[ \sum_{k=1}^{\infty }\alpha _{-k-m}^{\phi ,r}\cos \frac{k\pi
}{2}+\left( \frac{1}{2}-\frac{3}{2}m\right) \alpha _{-m}^{\phi ,r}\right]
\right\} \label{eqnHasla2}\end{aligned}$$where $m=1,2,...$. Once more, the quadratic part[^4] inside the first square brackets, that is, $L_{-m}^{\phi ,r}+\frac{3}{2}m\alpha _{-m}^{\phi ,r}$, is identical to the orbital part of the Virasoro generator $L_{-m}^{x,r}$, and so its effect on the exponential of the quadratic form in the ghost creation operators is the same as the action of $L_{-m}^{x,r}$ on the orbital part of the vertex. The effect of the expression in the second square brackets needs to be computed, which we shall do later. For the zero mode of the ghost Virasoro generators, we find
$$\begin{aligned}
&&L_{0}^{\phi ,r}\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right)
\exp \left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \notag \\
&=&\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right) \exp \left(
\sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \left\{ \left[ L_{0}^{\phi ,r}+\frac{1}{8}\right] \right. \notag \\
&&+\left. \left[ \frac{1}{2}p_{0}^{\phi ,r}+\sum_{k=1}^{\infty }\cos \frac{k\pi }{2}\alpha _{-k}^{\phi ,r}\right] +\left[ \frac{1}{2}\cdot \frac{1}{2^{2}}-\frac{1}{8}\right] \right\} \label{eqnHasla0}\end{aligned}$$
Once again, the expression inside the first square brackets is identical to that for the orbital zero mode of the Virasoro generator $L_{0}^{x,r}$; hence its action is the same as that of $L_{0}^{x,r}$ on the orbital part of the vertex. The expressions in the second and the third square brackets have no counterparts in the orbital part $L_{0}^{x,r}$. Using equations (\[eqnHasla1\]), (\[eqnHasla2\]), and (\[eqnHasla0\]) to commute the ghost part of the Ward-like operator through the mid-point insertions, we find$$\begin{aligned}
&&W_{m}^{\phi ,r}\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right)
\exp \left( \sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \notag \\
&=&\exp \left( \frac{1}{2}i\sum_{r=1}^{3}\phi _{0}^{r}\right) \exp \left(
\sum_{r=1}^{3}\sum_{n=1}^{\infty }\frac{1}{n}\cos \left( \frac{n\pi }{2}\right) \alpha _{-n}^{\phi ,r}\right) \notag \\
&&\left[ W_{m}^{\phi ,r}\left( 2\right) +W_{m}^{\phi ,r}\left( 1\right)
+\kappa _{m}^{\phi ,r}\left( 1\right) \right]\end{aligned}$$where$$\begin{aligned}
&&W_{m}^{\phi ,r}\left( 2\right) \equiv \sum_{k=1}^{\infty }\alpha
_{-k}^{\phi ,r}\alpha _{k+m}^{r,\phi }+\frac{1}{2}\sum_{k=1}^{m-1}\alpha
_{m-k}^{\phi ,r}\alpha _{k}^{r,\phi }+p_{0}^{\phi ,r}\alpha _{m}^{r,\phi } \\
&&+\sum_{s=1}^{3}\sum_{n=1}^{\infty }m\widetilde{N}_{mn}^{rs}\left[
\sum_{k=1}^{\infty }\alpha _{-k-n}^{\phi ,s}\alpha _{k}^{s,\phi }+\frac{1}{2}\sum_{k=1}^{m-1}\alpha _{-k}^{\phi ,s}\alpha _{-n+k}^{\phi ,s}+p_{0}^{\phi
,s}\alpha _{-n}^{\phi ,s}\right] \\
&&+\sum_{s=1}^{3}m\widetilde{N}_{m0}^{rs}\left[ \frac{1}{2}\left(
p_{0}^{\phi ,s}\right) ^{2}+\sum_{k=1}^{\infty }\alpha _{-k}^{\phi ,s}\alpha
_{k}^{\phi ,s}\right] \\
&=&\left[ L_{m}^{\phi ,r}+\frac{3}{2}m\alpha _{m}^{\phi ,r}\right]
+\sum_{s=1}^{3}\sum_{n=1}^{\infty }m\widetilde{N}_{mn}^{rs}\left[
L_{-n}^{\phi ,r}+\frac{3}{2}n\alpha _{-n}^{\phi ,r}\right] \\
&&+\sum_{s=1}^{3}m\widetilde{N}_{m0}^{rs}\left[ L_{0}^{\phi ,s}+\frac{1}{8}\right]\end{aligned}$$
$$\begin{aligned}
&&W_{m}^{\phi ,r}\left( 1\right) \equiv \left[ \sum_{k=1}^{\infty }\alpha
_{-k}^{\phi ,r}\cos \pi \frac{k+m}{2}+\sum_{k=1}^{m-1}\alpha _{k}^{r,\phi
}\cos \pi \frac{m-k}{2}+p_{0}^{\phi ,r}\cos \frac{m\pi }{2}\right. \notag \\
&&\left. +\left( \frac{1}{2}-\frac{3}{2}m\right) \alpha _{m}^{r,\phi }\right]
+\sum_{s=1}^{3}\sum_{n=1}^{\infty }m\widetilde{N}_{mn}^{rs}\left[
\sum_{k=1}^{\infty }\alpha _{-k-n}^{\phi ,s}\cos \frac{k\pi }{2}\right.
\notag \\
&&\left. +\left( \frac{1}{2}-\frac{3}{2}n\right) \alpha _{-n}^{\phi ,s}\right] +\sum_{s=1}^{3}m\widetilde{N}_{m0}^{rs}\left[ \frac{1}{2}p_{0}^{\phi
,s}+\sum_{k=1}^{\infty }\cos \frac{k\pi }{2}\alpha _{-k}^{\phi ,s}\right] ,\end{aligned}$$
and$$\begin{aligned}
\kappa _{m}^{\phi ,r}\left( 1\right) &\equiv &\left[ \frac{1}{2}\sum_{k=1}^{m-1}\cos \pi \frac{m-k}{2}\cos \frac{k\pi }{2}+\left( \frac{1}{2}-\frac{3}{2}m\right) \cos \frac{m\pi }{2}\right] \notag \\
+\left[ \frac{1}{2}\cdot \frac{1}{2^{2}}-\frac{1}{8}\right] &=&\text{\ }\frac{1+\left( -1\right) ^{m}}{2}\frac{5}{2}\frac{m}{2}\left( -1\right)
^{m/2} \label{eqnk(1)anomaly}\end{aligned}$$It is important to notice that $W_{m}^{\phi ,r}\left( 2\right) $ has precisely the same structure as the orbital part $W_{m}^{x,r}$ of the Ward-like operator. Thus its action on $V_{\phi }^{HS}\left( \alpha ^{\phi
,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) $ $|0,0,0)_{\phi }$ is identical to the action of the orbital part of the Ward-like operator on the orbital part of the vertex. The action of $W_{m}^{x,r}$ on the orbital part of the vertex may be computed with the help of the results obtained in [@A-A-B-I]. Thus, the anomaly of the quadratic part $W_{m}^{\phi ,r}\left( 2\right) $ is$$\kappa _{m}^{\phi ,r}\left( 2\right) =-\frac{1}{2}\sum_{k=1}^{m-1}k\left(
m-k\right) G_{m-k\text{ }k}^{rr}$$We have seen in Appendix B that $G_{nm}^{rr}$ vanishes for all odd $n+m$ (this is a consequence of the cyclic property of the $G$ coefficients); hence the above expression reduces to$$\kappa _{m}^{\phi ,r}\left( 2\right) =\frac{1+\left( -1\right) ^{m}}{2}\left[
-\frac{1}{2}\sum_{k=1}^{m-1}k\left( m-k\right) G_{m-k\text{ }k}^{rr}\right]
\label{eqnK _m(2)}$$The closed form of the finite sum may be obtained by considering the first few values for $m$. For $m=2$, we have$$\kappa _{2}^{\phi ,r}\left( 2\right) =-\frac{1}{2}G_{11}^{rr}$$From ref. [@Gassem], we have$$G_{11}^{rr}=-\frac{a_{1}b_{1}}{3}-\frac{1}{\pi }\sqrt{\frac{1}{3}}\left(
a_{1}\widetilde{E}_{1}^{b}-b_{1}\widetilde{E}_{1}^{a}\right)$$The explicit values of $\widetilde{E}_{1}^{a}$ and $\widetilde{E}_{1}^{b}$ were considered by many authors [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem]. Thus using these values and the values $a_{1}=2/3$ and $b_{1}=4/3$, we find$$G_{11}^{rr}=-\frac{5}{3^{3}} \label{eqn Grr11}$$and so$$\kappa _{2}^{\phi ,r}\left( 2\right) =-\frac{1}{2}\left[ -\frac{5}{3^{3}}\right] =\frac{1}{2}\left( -\frac{5}{3^{3}}\right) \frac{2}{2}\left(
-1\right) ^{2/2}$$For $m=4$, equation (\[eqnK \_m(2)\]) gives$$\kappa _{4}^{\phi ,r}\left( 2\right) =-\frac{1}{2}\left[ 3G_{3\text{ }1}^{rr}+4G_{22}^{rr}+3G_{1\text{ }3}^{rr}\right] \label{eqnK_m=4}$$The values of the $G$’s are given by [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem]$$G_{13}^{rr}=G_{31}^{rr}=\frac{2^{5}}{3^{6}}$$and$$G_{22}^{rr}=\frac{2a_{2}b_{2}}{12}+\frac{1}{\pi }\sqrt{\frac{1}{3}}\left[
a_{2}\widetilde{S}_{2}^{b}-b_{2}\widetilde{S}_{2}^{a}\right]
\label{eqnG22need}$$where
$${}\widetilde{S}_{2}^{a}=\left[ \widetilde{S}_{1}^{a}-\frac{2}{9}\sqrt{3}\pi \right] \frac{3}{2}a_{2}-\frac{1}{3}\sqrt{3}\pi \left( -a_{1}a_{1}+\frac{1}{2}a_{2}\right) -\frac{1}{4}\sqrt{3}\pi \left( \frac{1}{2}a_{1}\text{ }\right)
\text{\ \ }$$
$${}{}\widetilde{S}_{2}^{b}=\left[ \widetilde{S}_{1}^{b}-\frac{4}{9}\sqrt{3}\pi \right] \frac{3}{4}b_{2}-\frac{1}{3}\sqrt{3}\pi \left( -b_{1}b_{1}+\frac{1}{2}b_{2}\right) -\frac{3}{8}\sqrt{3}\pi \left( \frac{1}{2}b_{1}\right)
\text{\ }$$
Using the explicit values of the $a$’s, $b$’s and $\widetilde{S}_{1}^{a}$, we find [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem]$${}\widetilde{S}_{2}^{a}=-\frac{1}{36}\sqrt{3}\pi \left( 4\ln 2-4\ln
3+1\right)$$and$${}{}\widetilde{S}_{2}^{b}=-\frac{1}{36}\sqrt{3}\pi \left( 16\ln 2-16\ln
3+5\right) \text{\ }$$and so equation (\[eqnG22need\]) yields$$G_{22}^{rr}=\frac{13}{2\cdot 3^{5}}$$Now substituting the values of $G_{13}^{rr}$, $G_{31}^{rr}$ and $G_{22}^{rr}$ into (\[eqnK\_m=4\]), we find$$\kappa _{4}^{\phi ,r}\left( 2\right) =-\frac{1}{2}\left[ \frac{10}{27}\right]
=\frac{1}{2}\left( -\frac{5}{3^{3}}\right) \frac{4}{2}\left( -1\right) ^{4/2}$$Continuing this way we see that for $m=even$, the finite sum in (\[eqnK \_m(2)\]) has the value$$\sum_{k=1}^{m-1}k\left( m-k\right) G_{m-k\text{ }k}^{rr}=-\left( -\frac{5}{3^{3}}\right) \frac{m}{2}\left( -1\right) ^{m/2}\text{ \ \ \ \ \ \ }$$Thus we obtain$$\kappa _{m}^{\phi ,r}\left( 2\right) =\frac{1+\left( -1\right) ^{m}}{2}\left[
\frac{1}{2}\left( -\frac{5}{3^{3}}\right) \frac{m}{2}\left( -1\right) ^{m/2}\right] \text{ } \label{eqnanomalyQuadrKmPhi}$$The anomaly for the quadratic coordinate piece has been evaluated by many authors [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem]$$\kappa _{m}^{x,r}\left( 2\right) =\frac{1+\left( -1\right) ^{m}}{2}\left[
\frac{D}{2}\left( -\frac{5}{3^{3}}\right) \frac{m}{2}\left( -1\right) ^{m/2}\text{ }\right] \label{eqnanomalyQuadrKmX}$$Combining equations (\[eqnk(1)anomaly\]), (\[eqnanomalyQuadrKmPhi\]), and (\[eqnanomalyQuadrKmX\]), we see that the total anomaly $$\kappa _{m}^{x,r}\left( 2\right) +\kappa _{m}^{\phi ,r}\left( 2\right)
+\kappa _{m}^{\phi ,r}\left( 1\right) =0$$in the critical dimension $D=26$. This result provides a nontrivial consistency check on the validity of the half string theory.
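The arithmetic in this subsection lends itself to an exact machine check. The following sketch (not part of the original derivation; Python's exact rational arithmetic, with the coefficient values quoted above) verifies $G_{22}^{rr}=13/(2\cdot 3^{5})$, the $m=4$ sum, and the cancellation of the total anomaly precisely at $D=26$:

```python
from fractions import Fraction as F

# Quoted values: a_1 = 2/3, b_1 = 4/3; next Taylor modes a_2 = 2/9, b_2 = 8/9.
a2, b2 = F(2, 9), F(8, 9)

# S~_2^{a,b} = -(sqrt(3) pi / 36) * (c_L*(ln2 - ln3) + c_1), as quoted:
Sa = (F(4), F(1))     # (c_L, c_1) for S~_2^a
Sb = (F(16), F(5))    # (c_L, c_1) for S~_2^b

# a_2 S~_2^b - b_2 S~_2^a : the log pieces must cancel for G_22 to be rational
cL = a2 * Sb[0] - b2 * Sa[0]
c1 = a2 * Sb[1] - b2 * Sa[1]
assert cL == 0

# G_22^{rr} = 2 a_2 b_2 / 12 + (1/pi) sqrt(1/3) * [-(sqrt(3) pi / 36) c_1]
G22 = 2 * a2 * b2 / 12 - c1 / 36
assert G22 == F(13, 486)                   # = 13 / (2 * 3^5)

# kappa_4^{phi,r}(2) = -(1/2) [3 G_31 + 4 G_22 + 3 G_13]
G13 = G31 = F(32, 729)                     # = 2^5 / 3^6
assert 3 * G31 + 4 * G22 + 3 * G13 == F(10, 27)
kappa4 = -F(1, 2) * (3 * G31 + 4 * G22 + 3 * G13)
assert kappa4 == F(1, 2) * F(-5, 27) * F(4, 2) * (-1) ** 2

# For even m the total anomaly vanishes precisely in D = 26:
D = 26
for m in (2, 4, 6, 8):
    k_phi1 = F(5, 2) * F(m, 2) * (-1) ** (m // 2)
    k_phi2 = F(1, 2) * F(-5, 27) * F(m, 2) * (-1) ** (m // 2)
    k_x2 = F(D, 2) * F(-5, 27) * F(m, 2) * (-1) ** (m // 2)
    assert k_x2 + k_phi2 + k_phi1 == 0
```

Note that $\ln 2$ and $\ln 3$ enter only through a combination that cancels between $a_{2}\widetilde{S}_{2}^{b}$ and $b_{2}\widetilde{S}_{2}^{a}$, which is why $G_{22}^{rr}$ comes out rational.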
Now we proceed to consider the linear part of the Ward-like operator. Using the identity in (\[eqnIMPOIDCALFTVF\]), we can commute the Ward-like operator $W_{m}^{\phi ,r}\left( 1\right) $ through $V_{\phi }^{HS}\left(
\alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right)
$, skipping some rather simple algebra, we find $$\begin{aligned}
&&W_{m}^{\phi ,r}\left( 1\right) V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag
},\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) |0>_{123}^{\phi } \\
&=&\left\{ \sum_{k=1}^{\infty }\alpha _{-k}^{\phi ,r}\cos \pi \frac{k+m}{2}+\sum_{k=1}^{m-1}\sum_{s=1}^{3}\sum_{q=0}^{\infty }kG_{kq}^{rs}a_{-q}^{\phi
,s}\cos \pi \frac{m-k}{2}\right. \\
&&+p_{0}^{\phi ,r}\cos \frac{m\pi }{2}+\left( \frac{1}{2}-\frac{3}{2}m\right) \sum_{s=1}^{3}\sum_{q=0}^{\infty }mG_{mq}^{rs}a_{-q}^{\phi ,s} \\
&&+\sum_{s=1}^{3}\sum_{n=1}^{\infty }m\widetilde{N}_{mn}^{rs}\left[
\sum_{k=1}^{\infty }\alpha _{-k-n}^{\phi ,s}\cos \frac{k\pi }{2}+\left(
\frac{1}{2}+\frac{3}{2}n\right) \alpha _{-n}^{\phi ,s}\right] + \\
&&\left. \sum_{s=1}^{3}m\widetilde{N}_{m0}^{rs}\left[ \frac{1}{2}p_{0}^{\phi
,s}+\sum_{k=1}^{\infty }\cos \frac{k\pi }{2}\alpha _{-k}^{\phi ,s}\right]
\right\} V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag
},\alpha ^{\phi ,3\dag }\right) |0>_{123}^{\phi }\end{aligned}$$Making the identification $\alpha _{0}^{\phi ,s}=p_{0}^{\phi ,s}$ and then using the fact that$$\sum_{q=1}^{m-1}qG_{q0}^{rs}\cos \pi \frac{m-q}{2}=\sum_{q=1}^{m-1}\left(
m-q\right) G_{m-q\text{ }0}^{rs}\cos \pi \frac{q}{2}\text{,}$$the above expression becomes
$$\begin{aligned}
&&\left\{ \sum_{s=1}^{3}\sum_{k=1}^{\infty }\left( \delta ^{rs}\cos \pi
\frac{k+m}{2}\right) \alpha _{-k}^{\phi ,s}+\sum_{s=1}^{3}\sum_{k=1}^{\infty
}\left( \sum_{q=1}^{m-1}qG_{qk}^{rs}\cos \pi \frac{m-q}{2}\right)
a_{-k}^{\phi ,s}\right. \\
&&+\sum_{s=1}^{3}\sum_{k=1}^{\infty }\left( \frac{1}{2}-\frac{3}{2}m\right)
mG_{mk}^{rs}a_{-k}^{\phi ,s}+\sum_{s=1}^{3}\sum_{k=1}^{\infty }\left( m\widetilde{N}_{m0}^{rs}\cos \frac{k\pi }{2}\right) \alpha _{-k}^{\phi ,s} \\
&&+\sum_{s=1}^{3}\sum_{n=1}^{\infty }m\widetilde{N}_{mn}^{rs}\sum_{k=1}^{\infty }\alpha _{-k-n}^{\phi ,s}\cos \frac{k\pi }{2}+\sum_{s=1}^{3}\sum_{k=1}^{\infty }m\widetilde{N}_{mk}^{rs}\left( \frac{1}{2}+\frac{3}{2}k\right) \alpha _{-k}^{\phi ,s} \\
&&+\sum_{s=1}^{3}\left[ \delta ^{rs}\cos \frac{m\pi }{2}+\frac{1}{2}m\widetilde{N}_{m0}^{rs}+\left( \frac{1}{2}-\frac{3}{2}m\right)
mG_{m0}^{rs}\right. \\
&&\left. +\sum_{q=1}^{m-1}\left( m-q\right) G_{m-q\text{ }0}^{rs}\cos \pi
\frac{q}{2}\right] \left. p_{0}^{\phi ,s}\right\} V_{\phi }^{HS}\left(
\alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right)
\left\vert 0\right\rangle _{123}^{\phi }\end{aligned}$$
If we now let $k+n\longrightarrow q$ in the double sum $\sum_{n=1}^{\infty
}\sum_{k=1}^{\infty }\left( \ldots \right) $, so that$$\begin{aligned}
&&\sum_{n=1}^{\infty }m\widetilde{N}_{mn}^{rs}\sum_{k=1}^{\infty }\alpha
_{-k-n}^{\phi ,s}\cos \frac{k\pi }{2} \\
&=&\sum_{n=1}^{\infty }\sum_{k=1+n}^{\infty }m\widetilde{N}_{mn}^{rs}\alpha
_{-k}^{\phi ,s}\cos \pi \frac{k-n}{2}=\sum_{k=1}^{\infty }\sum_{n=1}^{k-1}m\widetilde{N}_{mn}^{rs}\alpha _{-k}^{\phi ,s}\cos \pi \frac{k-n}{2}\end{aligned}$$(to see the last equality you only need to expand both sides and compare terms), we obtain$$\begin{aligned}
&&W_{m}^{\phi ,r}\left( 1\right) V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag
},\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) \left\vert
0,0,0\right\rangle _{\phi } \notag \\
&=&\left[ \sum_{s=1}^{3}\sum_{k=1}^{\infty }\Omega _{mk}^{rs}\alpha
_{-k}^{\phi ,s}+\sum_{s=1}^{3}\Omega _{m0}^{rs}\text{ }p_{0}^{\phi ,s}\right]
\notag \\
&&V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha
^{\phi ,3\dag }\right) \left\vert 0,0,0\right\rangle _{\phi }
\label{eqnidentoperward}\end{aligned}$$where for $k,m=1,2,3,...$$$\begin{aligned}
\Omega _{mk}^{rs} &\equiv &\delta ^{rs}\cos \frac{\left( k+m\right) \pi }{2}+\sum_{n=1}^{m-1}nG_{nk}^{rs}\cos \frac{\left( m-n\right) \pi }{2}+\left(
\frac{1}{2}-\frac{3}{2}m\right) mG_{mk}^{rs}+ \notag \\
&&m\widetilde{N}_{m0}^{rs}\cos \frac{k\pi }{2}+\sum_{n=1}^{k-1}m\widetilde{N}_{mn}^{rs}\cos \frac{\left( k-n\right) \pi }{2}+m\widetilde{N}_{mk}^{rs}\left( \frac{1}{2}+\frac{3}{2}k\right) \label{eqnOmegars;km}\end{aligned}$$and for $k=0,m=1,2,3,...$ $$\begin{aligned}
\Omega _{m0}^{rs} &\equiv &\delta ^{rs}\cos \frac{m\pi }{2}+\frac{1}{2}m\widetilde{N}_{m0}^{rs}+\left( \frac{1}{2}-\frac{3}{2}m\right) mG_{m0}^{rs}
\notag \\
&&+\sum_{n=1}^{m-1}\left( m-n\right) G_{m-n\text{ }0}^{rs}\cos \pi \frac{n}{2} \label{eqnOmegak=0N}\end{aligned}$$The identities in (\[eqnOmegars;km\]) and (\[eqnOmegak=0N\]) are identical to those obtained by Hlousek and Jevicki [@Hlousek-Jevicki] for the full string. In obtaining equation (\[eqnidentoperward\]) we have relabeled the dummy index $q$ as $n$. Thus to establish that$$W_{m}^{\phi ,r}\left( 1\right) V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag
},\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right) |0>_{123}^{\phi }=0\text{,}$$ we need to prove that$$\sum_{s=1}^{3}\sum_{k=1}^{\infty }\Omega _{mk}^{rs}\alpha _{-k}^{\phi
,s}V_{\phi }^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha
^{\phi ,3\dag }\right) |0>_{123}^{\phi }=0$$and$$\sum_{s=1}^{3}\Omega _{m0}^{rs}p_{0}^{\phi ,s}V_{\phi }^{HS}\left( \alpha
^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi ,3\dag }\right)
|0>_{123}^{\phi }=0$$We observe that for $k\neq 0$, the states $\alpha _{-k}^{\phi ,s}V_{\phi
}^{HS}\left( \alpha ^{\phi ,1\dag },\alpha ^{\phi ,2\dag },\alpha ^{\phi
,3\dag }\right) |0>_{123}^{\phi }$ are all linearly independent for $s=1,2,3$ and $k=1,2,...$, thus the only way for this part to vanish is for $\Omega
_{mk}^{rs}$ to be identically zero for all values of $s=1,2,3$ and $k=1,2,\ldots$. That is, we need to show that$$\Omega _{mk}^{rs}=0 \label{eqn GeneralIDforOmegars,mk}$$is true for all values of $r,s=1,2,3$ and $k,m=1,2,\ldots$. For the second part we need to prove that$$\sum_{s=1}^{3}\Omega _{m0}^{rs}p_{0}^{\phi ,s}=0$$for $m=1,2,\ldots$. Using momentum conservation (or more precisely ghost number conservation) to eliminate[^5] $p_{0}^{\phi ,3}$, we see at once that to establish the above equation we only need to show that the identities$$\begin{aligned}
\left( \Omega _{m0}^{r1}-\Omega _{m0}^{r3}\right) &=&0 \label{eqnOmega=0I}
\\
\left( \Omega _{m0}^{r2}-\Omega _{m0}^{r3}\right) &=&0 \label{eqnOmega=0II}\end{aligned}$$are satisfied for $r=1,2,3$, $m=1,2,\ldots$. Proving that $\Omega _{mk}^{rs}$ in equation (\[eqnOmegars;km\]) vanishes for all values of $r,s,m,k$ is quite cumbersome. Unfortunately, there appear to be no shortcuts, and so we are forced to prove $\Omega _{mk}^{rs}=0$ by brute force. The identities in (\[eqnOmega=0I\]) and (\[eqnOmega=0II\]) may be proven with the help of the properties of the coefficients $G_{nm}^{rs}$ and $\widetilde{N}_{nm}^{rs}$ [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem]. First let us concentrate on the identities in (\[eqnOmega=0I\]) and (\[eqnOmega=0II\]). For $m=odd=2k+1>0$, equation (\[eqnOmegak=0N\]) reduces to$$\begin{aligned}
\Omega _{2k+1\text{ }0}^{rs} &=&\frac{1}{2}\left( 2k+1\right) \widetilde{N}_{2k+1\text{ }0}^{rs}+\left( \frac{1}{2}-\frac{3}{2}\left( 2k+1\right) \right)
\left( 2k+1\right) G_{2k+1\text{ }0}^{rs} \notag \\
&&+\sum_{n=1}^{2k}\left( 2k+1-n\right) G_{2k+1-n\text{ }0}^{rs}\cos \pi
\frac{n}{2} \label{eqnOmegak=0/m=odd}\end{aligned}$$For $r=s$, we have $G_{odd\text{ }even}^{rr}=\widetilde{N}_{odd\text{ }even}^{rr}=0$, and so the above expression vanishes: for $n=even$, $G_{2k+1-n\text{ }0}^{rs}=0$, while for $n=odd$, $\cos \pi n/2$ vanishes. Thus for $r=s$, we have$$\Omega _{2k+1\text{ }0}^{rr}=0$$It follows from the above identity and (\[eqnOmega=0I\]) and (\[eqnOmega=0II\]) that to prove the identities in (\[eqnOmega=0I\]) and (\[eqnOmega=0II\]) for $m=odd=2k+1>0$, we have to establish that $$\Omega _{2k+1,0}^{12}=\Omega _{2k+1,0}^{23}=\Omega _{2k+1,0}^{31}=\Omega
_{2k+1,0}^{13}=\Omega _{2k+1,0}^{32}=\Omega _{2k+1,0}^{21}=0
\label{eqnIdentities Odd0r,s}$$It is not hard to check by explicit substitution that the above identities are satisfied for $2k+1=1,3,5,\ldots$. The proof for a general value of $2k+1$ can be established by mathematical induction. We have verified that these identities are in fact satisfied for all values of $2k+1$; however, presenting the proof for all cases would occupy too much space. To illustrate the method we present the complete proof for one of them. Let us consider $$\Omega _{2k+1,0}^{13}=0$$From (\[eqnOmegak=0/m=odd\]) we have$$\begin{aligned}
\Omega _{2k+1\text{ }0}^{13} &=&\frac{1}{2}\left( 2k+1\right) \widetilde{N}_{2k+1\text{ }0}^{13}+\left( \frac{1}{2}-\frac{3}{2}\left( 2k+1\right) \right)
\left( 2k+1\right) G_{2k+1\text{ }0}^{13} \notag \\
&&+\sum_{n=1}^{2k}\left( 2k+1-n\right) G_{2k+1-n\text{ }0}^{13}\cos \pi
\frac{n}{2}\end{aligned}$$From [@Gassem; @Gross-Jevicki-II], we have$$\begin{aligned}
G_{2k+1\text{ }0}^{13} &=&-\frac{1}{\sqrt{3}}\left( -\right) ^{k}\frac{a_{2k+1}}{2k+1} \\
\widetilde{N}_{2k+1\text{ }0}^{13} &=&-\frac{1}{\sqrt{3}}\left( -\right) ^{k}\frac{b_{2k+1}}{2k+1}\end{aligned}$$And so the above equation becomes$$\Omega _{2k+1\text{ }0}^{13}=\frac{1}{\sqrt{3}}\left( -\right) ^{k}\left[ -\frac{1}{2}b_{2k+1}+\left( 3k+1\right) a_{2k+1}-\sum_{m=1}^{k}a_{2k+1-2m}\right]
\label{Omega (13) 2k+10}$$To proceed further, we need to eliminate either $b$ or $a$ from the above expression. This can be established with the help of the mixed recursion relation$$\frac{2}{3}b_{n}=\left( -1\right) ^{n}\left[ \left( n+1\right)
a_{n+1}-2na_{n}+\left( n-1\right) a_{n-1}\right] \label{eqnMixed Recab}$$which can be established with the help of contour integration[@Gassem]. Thus setting $n=2k+1$ in the recursion relation and substituting for $b_{2k+1}$, the above expression in (\[Omega (13) 2k+10\]) reduces to$$\Omega _{2k+1\text{ }0}^{13}=\frac{1}{\sqrt{3}}\left( -\right) ^{k}\left[ \frac{3}{2}\left( k+1\right) a_{2k+2}+\frac{3}{2}ka_{2k}-\frac{1}{2}a_{2k+1}-\sum_{m=1}^{k}a_{2k+1-2m}\right] \label{Omega (13) 2k+10-1}$$The expression inside the square bracket can be shown to vanish for all values of $k=0,1,2,3,...$ with the help of the recursion relation [@Gassem]$$\frac{2}{3}a_{n}=\left( n+1\right) a_{n+1}-\left( n-1\right) a_{n-1}
\label{recurrisionRelaaa}$$To see this, we set $n=2k+1$ in the above recursion relation and then substitute in (\[Omega (13) 2k+10-1\]) to obtain$$\Omega _{2k+1\text{ }0}^{13}=\frac{1}{\sqrt{3}}\left( -\right) ^{k}\left[
3ka_{2k}-\sum_{m=1}^{k}a_{2k+1-2m}\right] \label{Omega (13) 2k+10-2}$$Setting $n=2k-1$ in (\[recurrisionRelaaa\]) and then substituting in (\[Omega (13) 2k+10-2\]), we find$$\Omega _{2k+1\text{ }0}^{13}=\frac{1}{\sqrt{3}}\left( -\right) ^{k}\left[ 3\left(
k-1\right) a_{2k-2}-\sum_{m=2}^{k}a_{2k+1-2m}\right]
\label{Omega (13) 2k+10-3}$$At this point it is not hard to show by explicit substitution that both expressions inside the square brackets in (\[Omega (13) 2k+10-2\]) and (\[Omega (13) 2k+10-3\]) vanish for $k=0,1,2$. Now we assume that both expressions inside the square brackets in (\[Omega (13) 2k+10-2\]) and (\[Omega (13) 2k+10-3\]) vanish for $k$. We let $k\rightarrow k+1$ in (\[Omega (13) 2k+10-3\]) so that the expression inside the square bracket of (\[Omega (13) 2k+10-3\]) becomes$$3ka_{2k}-\sum_{m=2}^{k+1}a_{2k+1+2\left( 1-m\right) }$$Letting $m-1=n$, the above expression becomes$$3ka_{2k}-\sum_{m=2}^{k+1}a_{2k+1+2\left( 1-m\right)
}=3ka_{2k}-\sum_{n=1}^{k}a_{2k+1-2n}=0$$where the second equality follows from the fact that the expression inside the square bracket in (\[Omega (13) 2k+10-2\]) vanishes for all values of $k$. Thus the expression inside the square bracket in (\[Omega (13) 2k+10-3\]) vanishes identically by mathematical induction and the desired result follows at once. In fact using this procedure we have checked that all the identities in (\[eqnIdentities Odd0r,s\]) are satisfied.
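These induction arguments can be cross-checked by machine. The sketch below (not part of the original proof) generates the modes exactly, assuming (consistently with the quoted values $a_{1}=2/3$ and $b_{1}=4/3$) that $a_{n}$ and $b_{n}$ are the Taylor modes of $\left( (1+z)/(1-z)\right) ^{1/3}$ and $\left( (1+z)/(1-z)\right) ^{2/3}$, and verifies both recursion relations as well as the vanishing of the bracket in (\[Omega (13) 2k+10-2\]):

```python
from fractions import Fraction as F

def modes(q, p, N):
    """Exact Taylor modes u_n of ((1+z)/(1-z))^(q/p), n = 0..N."""
    al = F(q, p)
    plus, minus = [F(1)], [F(1)]      # (1+z)^(q/p) and (1-z)^(-q/p)
    for n in range(1, N + 1):
        plus.append(plus[-1] * (al - n + 1) / n)
        minus.append(minus[-1] * (al + n - 1) / n)
    return [sum(plus[i] * minus[n - i] for i in range(n + 1))
            for n in range(N + 1)]

N = 20
a = modes(1, 3, N)   # a_n
b = modes(2, 3, N)   # b_n
assert a[1] == F(2, 3) and b[1] == F(4, 3)

for n in range(1, N):
    # mixed recursion (eqnMixed Recab)
    assert F(2, 3) * b[n] == (-1) ** n * ((n + 1) * a[n + 1]
                                          - 2 * n * a[n] + (n - 1) * a[n - 1])
    # pure recursion (recurrisionRelaaa)
    assert F(2, 3) * a[n] == (n + 1) * a[n + 1] - (n - 1) * a[n - 1]

# bracket of (Omega (13) 2k+10-2): 3k a_{2k} - sum_{m=1}^{k} a_{2k+1-2m} = 0
for k in range(0, N // 2):
    assert 3 * k * a[2 * k] == sum(a[2 * k + 1 - 2 * m]
                                   for m in range(1, k + 1))
```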
For the case of $m=even$, one can show, using a procedure similar to the one used to establish (\[eqnIdentities Odd0r,s\]) together with the following properties of the $G$ and $\widetilde{N}$ coefficients [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem], $$\begin{aligned}
G_{2k\text{ }0}^{13} &=&G_{2k\text{ }0}^{12}\text{, \ \ }G_{2k+1\text{ }0}^{13}=-G_{2k+1\text{ }0}^{12} \\
\widetilde{N}_{2k\text{ }0}^{13} &=&\widetilde{N}_{2k\text{ }0}^{12}\text{,
\ \ \ }\widetilde{N}_{2k+1\text{ }0}^{13}=-\widetilde{N}_{2k+1\text{ }0}^{12}\end{aligned}$$that the only nontrivial identity in equations (\[eqnOmega=0I\]) and (\[eqnOmega=0II\]) is$$\Omega _{2k\text{ }0}^{11}-\Omega _{2k\text{ }0}^{13}=0
\label{eqn(Omega11-Omega13)2k0}$$Using equation (\[eqnOmegak=0N\]), the left hand side of the above equation becomes$$\begin{aligned}
\Omega _{2k\text{ }0}^{11}-\Omega _{2k\text{ }0}^{13} &=&\left( -1\right)
^{k}+k\left( \widetilde{N}_{2k\text{ }0}^{11}-\widetilde{N}_{2k\text{ }0}^{13}\right) +\left( 1-6k\right) k\left( G_{2k\text{ }0}^{11}-G_{2k\text{ }0}^{13}\right) \notag \\
&&+\sum_{n=1}^{k-1}\left( 2k-2n\right) \left( -1\right) ^{n}\left( G_{2k-2n\text{ }0}^{11}-G_{2k-2n\text{ }0}^{13}\right)\end{aligned}$$Using the values of the $G$ and $\widetilde{N}$ coefficients [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem]$$\begin{aligned}
G_{2k\text{ }0}^{11} &=&\frac{2}{3}\left( -\right) ^{k}\frac{a_{2k}}{2k}\text{, \ \ \ \ }G_{2k\text{ }0}^{13}=-\frac{1}{3}\left( -\right) ^{k}\frac{a_{2k}}{2k} \\
G_{2k-2n\text{ }0}^{11} &=&\frac{2}{3}\left( -\right) ^{k-n}\frac{a_{2k-2n}}{2k-2n}\text{, \ \ \ \ }G_{2k-2n\text{ }0}^{13}=-\frac{1}{3}\left( -\right)
^{k-n}\frac{a_{2k-2n}}{2k-2n} \\
\widetilde{N}_{2k\text{ }0}^{11} &=&-\frac{2}{3}\left( -\right) ^{k}\frac{b_{2k}}{2k}\text{, \ \ \ }\widetilde{N}_{2k\text{ }0}^{13}=\frac{1}{3}\left(
-\right) ^{k}\frac{b_{2k}}{2k}\end{aligned}$$the above expression becomes$$\Omega _{2k\text{ }0}^{11}-\Omega _{2k\text{ }0}^{13}=\left( -1\right) ^{k}\left[ 1-\frac{1}{2}b_{2k}+\frac{1}{2}\left( 1-6k\right)
a_{2k}+\sum_{n=1}^{k-1}a_{2k-2n}\right]$$Now to prove (\[eqn(Omega11-Omega13)2k0\]), we have to show that the expression inside the square bracket in the above expression vanishes for all $k$. The proof is very similar to the proof of the identities in (\[eqnIdentities Odd0r,s\]). We can use the mixed recursion relation in (\[eqnMixed Recab\]) to eliminate $b_{2k}$ from the expression inside the square bracket on the right hand side of the above equation; then with the help of the recursion relation in (\[recurrisionRelaaa\]) one can show, by mathematical induction, that$$\left[ 1-\frac{1}{2}b_{2k}+\frac{1}{2}\left( 1-6k\right)
a_{2k}+\sum_{n=1}^{k-1}a_{2k-2n}\right] =0$$for all values of $k$. Consequently the identity in (\[eqn(Omega11-Omega13)2k0\]) follows at once.
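As with the odd case, this identity can be confirmed numerically. The following sketch (not part of the original proof; it assumes, consistently with the quoted values $a_{1}=2/3$ and $b_{1}=4/3$, that $a_{n}$ and $b_{n}$ are the Taylor modes of $\left( (1+z)/(1-z)\right) ^{1/3}$ and $\left( (1+z)/(1-z)\right) ^{2/3}$) checks the bracket for the first several values of $k$:

```python
from fractions import Fraction as F

def modes(q, p, N):
    """Exact Taylor modes u_n of ((1+z)/(1-z))^(q/p), n = 0..N."""
    al = F(q, p)
    plus, minus = [F(1)], [F(1)]      # (1+z)^(q/p) and (1-z)^(-q/p)
    for n in range(1, N + 1):
        plus.append(plus[-1] * (al - n + 1) / n)
        minus.append(minus[-1] * (al + n - 1) / n)
    return [sum(plus[i] * minus[n - i] for i in range(n + 1))
            for n in range(N + 1)]

N = 20
a, b = modes(1, 3, N), modes(2, 3, N)

# 1 - b_{2k}/2 + (1 - 6k) a_{2k}/2 + sum_{n=1}^{k-1} a_{2k-2n} = 0,  k >= 1
for k in range(1, N // 2 + 1):
    bracket = (1 - F(1, 2) * b[2 * k]
               + F(1, 2) * (1 - 6 * k) * a[2 * k]
               + sum(a[2 * k - 2 * n] for n in range(1, k)))
    assert bracket == 0
```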
We still need to prove the identity in (\[eqn GeneralIDforOmegars,mk\]). Unfortunately this identity is quite difficult to prove; the difficulty arises from the finite sums over the $G$ and $\widetilde{N}$ coefficients. There is no obvious way of proving this identity other than by brute force. Let us start by working out a few specific cases. We first consider $\Omega _{11}^{rs}$. Letting $m=k=1$ in (\[eqnOmegars;km\]), we have$$\Omega _{11}^{rs}=-\delta ^{rs}-G_{11}^{rs}+2\widetilde{N}_{11}^{rs}
\label{eqnOmegars-11}$$For $r=s$, we have $G_{11}^{rr}=-5/3^{3}$ and $\widetilde{N}_{11}^{rr}=11/3^{3}$, so the right hand side of the above equation vanishes for $r=s$. Thus $\Omega _{11}^{rr}=0$ for $r=1,2,3$. For $r=1$, $s=2,3$, we have $G_{11}^{12}=G_{11}^{13}=2^{4}/3^{3}$, $\widetilde{N}_{11}^{12}=\widetilde{N}_{11}^{13}=2^{3}/3^{3}$. Using these values we see that $-\delta ^{rs}-G_{11}^{rs}+2\widetilde{N}_{11}^{rs}=0$ for $r=1$, $s=2,3$, and so we have $\Omega _{11}^{12}=\Omega _{11}^{13}=0$. The cyclic symmetries of $G_{nm}^{rs}$ and $\widetilde{N}_{nm}^{rs}$ now imply that the right hand side of (\[eqnOmegars-11\]) also vanishes for $\left( r,s\right) =\left(
2,3\right) $, $\left( 3,1\right) $, $\left( 3,2\right) $ and $\left(
2,1\right) $. Thus we have established that $\Omega _{11}^{rs}=0$ for all $r$ and $s$. We will work out a few more cases; however, in what follows we shall not quote the explicit expressions for the $G_{nm}^{rs}$ and $\widetilde{N}_{nm}^{rs}$ coefficients [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem], and it will not be much of a task for the reader to look them up. Let us next consider the case when $k+m=odd$. For $k=odd$ and $m=even$, equation (\[eqnOmegars;km\]) yields$$\begin{aligned}
\Omega _{2m\text{ }2k+1}^{rs} &=&\sum_{n=1}^{m-1}2nG_{2n\text{ }2k+1}^{rs}\left( -1\right) ^{m-n}+\left( 1-6m\right) mG_{2m\text{ }2k+1}^{rs}+\sum_{n=1}^{k}2m\times \notag \\
&&\widetilde{N}_{2m\text{ }2n-1}^{rs}\left( -1\right) ^{k-n+1}+m\widetilde{N}_{2m\text{ }2k+1}^{rs}\left( 1+3\left( 2k+1\right) \right)
\label{eqnOmegars2m-2k+1}\end{aligned}$$First consider $r=s$; in this case $G_{n+m=odd}^{rr}=\widetilde{N}_{n+m=odd}^{rr}=0$ and the right hand side vanishes; hence $\Omega _{2m\text{ }2k+1}^{rr}=0$. Likewise one can show that $\Omega _{2m+1\text{ }2k}^{rr}=0$. Next we consider $\Omega _{2m\text{ }2k+1}^{12}=\Omega _{2m\text{ }2k+1}^{23}=\Omega _{2m\text{ }2k+1}^{31}$, where the equality follows from the cyclic symmetry of the $G_{nm}^{rs}$ and $\widetilde{N}_{nm}^{rs}$ coefficients. Thus it suffices in this case to show that $\Omega _{2m\text{ }2k+1}^{12}=0$. To prove this, put $r=1$, $s=2$ in equation (\[eqnOmegars2m-2k+1\])$$\begin{aligned}
\Omega _{2m\text{ }2k+1}^{12} &=&\left( -1\right) ^{m}\sum_{n=1}^{m-1}2nG_{2n\text{ }2k+1}^{12}\left( -1\right) ^{n}+\left( 1-6m\right) mG_{2m\text{ }2k+1}^{12}-\left( -1\right) ^{k} \notag \\
&&\sum_{n=0}^{k-1}2m\widetilde{N}_{2m\text{ }2n+1}^{12}\left( -1\right)
^{n+1}+m\widetilde{N}_{2m\text{ }2k+1}^{12}\left( 1+3\left( 2k+1\right)
\right) \label{eqnOmega12-2m-2k+1}\end{aligned}$$At this point we could use the explicit values [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem] of the $G_{2n\text{ }2k+1}^{12}$, $G_{2m\text{ }2k+1}^{12}$, $\widetilde{N}_{2m\text{ }2n-1}^{12}$ and $\widetilde{N}_{2m\text{ }2k+1}^{12}$ coefficients and carry the calculation to the end; however, this would take too much space. So at this stage let us work out specific cases. For $m=1$, $k=0$, the above expression becomes$$\Omega _{2\text{ }1}^{12}=-5G_{2\text{ }1}^{12}+4\widetilde{N}_{2\text{ }1}^{12}$$Since $G_{2\text{ }1}^{12}=-2^{5}/3^{4}\sqrt{3}$, $\widetilde{N}_{2\text{ }1}^{12}=-2^{3}\cdot 5/3^{4}\sqrt{3}$, the right hand side of the above equation is identically zero. Hence, we have established that $\Omega _{2\text{ }1}^{12}=0$. For $m=1$, $k=1$, equation (\[eqnOmega12-2m-2k+1\]) gives$$\Omega _{2\text{ }3}^{12}=-5G_{2\text{ }3}^{12}-2\widetilde{N}_{2\text{ }1}^{12}+10\widetilde{N}_{2\text{ }3}^{12}$$The $G$ and $\widetilde{N}$ coefficients are given by [@Gross-Jevicki-II; @Hlousek-Jevicki; @ABA; @BAA-N; @Gassem] $G_{2\text{ }3}^{12}=-4\cdot 2^{3}\cdot 5/3^{6}\sqrt{3}$, $\widetilde{N}_{2\text{ }1}^{12}=-2^{3}\cdot 5/3^{4}\sqrt{3}$, $\widetilde{N}_{2\text{ }3}^{12}=-19\cdot 2^{3}/3^{6}\sqrt{3}$. Substituting these values in the above expression yields $\Omega _{2\text{ }3}^{12}=0$.
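These low-order checks can be automated. The sketch below (not part of the original text) evaluates $\sqrt{3}\,G_{2n\text{ }2k+1}^{12}$ and $\sqrt{3}\,\widetilde{N}_{2n\text{ }2k+1}^{12}$ directly from the bilinear expressions in the $a$'s and $b$'s quoted in the next paragraph, so that all arithmetic is exact, and confirms the vanishing of $\Omega _{11}^{rs}$, $\Omega _{2\text{ }1}^{12}$ and $\Omega _{2\text{ }3}^{12}$ (it assumes, consistently with $a_{1}=2/3$ and $b_{1}=4/3$, that $a_{n}$, $b_{n}$ are the Taylor modes of $\left( (1+z)/(1-z)\right) ^{1/3}$, $\left( (1+z)/(1-z)\right) ^{2/3}$):

```python
from fractions import Fraction as F

def modes(q, p, N):
    """Exact Taylor modes u_n of ((1+z)/(1-z))^(q/p), n = 0..N."""
    al = F(q, p)
    plus, minus = [F(1)], [F(1)]
    for n in range(1, N + 1):
        plus.append(plus[-1] * (al - n + 1) / n)
        minus.append(minus[-1] * (al + n - 1) / n)
    return [sum(plus[i] * minus[n - i] for i in range(n + 1))
            for n in range(N + 1)]

a, b = modes(1, 3, 10), modes(2, 3, 10)

def G12(e, o):   # sqrt(3) * G_{e o}^{12},  e = 2n even, o = 2m+1 odd
    s = (-1) ** (e // 2 + (o - 1) // 2)
    return F(s, 2) * ((a[e] * b[o] - b[e] * a[o]) / (e + o)
                      + (a[e] * b[o] + b[e] * a[o]) / (e - o))

def N12(e, o):   # sqrt(3) * N~_{e o}^{12}
    s = (-1) ** (e // 2 + (o - 1) // 2)
    return F(s, 2) * ((b[e] * a[o] - a[e] * b[o]) / (e + o)
                      + (b[e] * a[o] + a[e] * b[o]) / (e - o))

# Omega_11 with the quoted values G_11^{rr} = -5/27, N~_11^{rr} = 11/27,
# G_11^{12} = 16/27, N~_11^{12} = 8/27:
assert -1 - F(-5, 27) + 2 * F(11, 27) == 0        # Omega_11^{rr} = 0
assert -F(16, 27) + 2 * F(8, 27) == 0             # Omega_11^{12} = 0

# quoted values (in units of 1/sqrt(3)):
assert N12(2, 1) == F(-40, 81)                    # -2^3*5 / 3^4
assert G12(2, 3) == F(-160, 729)                  # -4*2^3*5 / 3^6
assert N12(2, 3) == F(-152, 729)                  # -19*2^3 / 3^6

assert -5 * G12(2, 1) + 4 * N12(2, 1) == 0        # Omega_{2 1}^{12} = 0
assert -5 * G12(2, 3) - 2 * N12(2, 1) + 10 * N12(2, 3) == 0  # Omega_{2 3}^{12}
```

Note that the bilinear formula yields $G_{2\text{ }1}^{12}=-2^{5}/3^{4}\sqrt{3}$, with an overall minus sign; this is the sign required for $\Omega _{2\text{ }1}^{12}$ to vanish.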
From these examples, we see the difficulty involved in constructing a general proof of these identities. However, we can still describe the steps involved in the proof and give some of the details. Using the properties of the $G_{nm}^{rs}$ and $\widetilde{N}_{nm}^{rs}$ coefficients, one can show that the only nontrivial identities in (\[eqn GeneralIDforOmegars,mk\]) are the following four$$\Omega _{2m\text{ }2k+1}^{12}=\Omega _{2m+1\text{ }2k}^{12}=\Omega _{2m\text{
}2k}^{11}=\Omega _{2m+1\text{ }2k+1}^{11}=0$$and all the other identities are either trivially satisfied or can be deduced from these four. To illustrate the proof of the above identities we consider $\Omega _{2m\text{ }2k+1}^{12}$. Substituting the explicit values [@Gross-Jevicki-II; @ABA; @BAA-N; @Gassem; @Hlousek-Jevicki] for the $G$ and $\widetilde{N}$ coefficients$$\begin{aligned}
&&G_{2n2m+1}^{12} \notag \\
&=&\frac{\left( -\right) ^{n+m}}{2\sqrt{3}}\left[ \frac{a_{2n}b_{2m+1}-b_{2n}a_{2m+1}}{\left( 2n\right) +\left( 2m+1\right) }+\frac{a_{2n}b_{2m+1}+b_{2n}a_{2m+1}}{\left( 2n\right) -\left( 2m+1\right) }\right]
\\
&&\widetilde{N}_{2n2m+1}^{12} \notag \\
&=&\frac{\left( -1\right) ^{n+m}}{2\sqrt{3}}\left[ \frac{b_{2n}a_{2m+1}-a_{2n}b_{2m+1}}{2n+\left( 2m+1\right) }+\frac{b_{2n}a_{2m+1}+a_{2n}b_{2m+1}}{2n-\left( 2m+1\right) }\right] \end{aligned}$$into equation (\[eqnOmega12-2m-2k+1\]), and skipping some algebra, we obtain
$$\begin{aligned}
&&\frac{2\sqrt{3}}{\left( -\right) ^{m+k}}\Omega _{2m\text{ }2k+1}^{12} \\
&=&\sum_{n=0}^{m-1}2n\left[ \frac{a_{2n}b_{2k+1}-b_{2n}a_{2k+1}}{\left(
2n\right) +\left( 2k+1\right) }+\frac{a_{2n}b_{2k+1}+b_{2n}a_{2k+1}}{\left(
2n\right) -\left( 2k+1\right) }\right] \\
&&+\sum_{n=0}^{k-1}2m\left[ \frac{b_{2m}a_{2n+1}-a_{2m}b_{2n+1}}{2m+\left(
2n+1\right) }+\frac{b_{2m}a_{2n+1}+a_{2m}b_{2n+1}}{2m-\left( 2n+1\right) }\right] \\
&&-\frac{2m}{2k-2m+1}\left(
4a_{2m}b_{2k+1}+b_{2m}a_{2k+1}+6ka_{2m}b_{2k+1}-6ma_{2m}b_{2k+1}\right)
\allowbreak\end{aligned}$$
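The explicit substitutions referred to below can be automated; the following sketch (not part of the original text) evaluates the right hand side above with exact rational $a_{n}$, $b_{n}$ (assuming, consistently with $a_{1}=2/3$ and $b_{1}=4/3$, that these are the Taylor modes of $\left( (1+z)/(1-z)\right) ^{1/3}$ and $\left( (1+z)/(1-z)\right) ^{2/3}$) and confirms that it vanishes for the first few low values of $m$ and $k$:

```python
from fractions import Fraction as F

def modes(q, p, N):
    """Exact Taylor modes u_n of ((1+z)/(1-z))^(q/p), n = 0..N."""
    al = F(q, p)
    plus, minus = [F(1)], [F(1)]
    for n in range(1, N + 1):
        plus.append(plus[-1] * (al - n + 1) / n)
        minus.append(minus[-1] * (al + n - 1) / n)
    return [sum(plus[i] * minus[n - i] for i in range(n + 1))
            for n in range(N + 1)]

a, b = modes(1, 3, 12), modes(2, 3, 12)

def rhs(m, k):
    """(2 sqrt(3) / (-1)^{m+k}) * Omega_{2m 2k+1}^{12}, as written above."""
    S1 = sum(2 * n * ((a[2*n]*b[2*k+1] - b[2*n]*a[2*k+1]) / (2*n + 2*k + 1)
                      + (a[2*n]*b[2*k+1] + b[2*n]*a[2*k+1]) / (2*n - 2*k - 1))
             for n in range(m))
    S2 = sum(2 * m * ((b[2*m]*a[2*n+1] - a[2*m]*b[2*n+1]) / (2*m + 2*n + 1)
                      + (b[2*m]*a[2*n+1] + a[2*m]*b[2*n+1]) / (2*m - 2*n - 1))
             for n in range(k))
    # last bracket: (4 + 6k - 6m) a_{2m} b_{2k+1} + b_{2m} a_{2k+1}
    S3 = -F(2 * m, 2*k - 2*m + 1) * ((4 + 6*k - 6*m) * a[2*m] * b[2*k+1]
                                     + b[2*m] * a[2*k+1])
    return S1 + S2 + S3

for m, k in [(1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (3, 0)]:
    assert rhs(m, k) == 0
```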
At this stage we have checked, by explicit substitution of the $a$’s and the $b$’s, that the right hand side vanishes for the first few low values of $m$ and $k$. These consistency checks are important to ensure that we are on the right track. Now we use the mixed recursion relation in (\[eqnMixed Recab\]) to eliminate the $b$’s in favor of the $a$’s; the expression then becomes
$$\begin{aligned}
&&\frac{2\sqrt{3}}{\left( -\right) ^{m+k}}\Omega _{2m\text{ }2k+1}^{12}=\sum_{n=0}^{m-1}\frac{-1}{4k^{2}+4k-4n^{2}+1}\{6na_{2k+1}a_{2n+1}
\notag \\
&&-6na_{2k+1}a_{2n-1}-24n^{2}a_{2n}a_{2k+2}+12n^{2}a_{2k+1}a_{2n-1}+12n^{2}a_{2k+1}a_{2n+1}
\notag \\
&&-24kn^{2}a_{2k}a_{2n}-12kna_{2k+1}a_{2n-1}+12kna_{2k+1}a_{2n+1} \notag \\
&&-24kn^{2}a_{2n}a_{2k+2}+24kn^{2}a_{2k+1}a_{2n-1}+24kn^{2}a_{2k+1}a_{2n+1}\}
\notag \\
+ &&\sum_{n=0}^{k-1}\frac{1}{-4m^{2}+4n^{2}+4n+1}\{12ma_{2m}a_{2n+2}-12ma_{2m}a_{2n+1} \notag \\
&&+48m^{3}a_{2m}a_{2n+1}+12m^{2}a_{2m-1}a_{2n+1}-12m^{2}a_{2m+1}a_{2n+1}
\notag \\
&&-24m^{3}a_{2m-1}a_{2n+1}-24m^{3}a_{2m+1}a_{2n+1} \notag \\
&&-48mna_{2m}a_{2n+1}+36mna_{2m}a_{2n+2}+24mn^{2}a_{2m}a_{2n} \notag \\
&&-48mn^{2}a_{2m}a_{2n+1}+24mn^{2}a_{2m}a_{2n+2}+12mna_{2m}a_{2n}\} \notag
\\
&&+2\frac{m}{2k-2m+1}\{12a_{2m}a_{2k+2}-12a_{2m}a_{2k+1}-3a_{2k+1}a_{2n+1}
\notag \\
&&+3a_{2k+1}a_{2n+2}-42ka_{2m}a_{2k+1}+30ka_{2m}a_{2k+2}+18ma_{2m}a_{2k+1}
\notag \\
&&-18ma_{2m}a_{2k+2}+3na_{2n}a_{2k+1}+18k^{2}a_{2k}a_{2m}-6na_{2k+1}a_{2n+1}
\notag \\
&&+3na_{2k+1}a_{2n+2}-36k^{2}a_{2m}a_{2k+1} \notag \\
&&+18k^{2}a_{2m}a_{2k+2}+12ka_{2k}a_{2m}+36kma_{2m}a_{2k+1}-18kma_{2m}a_{2k+2}
\notag \\
&&-18kma_{2k}a_{2m}\} \label{eqnOmega122mak+1Factor}\end{aligned}$$
At this point we have checked by explicit substitution that the right hand side is zero for the first few values of $m$ and $k$. We will therefore prove this identity by mathematical induction. Let us assume that the right hand side of the above equation vanishes for given $m$ and $k$; then for $m\rightarrow m+1$ and $k\rightarrow k+1$, the right hand side (RHS) reads
$$\begin{aligned}
&&\text{RHS} \\
&=&\sum_{n=0}^{m}\frac{-6n}{4k^{2}+12k-4n^{2}+9}\{\allowbreak
3a_{2k+3}a_{2n+1}-3a_{2k+3}a_{2n-1}-8na_{2n}a_{2k+4} \\
&&-\allowbreak 2ka_{2k+3}a_{2n-1}+2ka_{2k+3}a_{2n+1}+6na_{2k+3}\allowbreak
a_{2n-1}+6na_{2k+3}a_{2n+1} \\
&&-4kna_{2n}a_{2k+2}-\allowbreak
4kna_{2n}a_{2k+4}+4kna_{2k+3}a_{2n-1}+4kna_{2k+3}a_{2n+1}\} \\
&&+\sum_{n=0}^{k}\frac{1}{-4\left( m+1\right) ^{2}+4n^{2}+4n+1}\{ \\
&&12a_{2m+2}\left( m+1\right) \left(
4a_{2n+1}m^{2}+8a_{2n+1}m+3a_{2n+1}+a_{2n+2}\right) \\
&&-12a_{2n+1}\left( m+1\right) ^{2}\left(
2ma_{2m+1}+2ma_{2m+3}+a_{2m+1}+3a_{2m+3}\right) \\
&&-12na_{2m+2}\left( m+1\right) \left(
4na_{2n+1}-2na_{2n}+4a_{2n+1}-3a_{2n+2}\right) \\
&&+12n\left( 2na_{2n+2}+a_{2n}\right) a_{2m+2}\left( m+1\right) \} \\
&&+\frac{2\left( m+1\right) }{2\left( k+1\right) -2\left( m+1\right) +1}\{12a_{2k+4}a_{2m+2}-3a_{2k+3}a_{2n+1} \\
&&-12a_{2k+3}a_{2m+2}+3a_{2k+3}a_{2n+2}+3na_{2n}a_{2k+3}-18a_{2k+4}a_{2m+2}-
\\
&&24a_{2k+3}a_{2m+2}-42ka_{2k+3}a_{2m+2}+30ka_{2k+4}a_{2m+2}+18ma_{2k+3}a_{2m+2}
\\
&&-18ma_{2k+4}a_{2m+2}+18a_{2k+2}a_{2m+2}-36a_{2k+3}a_{2m+2}+36ka_{2k+2}a_{2m+2}
\\
&&-72ka_{2k+3}a_{2m+2}-6na_{2k+3}a_{2n+1}+3na_{2k+3}a_{2n+2}+18k^{2}a_{2k+2}a_{2m+2}
\\
&&-36k^{2}a_{2k+3}a_{2m+2}+6a_{2m+2}\left( k+1\right) \times \\
&&\left(
3ka_{2k+4}-3ma_{2k+2}+6ma_{2k+3}-3ma_{2k+4}-a_{2k+2}+6a_{2k+3}\right) \}\end{aligned}$$
This expression reduces to the right hand side of (\[eqnOmega122mak+1Factor\]). To see this, one needs only to use the recursion relation in (\[recurrisionRelaaa\]) to reduce the indices to those appearing in (\[eqnOmega122mak+1Factor\]). We have checked that this is indeed true; the calculation is too messy to include here, but otherwise straightforward. Thus the right hand side of (\[eqnOmega122mak+1Factor\]) vanishes identically by induction for all values of $m$ and $k$. Likewise one can prove the remaining identities; the procedure is straightforward but would take a great many pages, and so we shall not include any more details here.
BRST invariance
===============
The full Ward-like identity now reads$$\left[ L_{m}^{x+\phi ,r}+\sum_{s=1}^{3}\sum_{n=0}^{\infty }m\widetilde{N}_{mn}^{rs}L_{-n}^{x+\phi ,s}\right] |V_{HS}^{x+\phi }>=0\text{, \ \ }m=1,2,... \label{eqnWard-like-Ide}$$We have seen that the coordinate and ghost anomalies cancel; thus the above equation contains no anomaly term. The $K_{m}$ invariance of the comma interaction three-vertex follows at once from the Ward-like identity. Summing over the string index $r$, we have$$\sum_{r=1}^{3}\left[ L_{m}^{x+\phi ,r}+\sum_{s=1}^{3}\sum_{n=0}^{\infty }m\widetilde{N}_{mn}^{rs}L_{-n}^{x+\phi ,s}\right] |V_{HS}^{x+\phi }>=0\text{,
\ \ }m=1,2,...$$Using the identity $\sum_{r=1}^{3}m\widetilde{N}_{mn}^{rs}=\left( -1\right)
^{m+1}\delta _{mn}$, which can be established by contour integration, and then renaming the dummy index $s$ as $r$, the above equation reduces to$$\sum_{r=1}^{3}\left[ L_{m}^{x+\phi ,r}-\left( -1\right) ^{m}L_{-m}^{x+\phi
,r}\right] |V_{HS}^{x+\phi }>=0\text{, \ \ }m=1,2,...$$The expression inside the square bracket in the above equation is by definition the Virasoro generator $K_{m}$. Thus $|V_{HS}^{x+\phi }>$ is invariant under the subgroup of conformal transformations, generated by the Virasoro generators$$K_{m}=\sum_{r=1}^{3}\left[ L_{m}^{x+\phi ,r}-\left( -1\right)
^{m}L_{-m}^{x+\phi ,r}\right]$$
To complete the proof of equivalence, we still need to show the $BRST$ ($Q$) invariance of the comma three-vertex. Unfortunately a direct proof that follows from equation (\[eqnWard-like-Ide\]) is quite cumbersome, due to the presence of the $\frac{1}{2}L_{m}^{\phi ,r}$ term in the definition of the $BRST$ charge. Thus we need to evaluate the action of the $BRST$ charge on the comma three-vertex directly. Recall that the total three-string $BRST$ charge is the sum of the $BRST$ charges corresponding to the individual strings; that is,$$Q=\sum_{r=1}^{3}Q^{r}$$where$$\begin{aligned}
Q^{r} &=&\sum_{m=1}^{\infty }\left[ c_{-m}^{r}\left( L_{m}^{x,r}+\frac{1}{2}L_{m}^{\phi ,r}\right) +\left( L_{-m}^{x,r}+\frac{1}{2}L_{-m}^{\phi
,r}\right) c_{m}^{r}\right] \notag \\
&&+\left( L_{0}^{x,r}+\frac{1}{2}L_{0}^{\phi ,r}-1\right) c_{0}^{r}\end{aligned}$$To evaluate the action of the $BRST$ charge on the full comma three vertex we use the $c$-overlaps satisfied by the half string ghost vertex [@Gross-Jevicki-II; @Abdu-BordesN-II] $$\left[ c_{m}^{r}-\sum_{s=1}^{3}\sum_{n=1}^{\infty }\widetilde{N}_{mn}^{rs}c_{-n}^{s}\right] |V_{HS}^{\phi }>=0 \label{eqnCoverlonHSV}$$With the help of (\[eqnCoverlonHSV\]), the proof of $BRST$ invariance follows along the same lines as references [@Gross-Jevicki-II; @Abdu-BordesN-II]. Thus when acting with the $BRST$ charge on the full half string vertex, the operator parts cancel, leaving$$\begin{aligned}
Q|V_{HS}^{x+\phi } &>&=\sum_{r=1}^{3}\sum_{m=1}^{\infty }c_{-m}^{r}\left[
\left( \kappa _{m}^{x,r}+\frac{1}{2}\kappa _{m}^{\phi ,r}\right) \right. -\frac{1}{2}\times \notag \\
&&\left. \left( m^{2}\widetilde{N}_{0m}^{rr}+\sum_{k=1}^{m-1}\left(
m+k\right) \left( m-k\right) \widetilde{N}_{m\text{ }m-k}^{rr}\right) \right]
|V_{HS}^{x+\phi }>\end{aligned}$$The expression inside the second parenthesis is easily computed and is found to be the anomaly of the fermionic ghost [@Gross-Jevicki-II; @Abdu-BordesN-II]. Thus the coefficient of $c_{-m}^{r}$ is$$\kappa _{m}^{x,r}+\frac{1}{2}\kappa _{m}^{\phi ,r}+\frac{1}{2}\kappa
_{m}^{c,r}=0$$where in obtaining the above result we used the fact that the coordinate anomaly is the negative of the ghost anomaly regardless of the ghost representation. This result is the final step in the proof of equivalence.
Sums of the first type
======================
Generalizations of the above sums$$O_{n=2k}^{u(q,p)}=\sum_{m=2l+1=1}^{\infty }\frac{u_{m}^{q/p}}{n+m}\text{ , }n\geq 0 \label{eqnsumoverodd1}$$$$E_{n=2k+1}^{u(q,p)}=\sum_{m=2l=0}^{\infty }\frac{u_{m}^{q/p}}{n+m}\text{ , }n>0 \label{eqnsumovereven1}$$where $u_{m}^{q/p}$ are the Taylor modes appearing in the expansion$$\left( \frac{1+z}{1-z}\right) ^{q/p}=\sum_{n=0}^{\infty }u_{n}^{q/p}z^{n}
\label{equationdefofu(q/p)}$$Mathematical induction leads to$$E_{-n=-(2k+1)}^{u(q,p)}=-\cos \left( \frac{\pi q}{p}\right)
E_{n=2k+1}^{u(q,p)}\text{, \ \ \ }n>0 \label{eqnSumOnegative n=odd}$$and$$O_{-n=-2k}^{u(q,p)}=-\cos \left( \frac{\pi q}{p}\right) O_{n=2k}^{u(q,p)}\text{, \ \ \ }n>0 \label{eqnSumOnegative n=even}$$ respectively. So far we have evaluated the sums defined in (\[eqnsumoverodd1\]) and (\[eqnsumovereven1\]) under the restriction that $n+m$ is odd; we now relax this restriction, which brings us to the sums of the second type.
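As a quick numerical cross-check (ours, not part of the original derivation), the Taylor modes $u_{m}^{q/p}$ can be generated from the differential equation $(1-z^{2})f'(z)=2(q/p)f(z)$ satisfied by the generating function, which yields the three-term recursion $(m+1)u_{m+1}=2(q/p)u_{m}+(m-1)u_{m-1}$. Since $f(z;a)\,f(z;-a)=1$, the sequences $u^{a}$ and $u^{-a}$ must be convolution inverses; the sketch below (the value $q/p=1/3$ is an arbitrary illustrative choice) verifies this:

```python
# Hedged numerical check of the Taylor modes u_m^{q/p} of ((1+z)/(1-z))^{q/p}.

def u_modes(a, nmax):
    """Taylor coefficients u_0 .. u_nmax of ((1+z)/(1-z))**a, from the
    recursion (m+1) u_{m+1} = 2a u_m + (m-1) u_{m-1}, u_0 = 1, u_1 = 2a."""
    u = [1.0, 2.0 * a]
    for m in range(1, nmax):
        u.append((2.0 * a * u[m] + (m - 1) * u[m - 1]) / (m + 1))
    return u[:nmax + 1]

a = 1.0 / 3.0          # q/p = 1/3 (illustrative)
N = 30
up = u_modes(a, N)     # u_m^{ q/p}
um = u_modes(-a, N)    # u_m^{-q/p}

# Convolution inverse property: sum_k u_k^{a} u_{n-k}^{-a} = delta_{n,0}
conv = [sum(up[k] * um[n - k] for k in range(n + 1)) for n in range(N + 1)]
```

The first few modes, $u_{1}^{a}=2a$ and $u_{2}^{a}=2a^{2}$, agree with a direct expansion of $(1+z)^{a}(1-z)^{-a}$.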
Sums of the second type
=======================
Sums of the second type are defined by$$S_{n=2k+1}^{u(q,p)}\equiv O_{n=2k+1}^{u(q,p)}=\sum_{m=2l+1=1}^{\infty }\frac{u_{m}^{q/p}}{n+m} \label{eqnSUMeveneven}$$$$S_{n=2k}^{u(q,p)}\equiv E_{n=2k}^{u(q,p)}=\sum_{m=2l=0}^{\infty }\frac{u_{m}^{q/p}}{n+m} \label{eqnSUModdodd}$$ These sums evaluate to$$\begin{aligned}
S_{n}^{u(q/p)} &=&\frac{p}{2q}\left[ \frac{q}{p}\left( \beta \left( 1-\frac{q}{p}\right) +\beta \left( 1+\frac{q}{p}\right) \right) \right] u_{n}^{q/p}
\notag \\
&&+\frac{p}{2q}\sum_{m=0}^{n-1}\left( -\right) ^{m}\frac{u_{m}^{q/p}u_{n-m-1}^{q/p}}{m+1} \label{eqn.Gen.SnU}\end{aligned}$$
Sums of the third type
======================
Sums of the third type are defined by$$\overset{\sim }{O}_{n=2k}^{u(q,p)}=\sum_{m=2l+1=1}^{\infty }\frac{u_{m}^{q/p}}{\left( n+m\right) ^{2}} \label{eqnOteldan=even}$$
$$\overset{\sim }{E}_{n=2k+1}^{u(q,p)}=\sum_{m=2l=0}^{\infty }\frac{u_{m}^{q/p}}{\left( n+m\right) ^{2}} \label{eqnEteldan=odd}$$
$$\begin{aligned}
\overset{\sim }{E}_{n=odd=1}^{u(p,q)} &=&-\frac{1}{2}\left( \frac{q}{p}\right) \frac{\pi }{\sin \left( \pi q/p\right) }\left\{ \left[ 2\psi \left(
\frac{q}{p}\right) +\frac{1}{\left( q/p\right) }-2\psi \left( 1\right)
-2+2\ln 2\right] \right. \notag \\
&&+\left. \cos \left( \pi q/p\right) \left[ \psi \left( \frac{q}{2p}+\frac{1}{2}\right) -\psi \left( \frac{q}{2p}\right) -\frac{1}{\left( q/p\right) }\right] \right\} \label{EqnE(n=odd)}\end{aligned}$$
$$\begin{aligned}
\overset{\sim }{S}_{n}^{u(q,p)} &=&\overset{\sim }{E}_{n}^{u(q,p)}\text{, \
\ \ \ }n=2k+1>0 \notag \\
\overset{\sim }{S}_{n}^{u(q,p)} &=&\overset{\sim }{O}_{n}^{u(q,p)}\text{, \
\ }n=2k\geqslant 0\text{ } \label{eqnSTelda}\end{aligned}$$
and$$\begin{aligned}
\overline{S}_{n}^{u(q,p)} &=&E_{n}^{u(q,p)}\text{, \ \ \ \ }n=2k+1>0 \notag
\\
\overline{S}_{n}^{u(q,p)} &=&O_{n}^{u(q,p)}\text{, \ \ }n=2k\geqslant 0
\label{eqnSbar}\end{aligned}$$$$\begin{aligned}
{}\widetilde{S}_{n}^{u(q/p)} &=&\left[ \widetilde{S}_{1}^{u(q/p)}-\frac{2q\pi }{2p\sin \left( \pi q/p\right) }\right] \frac{pu_{n}^{q/p}}{2q}-\frac{\pi }{2\sin \left( \pi q/p\right) }\sum_{k=1}^{n}\frac{\left( -\right) ^{k}}{k}u_{k}^{q/p}u_{n-k}^{q/p} \notag \\
&&-\frac{\pi }{2}\tan \left( \pi \frac{q}{2p}\right) \frac{p}{2q}\sum_{k=0}^{n-1}\frac{\left( -\right) ^{k}}{k+1}u_{k}^{q/p}u_{n-1-k}^{q/p}\text{ \ \ , }n>0 \label{eqnEXPValOfSteln}\end{aligned}$$This result holds for all integer values of $n\geq 1$.
$$\widetilde{S}_{-n}^{u(q,p)}=\cos \left( \pi \frac{q}{p}\right) \widetilde{S}_{n}^{u(q,p)}+\left[ 1+\cos \left( \pi \frac{q}{p}\right) \right] \overline{S}_{0}^{u(q,p)}S_{n}^{u(q,p)} \label{eqnSUM-S-nTel}$$
where$$\overline{S}_{0}^{u(q,p)}=\frac{1}{2}\pi \tan \left( \pi \frac{q}{2p}\right)$$
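The closed form for $\overline{S}_{0}^{u(q,p)}$ can be checked numerically by direct partial summation (our own check, not from the paper). For $q/p=1/2$ one has $\overline{S}_{0}=O_{0}=\sum_{m\ \mathrm{odd}}u_{m}^{1/2}/m$ and the right-hand side equals $\frac{\pi}{2}\tan(\pi/4)\cdot 1=\pi/2$; the truncation point below is an arbitrary choice balancing accuracy and runtime:

```python
import math

# Partial-sum check of Sbar_0 = (pi/2) tan(pi q / 2p) at q/p = 1/2.
a = 0.5                        # q/p (illustrative value)
u_prev, u_curr = 1.0, 2.0 * a  # u_0, u_1 of ((1+z)/(1-z))^a
s = u_curr / 1.0               # m = 1 term of the sum over odd m
M = 1_000_000                  # truncation (tail ~ M^{-1/2} for a = 1/2)
for m in range(1, M):
    # (m+1) u_{m+1} = 2a u_m + (m-1) u_{m-1}
    u_prev, u_curr = u_curr, (2.0 * a * u_curr + (m - 1) * u_prev) / (m + 1)
    if (m + 1) % 2 == 1:       # keep only odd m in the sum
        s += u_curr / (m + 1)

closed_form = 0.5 * math.pi * math.tan(math.pi * a / 2.0)  # = pi/2 here
```

The slow $m^{-3/2}$ decay of the summand for $a=1/2$ limits the accuracy of the brute-force sum to a few parts in $10^{4}$ at this truncation.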
[99]{} A. Sen and B. Zwiebach, MIT-CTP, hep-th/0105058
L. Rastelli, A. Sen and B. Zwiebach, CTP-MIT-3064, hep-th/0012251
D.J. Gross and W. Taylor, MIT-CTP-3130, hep-th/0105059
D.J. Gross and W. Taylor, MIT-CTP-3145, hep-th/0106036
A. Abdurrahman, F. Anton and J. Bordes, Nucl. Phys. B411 (1994) 694
A. Abdurrahman and J. Bordes, Nuovo Cimento B, 118 (2003) 641
T. Banks and M. Peskin, Nucl. Phys. B264 (1986) 513
E. Witten, Nucl. Phys. B268 (1986) 253
W. Siegel, Phys. Lett. 142B (1984) 157; H. Hata, K. Itoh, T. Kugo, H. Kunitomo and K. Ogawa, Phys. Lett. 172B (1986) 186; A. Neveu and P. West, Phys. Lett. 165B (1986) 63; D. Friedan, Phys. Lett. 162B (1985) 102; J. L. Gervais, l’Ecole Normale Superieure preprints LPTENS 85/35 (1986), LPTENS 86/1 (1986); A. Neveu and P. West, Phys. Lett. 168B (1986) 192; N. P. Chang, H.Y. Guo, Z. Qiu and K. Wu, City College preprint (1986); A.A. Tseytlin, Phys. Lett. 168B (1986) 63
W. Siegel and B. Zwiebach, Nucl. Phys. B263 (1986) 105
Z. Hlousek and A. Jevicki, Nucl. Phys. B288 (1987) 131
A. Abdurrahman, F. Anton and J. Bordes, Phys. Lett. B358 (1995) 259
A. Abdurrahman, F. Anton and J. Bordes, Nucl. Phys. B397 (1993) 260
D. J. Gross and A. Jevicki, Nucl. Phys. B283 (1987) 1
D. J. Gross and A. Jevicki, Nucl. Phys. B287 (1987) 225
M. Gassem, PhD Thesis, the American University, London (2008)
S. Samuel, CERN preprint CERN-TH-4498/86 (July 1986)
J. Bordes, A. Abdurrahman and F. Anton, Phys. Rev. D49 (1994) 2966
[^1]: ababdu@shi.edu
[^2]: The Division of Mathematics and Science, South Texas College, 3201 W. Pecan, McAllen, Texas 78501, E-mail: mgassem@southtexascollege.edu
[^3]: Note that the expression $L_{-m}^{\phi ,r}+\frac{3}{2}m\alpha _{-m}^{\phi
,r} $ is indeed quadratic in the creation-annihilation operators $\alpha
^{\phi }$ and $\alpha ^{\phi \dag }$ since the linear term $\frac{3}{2}m\alpha _{-m}^{\phi ,r}$ cancels against the linear term in $L_{-m}^{\phi ,r}$.
[^4]: See previous footnote.
[^5]: If we choose to eliminate $p_{0}^{\phi ,1}$ or $p_{0}^{\phi ,2}$ instead of $p_{0}^{\phi ,3}$ the conclusions remain the same.
---
abstract: |
We consider a huge quantum system that is subject to the charge superselection rule, which requires that any pure state must be an eigenstate of the total charge. We regard some parts of the system as “subsystems" S$_1$, S$_2$, $\cdots$, S$_M$, and the rest as an environment E. We assume that one does not measure anything of E, i.e., one is only interested in observables of the joint subsystem S $\equiv {\rm S}_1 + {\rm S}_2 + \cdots + {\rm S}_M$. We show that there exist states $| \Phi \rangle_{\rm tot}$ with the following properties: (i) The reduced density operator $
{\rm Tr}_{\rm E}
\left( | \Phi \rangle_{\rm tot} \ \null_{\rm tot} \langle \Phi | \right)
$ is completely equivalent to a vector state $| \varphi \rangle_{\rm S} \ \null_{\rm S} \langle \varphi |$ of S, for any gauge-invariant observable of S. (ii) $| \varphi \rangle_{\rm S}$ is a simple product of vector states of individual subsystems; $
| \varphi \rangle_{\rm S}
=
| C^{(1)} \rangle_1 | C^{(2)} \rangle_2 \cdots
$, where $
| C^{(k)} \rangle_k
$ is a vector state in S$_k$ which is [*not*]{} an eigenstate of the charge in S$_k$. Furthermore, one can associate to each subsystem S$_k$ the vector state $| C^{(k)} \rangle_k$ and observables which are [*not*]{} necessarily gauge invariant in each subsystem, and $| C^{(k)} \rangle_k$ is then a pure state. These results justify taking (a) superpositions of states with different charges, and (b) non-gauge-invariant operators, such as the order parameter of the breaking of the gauge symmetry, as observables, for subsystems.
address: ' Department of Basic Science, University of Tokyo, Komaba, Meguro-ku, Tokyo 153-8902, Japan '
author:
- 'Akira Shimizu[@shmz] and Takayuki Miyadera'
date: Received 23 February 2001
title: Charge superselection rule does not rule out pure states of subsystems to be coherent superpositions of states with different charges
---
In quantum theory, some superpositions of states are not permitted as pure states [@haag]. In particular, the charge superselection rule (CSSR) forbids coherent superpositions of states with different charges [@haag]. Namely, any pure state must be an eigenstate of the total charge $\hat N_{\rm tot}$. However, it is customary to take such superpositions when one discusses the breaking of a gauge symmetry. Superconductors and superfluids are typical examples. If the system size $V$ is infinite, this does not conflict with the CSSR because $\hat N_{\rm tot}$ becomes ill-defined as $V \to \infty$, and the CSSR becomes inapplicable. In real physical systems, however, phase transitions practically occur for finite ($V < + \infty$) systems as well. In particular, phase transitions have been observed in relatively small systems, including small superconductors [@super], Helium atoms in a micro bubble [@He4bubble], and laser-trapped atoms [@atom]. The meaning of the symmetry breaking in such systems has been a subject of active research [@andrews; @java; @barnett; @SMprl2000]. The purpose of this paper is to present a general discussion which justifies taking coherent superpositions of states with different charges for finite quantum systems subject to the CSSR, such as charged particles and massive bosons. Furthermore, we also justify taking non-gauge-invariant operators such as the order parameter of the breaking of the gauge symmetry, as observables of subsystems.
We consider a huge quantum system that is subject to the CSSR. We regard some parts of the system as “subsystems” S$_1$, S$_2$, $\cdots$, S$_M$, and the rest as the environment E. We assume the usual situation where (i) E is much larger than the joint subsystem S $\equiv {\rm S}_1 + {\rm S}_2 + \cdots + {\rm S}_M$, and (ii) one is not interested in (thus, one will not measure) degrees of freedom of E, i.e., one is only interested in S (or some parts of S). The Hilbert space ${\cal H}_{\rm tot}$ of the total system is the tensor product of the Hilbert spaces of S$_1$, S$_2$, $\cdots$, S$_M$ and E; $
{\cal H}_{\rm tot}
=
{\cal H}_{\rm S}
\otimes
{\cal H}_{\rm E},
$ where $
{\cal H}_{\rm S}
\equiv
{\cal H}_1 \otimes {\cal H}_2 \otimes \cdots \otimes {\cal H}_M.
$ The total charge (in some unit) $\hat N_{\rm tot}$ is the sum of the charges of S$_1$, S$_2$, $\cdots$, S$_M$ and E; $$\hat N_{\rm tot}
=
\sum_{k=1}^M \hat N_k + \hat N_{\rm E}.$$ Products of eigenfunctions $| N_1 n_1 \rangle_1$, $\cdots$, $| N_M n_M \rangle_M$, and $| N_{\rm E} \ell \rangle_{\rm E}$ of $\hat N_1$, $\cdots$, $\hat N_M$, and $\hat N_{\rm E}$, respectively, form complete basis sets of ${\cal H}_{\rm tot}$. Here, $N_k$ ($k = 1, 2, \cdots, M$) and $N_{\rm E}$ are eigenvalues of $\hat N_k$ and $\hat N_{\rm E}$, respectively, and $n_k$ and $\ell$ denote additional quantum numbers.
The CSSR requires that any pure state of the total system must be an eigenstate of $\hat N_{\rm tot}$, i.e., superposition is allowed only among states with a fixed value of the eigenvalue $N_{\rm tot}$ of $\hat N_{\rm tot}$. Consider the following state that satisfies this requirement: $$| \Phi \rangle_{\rm tot}
=
\sum_{N_1,n_1} \cdots \sum_{N_M,n_M}
\sum_{\ell}
C^{(1)}_{N_1 n_1} \cdots C^{(M)}_{N_M n_M}
C^{({\rm E})}_{N_{\rm S} \ell} \
| N_1 n_1 \rangle_1 \cdots | N_M n_M \rangle_M
| N_{\rm tot}- N_{\rm S}, \ell \rangle_{\rm E},
\label{Phi}$$ where $N_{\rm S} = \sum_k N_k$, and the superposition coefficients are normalized as $$\sum_{N,n} |C^{(k)}_{N n}|^2
=
\sum_{\ell} |C^{({\rm E})}_{N_{\rm S} \ell}|^2
= 1.
\label{norm_C}$$ For this state, the probability of finding $N_{\rm tot}-N_{\rm S}$ bosons in E takes almost the same values for all $N_{\rm S}$ such that $
|N_{\rm S} - \langle N_{\rm S} \rangle |
<
\langle \delta N_{\rm S}^2 \rangle^{1/2}
$ [@range]. This property seems natural for a huge environment. Since we assume that one will not measure degrees of freedom of E, we are interested in the reduced density operator $\hat \rho_{\rm S}$ of S, which is evaluated as $$\begin{aligned}
&&
\hat \rho_{\rm S} = {\rm Tr}_{\rm E}
\left( | \Phi \rangle_{\rm tot} \ \null_{\rm tot} \langle \Phi | \right)
\nonumber\\
&& \quad =
\sum_{N'_1,n'_1} \cdots \sum_{N'_M,n'_M}
\sum_{N_1,n_1} \cdots \sum_{N_M,n_M}
\delta_{N_1+\cdots+N_M, \ N'_1+\cdots+N'_M}
\
C^{(1)}_{N'_1 n'_1} \cdots C^{(M)}_{N'_M n'_M}
C^{(M)*}_{N_M n_M} \cdots C^{(1)*}_{N_1 n_1}
\nonumber\\
&& \qquad
\times \
| N'_1 n'_1 \rangle_1 \cdots | N'_M n'_M \rangle_M
\null_M \langle N_M n_M | \cdots \null_1 \langle N_1 n_1 |.
\label{rho}\end{aligned}$$ We can easily show that $ %\begin{equation}
(\hat \rho_{\rm S})^2 \neq \hat \rho_{\rm S}
$ unless $$\sum_{n}|C^{(k)}_{Nn}|^2 = \delta_{N,N^{(k)}_0}
\mbox{ for all $k$}$$ for some set of numbers $N^{(1)}_0, \cdots, N^{(M)}_0$. We exclude this trivial case (where the CSSR is satisfied in each subsystem) from our consideration. Then, $(\hat \rho_{\rm S})^2 \neq \hat \rho_{\rm S}$, and one may say that $\hat \rho_{\rm S}$ represents a mixed state. However, the relation $(\hat \rho_{\rm S})^2 \neq \hat \rho_{\rm S}$ only ensures that for any vector state $| \varphi \rangle_{\rm S}$ ($\in {\cal H}_{\rm S}$) there exists some [*operator*]{} $\hat \Xi_{\rm S}$ (on ${\cal H}_{\rm S}$) for which $${\rm Tr}_{\rm S} \left( \hat \rho_{\rm S} \hat \Xi_{\rm S} \right)
\neq
\null_{\rm S} \langle \varphi |
\hat \Xi_{\rm S}
| \varphi \rangle_{\rm S}.$$ Note that such a general operator $\hat \Xi_{\rm S}$ is not necessarily gauge-invariant. Hence, $\hat \Xi_{\rm S}$ might not be an [*observable*]{}, which must be gauge-invariant. In fact, we first show that $\hat \rho_{\rm S}$ is equivalent to a vector state $| \varphi \rangle_{\rm S}$ for all gauge-invariant (thus physical) observables on ${\cal H}_{\rm S}$. This statement might not sound surprising because a vector state is not necessarily a pure state [@haag; @vs]. \[Here, we use the precise definition of pure and mixed states [@precise], rather than misleading definitions such as $\hat \rho^2 = \hat \rho$ and $\hat \rho^2 \neq \hat \rho$.\] In fact, the equivalence of $| \varphi \rangle_{\rm S}$ to $\hat \rho_{\rm S}$ means that the vector state $| \varphi \rangle_{\rm S}$ is a mixed state. In other words, $| \varphi \rangle_{\rm S}$ is a vector state in a [*reducible*]{} representation of the algebra of gauge-invariant observables [@vs].
Actually, we first show a stronger statement: $| \varphi \rangle_{\rm S}$ is a simple product of vector states of individual subsystems; $$\hat \rho_{\rm S}
\mbox{ is equivalent to }
| \varphi \rangle_{\rm S}
=
| C^{(1)} \rangle_1 | C^{(2)} \rangle_2 \cdots | C^{(M)} \rangle_M
\mbox{ for any gauge-invariant observables in
${\cal H}_{\rm S}$},
\label{equivalance}$$ where $| C^{(k)} \rangle_k$ is a coherent superposition of states with different charges; $$| C^{(k)} \rangle_k
=
\sum_{N,n}
C^{(k)}_{Nn}
| N n \rangle_k.$$ To see this, we recall that one will not measure degrees of freedom of E. This means that one measures only observables which take the following form; $$\hat A_{\rm S} \otimes \hat 1_{\rm E},$$ where $\hat A_{\rm S}$ is an operator on ${\cal H}_{\rm S}$, and $\hat 1_{\rm E}$ denotes the unity operator on ${\cal H}_{\rm E}$. Note that $\hat A_{\rm S} \otimes \hat 1_{\rm E}$ must be gauge-invariant because of the gauge invariance of the total system, hence $\hat A_{\rm S}$ must also be gauge-invariant. This requires that $N_{\rm S}$ ($= \sum_k N_k$) should be conserved by the operation of $\hat A_{\rm S}$. Hence, the matrix elements of $\hat A_{\rm S}$ should take the following form; $$\null_1 \langle N_1 n_1 | \cdots \null_M \langle N_M n_M |
\ \hat A_{\rm S} \
| N'_M n'_M \rangle_M \cdots | N'_1 n'_1 \rangle_1
=
\delta_{N_1+\cdots+N_M, N'_1+\cdots+N'_M}
A^{N_1 n_1 \cdots N_M n_M}_{N'_1 n'_1 \cdots N'_M n'_M}.
\label{A}$$ Hence, the expectation value of $\hat A_{\rm S}$ for $\hat \rho_{\rm S}$ is evaluated as $$\begin{aligned}
&&
\langle A_{\rm S} \rangle
=
{\rm Tr}_{\rm S} \left( \hat \rho_{\rm S} \hat A_{\rm S} \right)
\nonumber\\
&& \quad =
\sum_{N'_1,n'_1} \cdots \sum_{N'_M,n'_M}
\sum_{N_1,n_1} \cdots \sum_{N_M,n_M}
C^{(1)}_{N'_1 n'_1} \cdots C^{(M)}_{N'_M n'_M}
C^{(M)*}_{N_M n_M} \cdots C^{(1)*}_{N_1 n_1}
\delta_{N_1+\cdots+N_M, N'_1+\cdots+N'_M}
A^{N_1 n_1 \cdots N_M n_M}_{N'_1 n'_1 \cdots N'_M n'_M}
\nonumber\\
&& \quad =
\null_{\rm S} \langle \varphi |
\hat A_{\rm S}
| \varphi \rangle_{\rm S},\end{aligned}$$ and Eq. (\[equivalance\]) is proved. The point is that Eqs. (\[rho\]) and (\[A\]) contain the same Kronecker’s delta, $\delta_{N_1+\cdots+N_M, N'_1+\cdots+N'_M}$. Although this factor in Eq. (\[rho\]) makes $\hat \rho_{\rm S}$ different from $
| \varphi \rangle_{\rm S} \ \null_{\rm S}\langle \varphi |
$, the difference becomes totally irrelevant to $
\langle A_{\rm S} \rangle
$ because of the same factor in Eq. (\[A\]). (Recall that $
(\delta_{N_1+\cdots+N_M, N'_1+\cdots+N'_M})^2
=
\delta_{N_1+\cdots+N_M, N'_1+\cdots+N'_M}
$.) Moreover, we can easily show that $\hat \rho_{\rm S-S_M} \equiv {\rm Tr}_M [\hat \rho_{\rm S}]$ is also equivalent to a vector state, $
| C^{(1)} \rangle_1 | C^{(2)} \rangle_2 \cdots | C^{(M)} \rangle_{M-1}
$, where ${\rm Tr}_M$ denotes the trace operation over ${\cal H}_M$. This result is easily generalized: For states of the form of (\[Phi\]), its reduced density operator for any set of subsystems is completely equivalent to a vector state, which is a simple product of vector states of the individual subsystems, if one measures only gauge-invariant observables of the subsystems. Note that there is no ‘entanglement’ between any subsystems [@ent]. In general, the absence of entanglement among subsystems means the ‘separability’, i.e., it is possible to control quantum states of individual subsystems independently by local operations [@peres]. Hence, one can control the state $
| C^{(k)} \rangle_k
$ of each subsystem by local operations, without perturbing the other subsystems [@perturbed].
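The equivalence (\[equivalance\]) is easy to verify numerically. The sketch below (ours, not from the paper; the truncation, coefficients, and observables are arbitrary illustrative choices) builds a toy version of the state (\[Phi\]) with $M=2$ subsystems, one environment state per charge sector, and checks that ${\rm Tr}_{\rm S}(\hat \rho_{\rm S}\hat A_{\rm S})$ agrees with the product vector state for a gauge-invariant hopping observable, while the two disagree for a non-gauge-invariant operator:

```python
import numpy as np

# Toy version of Eq. (Phi): M = 2 subsystems, each a Fock space truncated
# at N = 0, 1, 2 (no extra quantum numbers n), one environment state per
# charge sector, total charge N_tot = 4.
d, Ntot = 3, 4
dimE = Ntot + 1                                   # environment labeled by N_E

# arbitrary normalized coefficients C^(k)_N (illustrative choices)
C1 = np.array([1.0, 1.0, 1.0], dtype=complex); C1 /= np.linalg.norm(C1)
C2 = np.array([1.0, 1.0j, 0.5], dtype=complex); C2 /= np.linalg.norm(C2)

# |Phi>_tot : the environment carries the balancing charge N_tot - N_1 - N_2
Phi = np.zeros((d, d, dimE), dtype=complex)
for N1 in range(d):
    for N2 in range(d):
        Phi[N1, N2, Ntot - N1 - N2] = C1[N1] * C2[N2]

# reduced density operator rho_S = Tr_E |Phi><Phi|
Psi = Phi.reshape(d * d, dimE)
rho = Psi @ Psi.conj().T

b = np.diag(np.sqrt(np.arange(1, d)), 1)             # truncated annihilation op
A = np.kron(b.conj().T, b) + np.kron(b, b.conj().T)  # conserves N_1 + N_2
X = np.kron(b + b.conj().T, np.eye(d))               # not gauge invariant

phi = np.kron(C1, C2)                                # product vector state

exp_rho_A = np.trace(rho @ A)
exp_phi_A = phi.conj() @ A @ phi
exp_rho_X = np.trace(rho @ X)
exp_phi_X = phi.conj() @ X @ phi
mixed = not np.allclose(rho @ rho, rho)              # rho_S^2 != rho_S
```

The Kronecker delta of Eq. (\[rho\]) makes `rho` block diagonal in $N_{\rm S}$, so `rho` is not idempotent, yet its expectation values coincide with those of the simple product `phi` on every charge-conserving observable.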
For example, for interacting many bosons, which are confined in a large but [*finite*]{} box, one may take $
| C^{(k)} \rangle_k
$ to be the ‘coherent state of interacting bosons’ (CSIB), which is defined by [@SMprl2000; @SI; @complicated] $$| \alpha_k, {\rm G} \rangle_k
=
e^{- |\alpha_k|^2/2}
\sum_{N=0}^{\infty}
\frac{\alpha_k^N}{\sqrt{N!}}
|N, {\rm G} \rangle_k,
\label{CSIBk}$$ where $\alpha_k = |\alpha_k| e^{i \phi_k}$ is a complex amplitude, and $|N, {\rm G} \rangle_k$ denotes the ground state that has [*exactly*]{} $N$ bosons, which we call the ‘number state of interacting bosons’ (NSIB) [@SMprl2000; @SI]. One may also take $
| C^{(l)} \rangle_l
$ as $| \alpha_l, {\rm G} \rangle_l$, where $\alpha_l = |\alpha_l| e^{i \phi_l}$. Then, $\hat \rho_{\rm S}$ is equivalent to $$| \varphi \rangle_{\rm S}
=
\cdots
| \alpha_k, {\rm G} \rangle_k
\cdots
| \alpha_l, {\rm G} \rangle_l
\cdots
\label{CSIBkl}$$ This state has an almost definite value of the relative phase $\phi_{kl}$ between two condensates in S$_k$ and S$_l$. To see this, we may define the operator (acting on ${\cal H}_k \otimes {\cal H}_l$) corresponding to $e^{i \phi_{kl}}$ by $$
\widehat{e^{i \phi_{kl}}}
\equiv
\widehat{e^{i \phi_k}}
\
(\widehat{e^{i \phi_l}})^\dagger.
\label{ephikl}$$ Here, $\widehat{e^{i \phi_k}}$, acting on ${\cal H}_k$, is not an exponential of some phase operator $\hat \phi_k$, but is defined by $$\widehat{e^{i \phi_k}}
\equiv
(\hat b_{0k}^\dagger \hat b_{0k} + 1)^{-1/2} b_{0k},$$ where $\hat b_{0k}$ denotes the operator $\hat b_0$, which is defined in Ref. [@SI] as a nonlinear function of free operators, for S$_k$. This operator is a linear combination of the cosine and sine operators of Ref. [@SI]. In the same way, $\widehat{e^{i \phi_l}}$, acting on ${\cal H}_l$, is defined using $\hat b_{0l}$, which is $\hat b_0$ for S$_l$. By similar calculations as in Ref. [@SI], we can show that $\widehat{e^{i \phi_k}}$, $(\widehat{e^{i \phi_l}})^\dagger$, and $\widehat{e^{i \phi_{kl}}}$ have almost definite values, and we can regard $\phi_{kl} \simeq \phi_k - \phi_l$. Hence, the state (\[CSIBkl\]) has the almost definite value of the relative phase. It should not be confused with states of the following form, which have been frequently discussed in the literature [@ex-ent]; $$|\varphi_{\rm ent} \rangle_{\rm S}
=
\sum_N C_N \sum_{N_k}
\cdots e^{i N_k \phi_k} R_{N_k} |N_k, {\rm G} \rangle_k
\cdots
e^{i (N-N_k) \phi_l} R_{N-N_k} |N-N_k, {\rm G} \rangle_l \cdots,
\label{ent}$$ where $C_N$ is a complex coefficient and $R$’s are real ones. This state also has an almost definite relative phase if $R$’s are appropriately taken. However, since S$_k$ and S$_l$ are strongly entangled in this state [@ent], one cannot control the state of S$_k$ by local operations on S$_k$, without perturbing the state of S$_l$. In contrast, this is possible for the simple product state $|\varphi \rangle_{\rm S}$. Moreover, we now show that $|\varphi \rangle_{\rm S}$ allows us to treat observables of individual subsystems separately. As exemplified by $\widehat{e^{i \phi_{kl}}}$ above, $\hat A_{\rm S}$ is generally a sum of products of operators of subsystems [@hc]; $$\hat A_{\rm S}
=
\sum
\hat A_{k} \hat A_{l} \cdots,$$ where $\hat A_{k}$ ($\hat A_{l}$) is an operator on ${\cal H}_k$ (${\cal H}_l$), and the sum is over combinations of $\hat A_{k} \hat A_{l} \cdots$. For the product state $|\varphi \rangle_{\rm S}$, the expectation value of $\hat A_{\rm S}$ is simply evaluated from the expectation values of individual subsystems as $${}_{\rm S} \langle \varphi | \hat A_{\rm S} |\varphi \rangle_{\rm S}
=
\sum
{}_k \langle C^{(k)} | \hat A_k |C^{(k)} \rangle_k
\
{}_l \langle C^{(l)} | \hat A_l |C^{(l)} \rangle_l
\cdots.
\label{exp-AS}$$ From Eqs. (\[equivalance\]) and (\[exp-AS\]), [*we can consider each subsystem separately*]{} from the other subsystems. Namely, for subsystem S$_k$, we can consider that its state is the vector state $| C^{(k)} \rangle_k$, and that $\hat A_{k}$ is one of its observables. When the expectation value of $\hat A_{\rm S}$ for the set of subsystems is necessary, it can be evaluated from the expectation values of individual subsystems, as Eq. (\[exp-AS\]). Note that the gauge invariance of $\hat A_{\rm S}$ does [*not*]{} require the gauge invariance of $\hat A_{k}$ of each subsystem: It rather requires that only gauge-invariant [*combinations*]{} of $\hat A_{k} \hat A_{l} \cdots$ should appear in the sum. \[Namely, each $N_k$ can vary by the operation of $\hat A_{k}$, whereas $N_{\rm S}$ ($= \sum_k N_k$) is conserved by the operation of $\hat A_{k} \hat A_{l} \cdots$.\] For example, $\widehat{e^{i \phi_{kl}}}$ is gauge invariant, whereas neither $\widehat{e^{i \phi_{k}}}$ nor $\widehat{e^{i \phi_{l}}}$ is. Hence, by considering $\hat A_{k}$ as an observable of S$_k$, we can include non-gauge invariant operators into the set of observables in ${\cal H}_k$, with the restriction that among various combinations of $\hat A_{k} \hat A_{l} \cdots$ we must take only gauge-invariant ones as physical combinations: Results for other combinations should be discarded. This formulation gives correct results for all gauge-invariant (thus physical) combinations. The vector state $| C^{(k)} \rangle_k$ then becomes a pure state of an irreducible representation of the algebra of observables of S$_k$, because non-gauge invariant observables are now included in the algebra [@vs]. We have thus arrived at [*non-gauge invariant observables and a pure state which is a superposition of states with different charges, for each subsystem*]{}.
The restriction that we must take only gauge-invariant ones among various combinations of $\hat A_{k} \hat A_{l} \cdots$ has the following physical meaning: If $\hat A_{k}$ is a non-gauge invariant observable of S$_k$, then its value can only be defined relative to some reference observable $\hat A_{l}$ of some reference system S$_l$. When $| \varphi \rangle_{\rm S}$ takes the form of Eq. (\[CSIBkl\]), for example, we can consider that S$_k$ is in a pure state $| \alpha_k, {\rm G} \rangle_k$, and that the phase factor $\widehat{e^{i \phi_{k}}}$ is one of the observables on ${\cal H}_k$. However, like the classical phase factor, the quantum phase factor $\widehat{e^{i \phi_{k}}}$ can only be defined relative to some reference (such as $(\widehat{e^{i \phi_{l}}})^\dagger$ of S$_l$). In contrast to the position observable, which also requires a reference system but which can be any system composed of any material, the reference system of the phase factor should contain particles of the same kind as the particles in S$_k$. Hence, the system of interest (S$_k$) and the reference system (S$_l$) should be subsystems of a larger system of the same kind of particles. When the larger system contains a huge system(s) in which we are not interested, we may call it an environment E, and the present model is applicable. When one is only interested in S$_k$, the reference system S$_l$ may be considered as a part of the external systems, which include an apparatus with which one measures $\phi_{kl}$. In this case, the reference system S$_l$ can be regarded as a part of the measuring apparatus. (In the case of measurement of the position, for example, a material which defines the origin of the position coordinate can be considered as a part of the measuring apparatus of the position.) Although $\widehat{e^{i \phi_{k}}}$ is not gauge invariant, results of measurements are gauge invariant because $\widehat{e^{i \phi_k}} \ (\widehat{e^{i \phi_l}})^\dagger$ is gauge invariant.
In general terms, although results of any physical [*measurements*]{} must be gauge invariant, it does not necessarily mean that any [*observables*]{} of a subsystem must be gauge-invariant, because the gauge invariance of $\hat A_{k} \hat A_{l}$ (which ensures the gauge invariance of results of measurements) does not necessarily mean the gauge invariance of $\hat A_{k}$.
Finally, we present some significant applications of the present results. First, our results give a natural interpretation of the order parameter $\hat O$ of a finite system, which can exhibit the breaking of the gauge symmetry in the infinite-volume limit. Although one might suspect that $\hat O$ must be excluded from observables since it is not gauge invariant, our results allow one to include it among observables as one of $\hat A_{k}$’s. Second, one might also suspect that the CSSR would forbid a pure state which has a finite expectation value of $\hat O$ because its expectation value vanishes for any eigenstate of $\hat N$. However, our results show that such a pure state is allowed as a state of a subsystem, $
| C^{(k)} \rangle_k
$, which is not entangled with states of the other subsystems. The absence of entanglement allows the preparation of such a pure state of a subsystem without perturbing the other subsystems (perturbing the environment only). For interacting many bosons, for example, the order parameter of the condensation is usually taken as the field operator $\hat \psi$ of bosons. The present work justifies taking such a non-gauge-invariant operator as an observable of a subsystem. The expectation value of $\hat \psi$ is finite only for superpositions of states with different numbers of bosons. The present work justifies taking such a superposition as a pure state of a subsystem. More concretely, the expectation value of $\hat \psi$ is finite for the CSIB, $\langle \alpha, {\rm G}| \hat \psi |\alpha, {\rm G} \rangle ={\cal O}(1)$, whereas it vanishes for the NSIB, $\langle N, {\rm G}| \hat \psi |N, {\rm G} \rangle=0$ [@SMprl2000; @SI]. Although both the NSIB and CSIB have the off-diagonal long-range order [@SMprl2000; @SI], the gauge symmetry is broken, in the sense that $\langle \hat \psi \rangle ={\cal O}(1)$, only for the CSIB [@ferro]. Since the state vectors of the NSIB and CSIB are quite different, they have different properties. The most striking difference is that the NSIB decoheres much faster than the CSIB when bosons have a finite probability of leakage into a huge environment, whose boson density is zero [@SMprl2000]. Moreover, it was shown that the CSIB has the ‘cluster property,’ which ensures that fluctuations of any intensive variables are negligible, in consistency with thermodynamics [@SMcluster]. Although the CSIB may appear to violate the CSSR, the present work justifies taking it as a pure state of a subsystem, which, unlike the NSIB, is robust, symmetry breaking, and consistent with thermodynamics. In particular, generalizing Eq. (\[CSIBkl\]), one can take CSIB’s for all subsystems.
In this case, the joint subsystem S is in a pure state in which each subsystem is in a CSIB. Namely, the state vector of S behaves locally as a CSIB, which is robust, symmetry breaking, and consistent with thermodynamics. Moreover, unlike Eq. (\[ent\]), one can control states of individual subsystems independently by local operations. These justify macroscopic theories such as the Ginzburg-Landau theory and the Gross-Pitaevskii theory, which assume, sometimes implicitly, that the order parameters can be defined locally, that their fluctuations are negligible, that the state vectors are robust against weak perturbation from environments, and that local operations do not cause global changes. We consider that such a locally-CSIB state should be (a good approximation to) a quantum state of real physical systems at low temperature, except for extreme cases such as bosons in a perfectly-closed box at an ultra-low temperature. Finally, we remark that we have not assumed that the particles in the environment are in (or not in) a condensed phase.
We thank H. Tasaki and M. Ueda for discussions.
Electronic address: shmz@ASone.c.u-tokyo.ac.jp
See, e.g., R. Haag, [*Local Quantum Physics*]{} (Springer, Berlin, 1992).
See, e.g., M. Tinkham, [*Superconductivity*]{} 2nd ed. (McGraw-Hill, New York, 1996).
E. G. Syskakis, F. Pobell and H. Ullmaier, Phys. Rev. Lett. [**55**]{} (1985) 2964.
M. H. Anderson [*et al.*]{}, Science [**269**]{} (1995) 198; C. C. Bradley [*et al.*]{}, Phys. Rev. Lett. [**75**]{} (1995) 1687; K. B. Davis [*et al.*]{}, Phys. Rev. Lett. [**75**]{} (1995) 3969.
M. R. Andrews [*et al.*]{}, Science [**275**]{} (1997) 637.
J. Javanainen and S. M. Yoo, Phys. Rev. Lett. [**76**]{} (1996) 161.
S. M. Barnett, K. Burnett and J. A. Vaccaro, J. Res. Natl. Inst. Stand. Technol. [**101**]{}, 593 (1996).
A. Shimizu and T. Miyadera, Phys. Rev. Lett. [**85**]{}, 688 (2000).
We assume that the product $\prod_k C^{(k)}_{N_k n_k}$ is finite only when $N_{\rm S} \ll N_{\rm tot}$. Since E is assumed to be much larger than S, states with low energies would satisfy this condition.
A vector state is a state represented by a vector (more precisely, a ray) in a Hilbert space, which is not necessarily an irreducible representation: Since the Hilbert space consists of many charge sectors, it becomes an irreducible representation of the algebra of observables only when the observables include non-gauge invariant ones (such as a field operator), which connect different sectors [@haag].
Let us denote a quantum state symbolically by $\omega$ and the expectation value of an observable $A$ by $\omega(A)$. A state $\omega$ is called [*mixed*]{} iff there exist states $\omega_1$ and $\omega_2$ ($\neq \omega_1$), and a positive number $\lambda$ ($0<\lambda<1$), such that $
\omega(A) = \lambda \omega_1(A) + (1-\lambda) \omega_2(A)
$ for any [*local*]{} observable $A$. Otherwise, $\omega$ is called a pure state. See, e.g., Ref. [@haag].
Here, the entanglement is defined by the increase of the von Neumann entropy for the reduced density operator which is obtained by tracing over a part of the subsystems.
See, e.g., A. Peres, Phys. Scripta [**T76**]{}, 52 (1998) (quant-ph/9707026).
On the other hand, the environment E might be perturbed by the control of a subsystem because $| \Phi \rangle_{\rm tot}$ is not a simple product of $\hat \rho_{\rm S}$ and a state vector of E. However, we do not measure effects of such perturbations on E because we assume that we do not measure anything of E.
A. Shimizu and J. Inoue, Phys. Rev. A [**60**]{}, 3204 (1999).
Although this relation is the same as the relation between the corresponding states of free bosons, $| \alpha, {\rm G} \rangle$ is a complicated wave function because $|N, {\rm G} \rangle$ is complicated [@SMprl2000].
For example, the state (\[ent\]) appears for two condensates, each of which initially has a definite number of bosons, when an interference pattern is developed and observed. See, e.g., J. I. Cirac et al., Phys. Rev. [**A54**]{}, R3714 (1996).
If necessary, one can make $\hat A_{\rm S}$ self-adjoint by adding its conjugate operator.
This is a generic property of quantum phase transitions. For example, if we treat a ferromagnetic Ising spin system as a quantum spin system, then two ferromagnetic states, $|\uparrow \uparrow \cdots \rangle$ and $|\downarrow \downarrow \cdots \rangle$, correspond to CSIB’s, whereas their superpositions $(|\uparrow \uparrow \cdots \rangle \pm
|\downarrow \downarrow \cdots \rangle)/\sqrt{2}$ correspond to NSIB’s: Although all these states have a long-range order, the up-down symmetry is broken only for the former states, in the sense that $\langle M_z \rangle \neq 0$.
A. Shimizu and T. Miyadera, cond-mat/0009258.
|
---
abstract: 'We present general arguments, based on medium-induced radiative energy loss, which reproduce the non-gaussian shapes of away-side di-jet azimuthal correlations found in nucleus-nucleus collisions at RHIC. A rather simple generalization of the Sudakov form factors to opaque media allowing an effective description of the experimental data is proposed.'
author:
- 'Antonio D. Polosa$^{(a)}$ and Carlos A. Salgado$^{(b)}$[^1]'
title: Jet Shapes in Opaque Media
---
[*Jet quenching*]{}, the strong modification of the spectrum of particles produced at high transverse momenta in nucleus-nucleus collisions, is a solid experimental result of the RHIC program [@Adams:2005dq]. At present its main observable is the strong suppression of the inclusive particle yield in the whole available range of $p_\perp\lesssim$ 20 GeV/c [@Adams:2005dq][@Shimomura:2005en]. This suppression can be understood, in general terms, as due to the energy loss of highly energetic partons traversing the medium created in the collision. To further unravel the dynamical mechanism underlying this effect, the questions of where and how the energy “is lost" are of special relevance. The most promising experimental probes to investigate these issues are clearly related to the modification of the jet structures [@Salgado:2003rv]. In the most successful approach to explain jet quenching, the degradation of the energy of the parent parton produced in an elementary hard process is due to the medium-induced radiation of soft gluons [@Baier:1996sk]-[@Salgado:2003gb], and the energy transfer to the medium (recoil) is neglected. As usual, the medium-modified parton-shower will eventually convert into a hadron jet. In the opposite limit, one could assume that a large fraction of the original parton energy is transferred to the medium, with a fast local equilibration, and diffused through sound and/or dispersive modes. The latter possibility has been advocated [@conical] as the origin of the striking non-gaussian shape of the azimuthal distributions in the opposite direction to the trigger particle [@Adler:2005ee]. In this paper we show that the same perturbative mechanism able to describe the inclusive suppression data, namely, the so-called radiative energy loss, can account for the experimentally observed two-peak shapes in the azimuthal correlations.
The central point of our argument is the need for a more exclusive treatment of the distributions to describe experimental data with restrictive kinematical constraints. To this end, we supplement the medium-modified jet formation formalism with the Sudakov suppression form factor, following the well-known in-vacuum approach. Furthermore, we assume that the final hadronic distributions follow the parton-level ones (parton-hadron duality).
[*The medium-induced gluon radiation*]{}. Let us suppose that a very energetic parton propagating through a medium of length $L$ emits a gluon with energy $\omega$ and transverse momentum $k_\perp$ with respect to the fast parton. The formation time of the gluon is $t_{\rm form}\sim \frac{2\omega}{k_\perp^2}$. Its typical transverse momentum is $$k_\perp^2\sim \frac{\langle q_\perp^2\rangle_{\rm med}}{\lambda}\, t_{\rm form}\sim
\sqrt{2\omega\hat q}.
\label{eqkt2}$$ $\hat q$ is the transport coefficient [@Baier:1996sk] which characterizes the medium-induced transverse momentum squared $\langle q_\perp^2\rangle_{\rm
med}$ transferred to the projectile per unit path length $\lambda$. In this picture the gluon acquires transverse momentum because of the Brownian motion in the transverse plane due to multiple soft scatterings during $t_{\rm form}$. The typical emission angle $$\sin\theta\equiv \frac{k_\perp}{\omega}\sim \left(\frac{2\hat q}{\omega^3}\right)^{1/4}
\label{eq:emangle}$$ defines a minimum emission energy $\hat\omega\sim (2\hat q)^{1/3}$ below which the radiation is suppressed by formation time effects [@Salgado:2003gb]. The latter is a crucial observation for the discussion to follow. Notice that for energies smaller than $\hat\omega$ the angular distribution of the medium-induced emitted gluons peaks at large values. In [@Eskola:2004cr] a fit to the inclusive high-$p_\perp$ particle suppression gives the values $\hat q\sim 5...15$ GeV$^2$/fm for the most central AuAu collisions at RHIC, i.e., $\hat\omega\sim$ 3 GeV \[we have taken $\hat\omega\sim 2(2\hat q)^{1/3}$ as indicated by numerical results\]. This situation can be computed from the spectrum of the medium-induced gluon radiation in the limit $k_\perp^2< \sqrt{2\omega\hat q}$, $\omega\ll\omega_c\equiv\frac{1}{2}\hat q L^2$. This gives in the multiple soft scattering approximation [@Wiedemann:2000za; @Salgado:2003gb] $$\frac{dI^{\rm med}}{d\omega\,dk^2_\perp}\simeq\frac{\alpha_s C_R}{16\pi}\,L\,\frac{1}{\omega^2}.
\label{eq:cohspec}$$ Compared to the corresponding spectrum in the vacuum, the one in (\[eq:cohspec\]) is softer and the typical emission angles larger, producing a broadening of the jet signals. [*The parton shower evolution*]{}. Equation (\[eq:cohspec\]) gives the inclusive spectrum of gluons emitted by a high-energy parton traversing a medium. For practical applications, however, more exclusive distributions, giving the probability of one, two, ... emissions are needed. How to construct such probabilities, using Sudakov form factors, is a well-known procedure in the vacuum. In the medium, a first attempt to deal with the strong trigger bias effects in inclusive particle production was to use an independent gluon emission approximation with the corresponding Poissonian probabilities [@Baier:2001yt]. Here we propose to improve this assumption by including the virtuality evolution through medium-modified Sudakov form factors. It is encouraging to notice that the original formulation is recovered once the virtuality evolution is neglected.
Since we are interested in angular distributions the parton shower description proposed in [@Marchesini:1983bm] is particularly convenient. Let us introduce the evolution variable $$\xi=\frac{q_1\cdot q_2}{\omega_1\omega_2}$$ to define the branching of a particle (gluon in general) with virtuality $q$ and energy $\omega$ in two particles of virtuality $q_1,\ q_2<q$ and energies $\omega_1\equiv
z\omega$, $\omega_2\equiv (1-z)\omega$. If $\omega_1,\ \omega_2
\gg m_1,\ m_2$, as we will assume, $\xi\simeq 1-\cos\theta_{12}$, with $\theta_{12}=\theta_1+\theta_2$, where $\theta_1$ and $\theta_2$ are the angles formed by the daughter partons with the parent parton. Consequently $$\frac{dq^2}{q^2}=\frac{d\xi}{\xi},
\label{eq:jaco}$$ i.e., the virtuality evolution can be converted into an evolution in the variable $\xi$. The corresponding probability distribution for one branching is $$d{\cal P}(\xi,z)=\frac{d\xi}{\xi}\,dz\,\frac{\alpha_s}{2\pi}P(z)\,
\Delta(\xi_{\rm max},\xi)\theta(\xi_{\rm max}-\xi)\theta(\xi-\xi_{\rm min})
\label{eq:sudakxi}$$ where $P(z)$ is the splitting function. The Sudakov form factor controlling the evolution is [@bks]: $$\Delta(\xi,\omega)=\exp\left\{-\int_\xi^{\xi_{\rm max}}\frac{d\xi^\prime}{\xi^\prime}
\int_\epsilon^{1-\epsilon} dz\,\frac{\alpha_s}{2\pi}\,P(z)\right\}
\label{eq:sudakreg}$$ and $\epsilon=Q_0/\omega\sqrt{\xi}$ with $Q_0$ a cut-off. $\Delta(\xi,\omega)$ can be interpreted as the probability of no branching between the scales $\xi$ and $\xi_{\rm max}$.
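The no-branching interpretation of $\Delta$ underlies the standard sampling trick of parton-shower Monte Carlos: set $\Delta$ equal to a uniform random number and invert, obtaining the scale of the first branching. A minimal sketch, under the purely illustrative assumption that the $z$-integral of $(\alpha_s/2\pi)P(z)$ is a constant $c$ (so that $\Delta(\xi)=(\xi/\xi_{\rm max})^c$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Sudakov: take the z-integral of (alpha_s/2pi) P(z) to be a constant c,
# so Delta(xi) = exp(-c * ln(xi_max/xi)) = (xi/xi_max)**c.  c is illustrative.
c, xi_max = 0.3, 1.0

# Set Delta = u (uniform in [0,1)) and invert: this samples the scale of the
# *first* branching below xi_max with the correct distribution.
u = rng.random(200_000)
xi = xi_max * u ** (1.0 / c)

# Consistency check: the fraction of first-branching scales below xi0
# must equal Delta(xi0), the probability of no branching above xi0.
xi0 = 0.5 * xi_max
frac = np.mean(xi < xi0)
print(frac, (xi0 / xi_max) ** c)  # the two numbers should agree
```

The same inversion works for any $\Delta$ that can be inverted in closed form; otherwise the veto algorithm is used.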
We propose to generalize the above formulation to the in-medium case by noticing that in our notation [@Wiedemann:2000za][@Salgado:2003gb] $$\frac{dI^{\rm vac}}{dz dk_\perp^2}=\frac{\alpha_s}{2\pi}\frac{1}{k_\perp^2}P(z).
\label{eq:specvac}$$ We define the corresponding equations (\[eq:sudakxi\]), (\[eq:sudakreg\]) for the medium by changing $dI^{\rm vac}/dzdk^2_\perp$ to $dI^{\rm med}/dzdk^2_\perp$ given by Eq. (\[eq:cohspec\]). We have clear motivations to make this [*Ansatz*]{}: (i) it provides a clear probabilistic interpretation with the right limit to the most used “quenching weights” [@Baier:2001yt][@Salgado:2003gb] when the virtuality is ignored; (ii) the evolution equations in the case of nuclear fragmentation functions in DIS can be written as usual DGLAP equations with medium-modified splitting functions $P(z)\to P(z)+\Delta P(z)$ [@Wang:2001if]. In our case Eq. (\[eq:cohspec\]) would correspond to $\Delta P(z)$ with the appropriate factors.
[*Results*]{}. In what follows, we present some simple analytical estimates following the approach just sketched. The general case of arbitrary $\omega_1$ and $\omega_2$ is difficult to study analytically. Let us study, instead, two extreme cases: (1) one of the particles takes most of the incoming energy, $\omega_1\gg\omega_2$ and, hence, $\theta_1\simeq 0$ – we call this the “J-configuration”; (2) the two particles share the available energy equally, $\omega_1\simeq\omega_2$ and $\theta_1\simeq\theta_2$ – the “Y-configuration”.
Let us first consider the case (1) where $\xi=1-\cos\theta_{12}\equiv 1-\cos\theta$ with $\sin\theta=k_\perp/\omega$. From (\[eq:cohspec\]) we get $$\frac{dI}{dzd\xi}=\frac{\alpha_s C_R}{8\pi}\, E\,L\,(1-\xi).
\label{eq:case1}$$ Therefore the corresponding Sudakov form factor reads: $$\Delta^{\rm med}(\xi_{\rm max},\xi)=\exp\left\{-\frac{\alpha_s C_R}{8\pi}
\,L\,E\int_\xi^{\xi_{\rm max}}
d\xi^\prime\,(1-\xi^\prime)\right\}$$ where the small contribution from the integration in $z$ has been neglected \[we have checked that the results do not vary as long as $\omega\gg Q_0$\]; $\alpha_s=1/3$ will be assumed in the following. Taking $\xi=1-\cos\theta$, $\xi_{\rm max}=1$ and inserting into the probability of splitting, Eq. (\[eq:sudakxi\]), we get: $$\frac{d{\cal P}(\theta,z)}{dz\,d\theta}=\frac{\alpha_s C_R}{8\pi}\,E\,L\,\sin\theta\cos\theta
\exp\left\{-E\,L\,\frac{\alpha_s C_R}{16\pi}\cos^2\theta\right\}.
\label{eq:splitms}$$ Eq. (\[eq:splitms\]) is the probability distribution for a parent parton to split just once, emitting a gluon at angle $\theta$ with fraction $z$ of the incoming momentum.
The variables of (\[eq:splitms\]) are polar coordinates with respect to the jet axis (the direction of the parent parton). Assuming that this parent parton was produced at $90^\circ$ in the center of mass frame of the collision, we can transform the coordinates of the emitted gluon to the laboratory polar coordinates $\Phi$ and $\theta_{\rm lab}$ with respect to the beam direction $\hat{z}$. Normally, one uses the pseudorapidity $\eta=-{\rm log\,tan\,}(\theta_{\rm lab}/2)$. The Jacobian is $$d\theta d\beta=\frac{d\eta d\Phi}{{\rm cosh}\eta\sqrt{{\rm cosh}^2\eta-\cos^2\Phi}}$$ where $\beta$ was integrated in the previous expressions to give $2\pi$ in the spectra. Taking, for simplicity, $\eta=0$, in the most favorable detection region, the answer is simply $$\frac{d{\cal P}(\Phi,z)}{dz\,d\Phi}\Bigg\vert_{\eta=0}=
\frac{\alpha_s C_R}{16\pi^2}\,E\,L\,\cos\Phi
\exp\left\{-E\,L\,\frac{\alpha_s C_R}{16\pi}\cos^2\Phi\right\}
\label{eq:splitlab}$$ giving the probability of one splitting as a function of $\Phi$. Thus we reach our objective: the possibility of describing non-trivial angular dependences, as shown in Eqs. (\[eq:splitms\]) and (\[eq:splitlab\]), with a perturbative mechanism. The distribution found has two maxima whose positions are determined by: $$\Phi_{\rm max}=\pm {\rm arccos}\sqrt{\frac{8\pi}{E\,L\,\alpha_s\, C_R}}
\label{eq:phimax}$$
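The position of the maxima can be evaluated numerically. The short sketch below is our own illustration, under stated assumptions: $\alpha_s=1/3$ as in the text, $C_R=C_F=4/3$ for a quark projectile (the text does not fix $C_R$), and $\hbar c\simeq 0.1973$ GeV fm to render the product $E\,L$ dimensionless:

```python
import numpy as np

hbar_c = 0.1973                         # GeV * fm, makes E*L dimensionless
alpha_s = 1.0 / 3.0                     # value assumed in the text
C_R = 4.0 / 3.0                         # C_F for a quark jet (our assumption)
E = 7.0                                 # jet energy in GeV, as in the text

def phi_max(L_fm):
    """Position of the away-side maxima (the arccos formula above), radians."""
    EL = E * L_fm / hbar_c              # dimensionless E*L
    return np.arccos(np.sqrt(8.0 * np.pi / (EL * alpha_s * C_R)))

for L in (2.0, 4.0, 6.0):               # in-medium path lengths in fm
    print(L, phi_max(L))
```

With these inputs the maxima move out to larger azimuthal angles as $L$ grows, the same qualitative trend as the centrality dependence discussed below.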
The angular shape found in (\[eq:splitlab\]) is very similar to the one found experimentally. We do not intend to perform a detailed calculation of the experimental situation. We just improve our calculation by introducing a simple model to take into account the additional smearing of the jet shape introduced by the triggering conditions. Setting $\eta=0$ for the trigger particle, we take into account (i) the uncertainty due to the boosted center of mass frame of the partonic collision by integrating the probability (\[eq:splitlab\]) in $2\Delta\eta$ with $\Delta\eta=1$ [^2]; (ii) an additional uncertainty in azimuthal angle $\Delta\Phi$ given by a gaussian with $\sigma$= 0.4 [@Adler:2006sc]. Specifically, we take $$\frac{dP}{d\Delta\Phi dz}=\frac{1}{N}\int_{-\Delta\eta}^{\Delta\eta}d\eta\int
d\Phi'\frac{d{\cal P}}{d\Phi' dz d\eta}{\rm e}^{-\frac{(\Delta\Phi-\Phi')^2}{2\sigma^2}}$$ $N=2\Delta\eta\sqrt{2\pi\sigma^2}$ being a normalization factor. The results are plotted in Fig. \[fig:dpar1\] for three different medium lengths and $E_{\rm jet}=7$ GeV \[the quoted value $\hat q=10$ GeV$^2$/fm ensures that the results hold for gluon energies $\omega<\hat\omega\simeq 3$ GeV\]. In order to compare with the centrality dependence of the position of the maxima measured experimentally [@Grau:2005sm] we simply take $L=N_{\rm part}^{1/3}$. Although this geometric procedure is too simplistic, it gives us a feel for the description of the data plotted in Fig. \[fig:dpar1\]. We must also point out that the centrality dependence of the transport coefficient is not taken into account in these figures. The reduction of $\hat q$ with centrality ($\hat{q}\sim dN/dy\sim N_{\rm part}$) makes the radiation more collinear when $\hat q^{1/3}<\omega$, i.e., the radiation will be more collinear than plotted for decreasing centrality at fixed $\omega\simeq p_\perp^{\rm assoc}$.
Let us now consider the case of the Y-configuration, where $\omega_1\simeq\omega_2$ and the two angles are similar, $\theta_1\simeq\theta_2\equiv\theta$. Now the variable $\xi=1-\cos\theta_{12}=1-\cos 2\theta$ and $k_\perp^2/\omega^2=\sin^2\theta=\xi/2$. Changing variables we get $$\frac{dI}{dzd\xi}=\frac{\alpha_s C_R}{32\pi}\,E\,L .$$ Repeating the same procedure followed before $$\frac{d{\cal P}(\Phi,z)}{dzd\Phi}=\frac{C_R\alpha_s}{64\pi^2}\,E\,L\,\exp\left\{-\frac{C_R\alpha_s}{32\pi}\,E\,L\cos\Phi\right\}.$$ In this case, the maxima are outside the borders of the physical phase space, but a minimum at $\Phi=0$ still occurs. Nevertheless, the smeared distribution does present maxima. The corresponding curves and positions of the maxima are plotted in Figs. \[fig:ndist\] and \[fig:dpar1\].
[*The picture*]{} which emerges from our analysis is the following: the splitting probability of a highly energetic parton produced inside the medium by a hard process presents well-defined maxima in the laboratory azimuthal angle when the emitted gluons have energies $\omega<\hat\omega$, with $\hat\omega\sim $ 3 GeV for the most central collisions at RHIC. This is a reflection of the jet broadening predicted long ago as a consequence of the medium-induced gluon radiation [@Baier:1996sk]-[@Salgado:2003gb]. In the case that the experimental triggering conditions are such that only a small number of splittings are possible (by restrictive kinematical constraints, e.g. $p_\perp^{\rm assoc}\simeq p_\perp^{\rm trigg}$), these structures should be observed. Our estimate, taking into account the parton-hadron duality, is then $p_\perp^{\rm assoc}\lesssim \hat\omega\simeq$ 3 GeV. When, on the contrary, the allowed number of splittings is larger, we expect this angular structure to disappear, filling the dip at the jet axis. The reason is that the inclusive spectrum (which does not present a two-peak structure) should be recovered in these conditions. This could explain the absence of a two-peak structure when the kinematical cut on $p_\perp^{\rm assoc}$ is relaxed (and/or when $p_\perp^{\rm trigg}$ is larger for fixed $p_\perp^{\rm assoc}$). In this situation the broadening is, however, still present, with a flatter distribution which extends over a large range of $\Delta\Phi$. A realistic comparison with experimental data would need a much more sophisticated analysis, including not only the probability of multiple splittings but also the hadronization, ignored here. Our results are nevertheless very encouraging, as the distributions we obtain for the single splitting closely resemble the experimental findings, both in shape and in centrality dependence.
Let us also comment on the relation with other approaches. In [@Vitev:2005yg] the large angle medium-induced gluon radiation has been considered and modifications of the angular distribution computed. The obtained jet structures are, however, basically gaussian in azimuthal angle, the effect of the medium being a broadening of the typical jet width. Two main differences from our approach are the source of this discrepancy: First, and most importantly, the angular distributions in [@Vitev:2005yg] are given by the inclusive spectrum, which does not present the two-peak structure in $\Delta\Phi$ – the main point of the present paper is to provide a proposal on how to improve this description by the inclusion of Sudakov form factors; Second, in the single hard approximation used in [@Vitev:2005yg] for the medium-induced gluon radiation the typical emission angle [*decreases*]{} with increasing in-medium path-length [@Majumder:2005sw] as $\sin\theta\sim1/\sqrt{L}$, making the radiation more and more collinear with increasing centrality (in contrast with (\[eq:emangle\]), which is independent of the centrality).
On the other hand, one of the most popular explanations for the non-gaussian shape of the away-side jet-signals found at RHIC is in terms of shock waves produced by the highly energetic particle in the medium. In this picture, a large amount of the energy lost must be transferred to the medium, which thermalizes almost instantaneously. In general the released energy excites both sound and dispersive modes, and only the former produce the desired cone signal (in the dispersive mode, the energy travels basically collinear with the jet). The energy deposition needed for the sound modes to become visible in the spectrum has been found to be quite large [@Casalderrey-Solana:2006sq]. To our knowledge, no attempt has been made so far to describe the centrality dependence of the shape of the azimuthal correlations in this approach. Given the fact that the two formalisms described above rely on completely different hypotheses, finding experimental observables which could distinguish between them is certainly an issue which deserves further investigation. New data on three particle correlations are expected to shed some light on the problem. Here, we just notice that our Y-configurations would lead to similar signatures to the ones from the shock wave model. In the most general case, however, different configurations, not considered in our simple analysis, and characterized by different radiation angles for the two gluons, would produce a smeared signal.
CAS is supported by the 6th Framework Programme of the European Community under the Marie Curie contract MEIF-CT-2005-024624.
[99]{} J. Adams [*et al.*]{} \[STAR Collaboration\], Nucl. Phys. A [**757**]{}, 102 (2005); B. B. Back [*et al.*]{} \[PHOBOS Collaboration\], Nucl. Phys. A [**757**]{}, 28 (2005); I. Arsene [*et al.*]{} \[BRAHMS Collaboration\], Nucl. Phys. A [**757**]{}, 1 (2005); K. Adcox [*et al.*]{} \[PHENIX Collaboration\], Nucl. Phys. A [**757**]{}, 184 (2005). M. Shimomura \[PHENIX Collaboration\], arXiv:nucl-ex/0510023. C. A. Salgado and U. A. Wiedemann, Phys. Rev. Lett. [**93**]{}, 042301 (2004) R. Baier, [*et al.*]{} Nucl. Phys. B [**484**]{} (1997) 265. B. G. Zakharov, JETP Lett. [**65**]{} (1997) 615. U. A. Wiedemann, Nucl. Phys. B [**588**]{} (2000) 303. M. Gyulassy, P. Levai and I. Vitev, Nucl. Phys. B [**594**]{} (2001) 371. X. N. Wang and X. F. Guo, Nucl. Phys. A [**696**]{} (2001) 788. C. A. Salgado and U. A. Wiedemann, Phys. Rev. D [**68**]{}, 014008 (2003). H. Stoecker, Nucl. Phys. A [**750**]{}, 121 (2005); J. Casalderrey-Solana, E. V. Shuryak and D. Teaney, J. Phys. Conf. Ser. [**27**]{}, 22 (2005); J. Ruppert and B. Muller, Phys. Lett. B [**618**]{}, 123 (2005). S. S. Adler [*et al.*]{} \[PHENIX Collaboration\], arXiv:nucl-ex/0507004. K. J. Eskola, [*et al.*]{} Nucl. Phys. A [**747**]{}, 511 (2005). A. Dainese, C. Loizides and G. Paic, Eur. Phys. J. C [**38**]{}, 461 (2005). R. Baier, [*et al.*]{} JHEP [**0109**]{}, 033 (2001). G. Marchesini and B. R. Webber, Nucl. Phys. B [**238**]{}, 1 (1984). For more detail on the role of Sudakov form factors in hadron collisions Monte Carlo simulations see R.K. Ellis, W.J. Stirling and B.R. Webber, [*QCD and Collider Physics*]{}, CUP (1996).
T. Renk and J. Ruppert, arXiv:hep-ph/0605330. S. S. Adler \[PHENIX Collaboration\], arXiv:hep-ex/0605039. N. Grau \[PHENIX Collaboration\], arXiv:nucl-ex/0511046. I. Vitev, Phys. Lett. B [**630**]{}, 78 (2005). A. Majumder and X. N. Wang, Phys. Rev. C [**73**]{}, 051901 (2006) J. Casalderrey-Solana, E. V. Shuryak and D. Teaney, arXiv:hep-ph/0602183.
[^1]: Permanent address: Departamento de Física de Partículas, Universidade de Santiago de Compostela (Spain).
[^2]: We have checked that a convolution with a gaussian as given in [@Renk:2006mv] does not affect our results.
|
---
abstract: 'High precision, high cadence radial velocity monitoring over the past 8 years at the W. M. Keck Observatory reveals evidence for a third planet orbiting the nearby (4.69 pc) dM4 star GJ 876. The residuals of three-body Newtonian fits, which include GJ 876 and Jupiter mass companions b and c, show significant power at a periodicity of 1.9379 days. Self-consistently fitting the radial velocity data with a model that includes an additional body with this period significantly improves the quality of the fit. These four-body (three-planet) Newtonian fits find that the minimum mass of companion “d” is $m\sin{i}=5.89\,\pm\,0.54\,M_{\oplus}$ and that its orbital period is $1.93776\,(\pm\,7\times10^{-5})$ days. Assuming coplanar orbits, an inclination of the GJ 876 planetary system to the plane of the sky of $\sim50^{\circ}$ gives the best fit. This inclination yields a mass for companion d of $m=7.53\,\pm\,0.70\,M_{\oplus}$, making it by far the lowest mass companion yet found around a main sequence star other than our Sun. Precise photometric observations at Fairborn Observatory confirm low-level brightness variability in GJ 876 and provide the first explicit determination of the star’s 96.7-day rotation period. Even higher precision short-term photometric measurements obtained at Las Campanas imply that planet d does not transit GJ 876.'
author:
- 'Eugenio J. Rivera, Jack J. Lissauer, R. Paul Butler, Geoffrey W. Marcy, Steven S. Vogt, Debra A. Fischer, Timothy M. Brown, Gregory Laughlin, Gregory W. Henry'
title: 'A $\sim$7.5 Earth-Mass Planet Orbiting the Nearby Star, GJ 876'
---
Introduction {#intro}
============
GJ 876 (HIP 113020) is the lowest mass star currently known to harbor planets. The first companion discovered, “b,” was announced by Marcy et al. (1998) and Delfosse et al. (1998). They found that this companion had an orbital period, $P_b$, of $\sim 61$ days and a minimum mass ($m_b\sin{i_b}$) of $\sim 2.1\,M_{\rm Jup}$ and that it produced a reflex barycentric velocity variation of its dM4 parent star of amplitude $K_b \sim 240$ ms$^{-1}$. After 2.5 more years of continued Doppler monitoring, Marcy et al. (2001) announced the discovery of a second companion, “c.” This second companion has an orbital period, $P_c$, of $\sim$ 30 days, $m_c\sin{i_c} \sim 0.56\,M_{\rm Jup}$, and $K_c \sim 81$ ms$^{-1}$. As a result of fitting the radial velocity data with a model with two non-interacting companions, the fitted parameters for companion b were different, with the most significant change in $K_b$ (and correspondingly $m_b\sin{i_b}$), which dropped from 240 ms$^{-1}$ to 210 ms$^{-1}$.
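The quoted numbers can be cross-checked against the standard radial velocity semi-amplitude relation. The sketch below is our own consistency check, not the authors' fit: it neglects eccentricity, neglects the planet mass next to the star's, and adopts $M_\star=0.32\,M_\odot$ (the value used later in this paper):

```python
import numpy as np

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
M_jup = 1.898e27         # kg

def msini(K, P_days, M_star_sun, e=0.0):
    """Minimum companion mass from
    K = (2 pi G / P)**(1/3) * m sin(i) / (M_star + m)**(2/3) / sqrt(1 - e^2),
    neglecting m next to M_star."""
    P = P_days * 86400.0
    return (K * np.sqrt(1.0 - e ** 2)
            * (M_star_sun * M_sun) ** (2.0 / 3.0)
            / (2.0 * np.pi * G / P) ** (1.0 / 3.0))

# Companion b: K ~ 240 m/s, P ~ 61 d, M_star = 0.32 M_sun
print(msini(240.0, 61.0, 0.32) / M_jup)   # ~ 2 M_Jup, as quoted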
Marcy et al. (2001) noted that although a model with two planets on unperturbed Keplerian orbits produces a very significantly improved fit to the radial velocity data by dramatically reducing both the $\sqrt{\chi_{\nu}^2}$ and the RMS of the fit[^1], these two statistics were still relatively large. A $\sqrt{\chi_{\nu}^2}$ of $\sim$ 1.0 is expected for a model that is a “good" fit to the data assuming normally (Gaussian) distributed errors. Additionally, dynamical simulations based on this model showed that the system’s stability is strongly dependent on the starting epoch, which is used to determine the initial positions of the planets for the integrations. This indicated that the mutual perturbations among the planets are substantial on orbital timescales (Marcy et al. 2001). Laughlin & Chambers (2001) and Rivera & Lissauer (2001) independently developed self-consistent “Newtonian” fitting schemes which incorporate the mutual perturbations among the planets in fitting the radial velocity data. Nauenberg (2002) developed a similar method which additionally gives a lower limit on the mass of the star; using the radial velocity data from Marcy et al. (2001), he found the mass of GJ 876 to be $M_{\star}>0.3\,M_{\odot}$. This dynamical modeling resulted in a substantially improved fit to the radial velocity data.
Laughlin et al. (2005) provide an updated analysis of the GJ 876 planetary system in which they perform 3-body (two-planet) fits to a radial velocity data set which includes 16 old observations taken at Lick observatory and observations taken at the Keck observatory up to the end of the 2003 observing season. In their fits, they have assumed a stellar jitter of 6 ms$^{-1}$. They found that the two jovian-mass planets are deeply in both the 2:1 mean motion resonance and in an apsidal resonance in which the longitudes of periastron remain nearly aligned. All three resonant angles librate with small amplitudes, which argues for a dissipative history of differential migration for the giant planets. Additionally, they were able to constrain the inclination of the coplanar system to the plane of the sky to be $i > 20^{\circ}$. Finally, they examined the possibility that if companion c is not transiting now, transits might occur in the future for non-coplanar configurations with modest mutual inclinations.
In this paper, we describe the results of a more detailed analysis using a new radial velocity data set. Note that most of the fits presented in this work do not take stellar jitter (which is found to be $\lesssim$ 1.5 ms$^{-1}$, see Section \[3plcf\]) into account. In Section \[obs\], we present the new velocities and describe the procedures which resulted in significant improvements in the precision of these velocities. In Section \[2plcf\], we incorporate the techniques from Laughlin et al. (2005) to determine the uncertainties in the parameters from two-planet fits. In Section \[Res-2pl\], we present a periodogram analysis of the residuals to the two-planet fit, which suggests the presence of a third companion to GJ 876, with a period of 1.94 days. In Section \[3plcf\], we present the results from three-planet fits, which provide estimates of the actual masses of companions b and c as well as $m_d \sin{i_d}$ of the small inner planet. The residuals of the 2-planet fit also show significant power at 2.0548 days; as discussed in Section \[Aliasing\], we have demonstrated that this second period is an alias of the 1.9379-day period. Section \[photometry\] presents the results of long-term photometric monitoring of GJ 876. In Section \[transits\], we show that the third companion was not transiting in 2003. We discuss some interesting aspects of the third companion in Section \[dis\]. Finally, we end with a summary of our results and our conclusions.
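The residual-periodogram search described above can be sketched on synthetic data. In the toy example below the epochs, amplitude, and noise level are illustrative stand-ins, not the GJ 876 measurements; the Lomb-Scargle periodogram from `scipy` recovers the injected 1.9379-day period:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Synthetic "residuals": an unevenly sampled 1.9379-day signal plus noise
# (all numbers illustrative, not the actual radial velocity residuals).
t = np.sort(rng.uniform(0.0, 2700.0, 155))     # observation epochs, days
y = 4.6 * np.sin(2.0 * np.pi * t / 1.9379) + rng.normal(0.0, 2.0, t.size)

# Periodogram over a narrow period range bracketing the candidate signal;
# lombscargle takes *angular* frequencies.
periods = np.linspace(1.8, 2.1, 3001)
power = lombscargle(t, y - y.mean(), 2.0 * np.pi / periods)

best = periods[np.argmax(power)]
print(best)   # peaks near 1.9379 d
```

With strictly nightly sampling an alias near 2.05 d would also appear (the two frequencies sum to roughly one cycle per day), which is the aliasing issue addressed in Section \[Aliasing\].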
Radial Velocity Observations {#obs}
============================
The stellar characteristics of GJ 876 (M4 V) have been described previously in Marcy et al. (1998) and Laughlin et al. (2005). It has a Hipparcos distance of 4.69 pc (Perryman et al. 1997). From its distance and the bolometric correction of Delfosse et al. (1998), its luminosity is 0.0124 $L_{\odot}$. As in previous studies, we adopt a stellar mass of $0.32\,M_{\odot}$ and a radius of $0.3\,R_{\odot}$ based on the mass-luminosity relationship of Henry & McCarthy (1993). We do not incorporate uncertainties in the star’s mass ($0.32\,\pm\,0.03\,M_{\odot}$) into the uncertainties in planetary masses and semi-major axes quoted herein. The age of the star is roughly in the range 1–10 Gyr (Marcy et al. 1998).
We searched for Doppler variability using repeated, high resolution spectra with resolving power R $\approx 70000$, obtained with the Keck/HIRES spectrometer (Vogt et al. 1994). The Keck spectra span the wavelength range from 3900–6200 Å. An iodine absorption cell provides wavelength calibration and the instrumental profile from 5000 to 6000 Å (Marcy & Butler 1992, Butler et al. 1996). Typical signal-to-noise ratios are 100 per pixel for GJ 876. At Keck we routinely obtain Doppler precision of 3–5 ms$^{-1}$ for V=10 M dwarfs, as shown in Figure \[keck\_stable\]. A different set of 4 stable Keck M dwarfs is shown in Figure 2 of Butler et al. (2004). The variations in the observed radial velocities of these stars can be explained by the internal uncertainties in the individual data points; thus, there is no evidence that any of these stars possess planetary companions. Exposure times for GJ 876 and other V=10 M dwarfs are typically 8 minutes.
The internal uncertainties in the velocities are judged from the velocity agreement among the approximately 400 2-Å chunks of the echelle spectrum, each chunk yielding an independent Doppler shift. The internal velocity uncertainty of a given measurement is the uncertainty in the mean of the $\sim 400$ velocities from one echelle spectrum.
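In other words, the adopted internal uncertainty is the standard error of the mean over the chunk velocities. A toy numerical version (the 60 ms$^{-1}$ single-chunk scatter is an assumed, illustrative value):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy chunk statistics: ~400 independent Doppler shifts from one spectrum,
# each with ~60 m/s single-chunk scatter (an illustrative number).
chunks = rng.normal(0.0, 60.0, 400)                   # m/s

v     = chunks.mean()                                 # adopted radial velocity
sigma = chunks.std(ddof=1) / np.sqrt(chunks.size)     # uncertainty in the mean

print(v, sigma)   # sigma ≈ 60/sqrt(400) = 3 m/s
```

A per-chunk scatter of a few tens of ms$^{-1}$ divided by $\sqrt{400}$ naturally lands in the few-ms$^{-1}$ regime quoted for the Keck velocities.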
We present results of N-body fits to the radial velocity data taken at the W. M. Keck telescope from June 1997 to December 2004. The 155 measured radial velocities are listed in Table \[velocities\]. The median of the uncertainties is 4.1 ms$^{-1}$. Comparison of these velocities with those presented in Laughlin et al. (2005) shows significant changes (typically 3–10 ms$^{-1}$) in the velocities at several observing epochs.
The changes in the measured velocities are a result of a more sophisticated modeling of the spectrum at sub-pixel levels and of key improvements in various instrumental idiosyncrasies. The previous HIRES CCD, installed at first light in 1993, had 1) relatively large (24 $\mu$m) pixels, 2) a convex surface, 3) excessive charge diffusion in the CCD substrate, which broadened the detector’s point spread function (PSF), and 4) a subtle signal-dependent non-linearity in charge transfer efficiency (CTE). These combined effects had been limiting our radial velocity precision with HIRES to about 3 ms$^{-1}$ since 1996. In August 2004, the old CCD was replaced by a new 3-chip mosaic of MIT-Lincoln Labs CCDs. These devices provided a very flat focal plane (improving the optical PSF), a finer pixel pitch (which improved our sub-pixel modeling), and more spectral coverage per exposure. The MIT-LL devices are also free of signal-dependent CTE non-linearities and have a much lower degree of charge diffusion in the CCD substrate (which improved the detector PSF). We also switched to a higher-cadence mode in October 2004, observing 3 times per night and for as many consecutive nights as telescope scheduling would allow. Additionally, toward the end of 2004, a high signal-to-noise template of GJ 876 was obtained. All Keck data were then re-reduced using the improved Doppler code together with the new high S/N template and the higher cadence 2004 observations. As a result of the improvements, the two- (and three-) planet fits presented here for this system are significantly improved over previous N-body fits.
Two-Planet Coplanar Fits {#2plcf}
========================
We first performed self-consistent 2-planet fits in which we assumed that the orbits of both companions “b” and “c” are coplanar and that this plane contains the line of sight ($i_b=i_c=90^{\circ}$). These are fits to all the 155 Keck radial velocities listed in Table \[velocities\]. All fits were obtained with a Levenberg-Marquardt minimization algorithm (Press et al. 1992) driving an N-body integrator. This algorithm is a more general form of the algorithms used in Laughlin & Chambers (2001) and Rivera & Lissauer (2001). All fits in this work are for epoch JD 2452490, a time near the center of the 155 radial velocity measurements. We fitted for 10+1 parameters; 10 of these parameters are related to the planetary masses and orbital elements: the planetary masses ($m$), the semi-major axes ($a$), eccentricities ($e$), arguments of periastron ($\omega$), and mean anomalies ($M$) of each body, and 1 parameter is for the radial velocity offset, representing the center-of-mass motion of the GJ 876 system relative to the barycenter of our Solar System and arbitrary zero-point of the velocities.
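The fitting machinery can be illustrated schematically. The sketch below fits a single circular-orbit Keplerian radial velocity model to synthetic data with Levenberg-Marquardt least squares (via `scipy.optimize.least_squares`); it is a minimal stand-in for the actual procedure, which drives a full N-body integrator, and all epochs, uncertainties, and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def rv_model(params, t):
    """Single-planet, circular-orbit radial velocity model (m/s)."""
    P, K, phase, offset = params
    return K * np.sin(2.0 * np.pi * t / P + phase) + offset

def weighted_resid(params, t, v, sigma):
    return (rv_model(params, t) - v) / sigma

# Synthetic data: 155 irregular epochs over ~7.5 yr with 4.1 m/s uncertainties
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 2700.0, 155))
sigma = np.full(155, 4.1)
true_params = np.array([61.0, 210.0, 1.3, 5.0])   # P (d), K (m/s), phase, offset
v = rv_model(true_params, t) + rng.normal(0.0, sigma)

# Levenberg-Marquardt minimization from a nearby initial guess
fit = least_squares(weighted_resid, x0=[60.9, 200.0, 1.0, 0.0],
                    args=(t, v, sigma), method="lm")
P_fit, K_fit = fit.x[0], fit.x[1]
```

In the real fits, the residual function would integrate the full gravitational N-body equations of motion from the epoch parameters rather than evaluate a closed-form sinusoid.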
We first obtained a nominal 2-planet fit to the actual 155 observed velocities. Figure \[2mf=1.00\] shows the model radial velocity (solid line) generated from this nominal 2-planet fit, along with the actual observed velocities (solid points with vertical error bars); the residuals are shown in the lower portion. Table \[2plparam\] shows the best fitted parameters, which are similar to those obtained by Laughlin et al. (2005). The osculating orbital elements (for epoch JD 2452490) listed in Table \[2plparam\] are in Jacobi coordinates. In these coordinates, each planet is explicitly assumed to be in orbit about the center of mass of all bodies interior to its orbit. As explained in Lissauer & Rivera (2001) and Lee & Peale (2003), Jacobi coordinates are the most natural system for expressing multiple-planet fits to radial velocity data.
The uncertainties listed in Table \[2plparam\] were obtained by performing 1000 additional 2-planet fits to 1000 radial velocity data sets generated using the bootstrap technique described in Section 15.6 of Press et al. (1992). Each velocity data set consisted of 155 entries chosen at random from the 155 entries in the actual velocity data set (Table \[velocities\]). Each entry consists of the observing epoch, the velocity measurement, and the instrumental uncertainty. For every choice, all 155 entries were available to be chosen. This procedure results in generated velocity data sets that contain duplicate entries. The fitting algorithm cannot handle such a data set. Thus, when an entry is chosen more than once during the generation of a velocity data set, 0.001 day is added to the observing epoch of each duplicate entry. We then performed 2-planet fits to each of these 1000 velocity data sets, using the parameters from the nominal 2-planet fit in the initial guesses. The 1000 fits result in ranges in the fitted values of each of the parameters. The uncertainties listed in Table \[2plparam\] are the standard deviations of the distributions of the parameters.
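The bootstrap procedure just described can be sketched as follows, here applied to a simple two-parameter sinusoid fit rather than the full N-body model; the data, the model, and the number of realizations (300 rather than 1000, for speed) are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    """Sinusoid with a fixed 61-day period; K (m/s) and phase are fitted."""
    K, phase = params
    return K * np.sin(2.0 * np.pi * t / 61.0 + phase)

def fit_once(t, v, sigma, x0):
    res = least_squares(lambda p: (model(p, t) - v) / sigma, x0, method="lm")
    return res.x

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2700.0, 155))
sigma = np.full(155, 4.1)
v = model([210.0, 1.3], t) + rng.normal(0.0, sigma)
best = fit_once(t, v, sigma, [200.0, 1.0])

boot = []
for _ in range(300):                      # 1000 realizations in the text
    idx = rng.integers(0, 155, size=155)  # draw entries with replacement
    tb, vb, sb = t[idx].copy(), v[idx], sigma[idx]
    # Spread duplicated epochs apart by 0.001 d so the fitter sees distinct times
    order = np.argsort(tb, kind="stable")
    for a, b in zip(order[:-1], order[1:]):
        if tb[b] <= tb[a]:
            tb[b] = tb[a] + 0.001
    boot.append(fit_once(tb, vb, sb, best))
K_err = np.std([b[0] for b in boot])      # bootstrap uncertainty on K
```

The standard deviation of each fitted parameter over the realizations plays the role of the uncertainties quoted in Table \[2plparam\].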
Using an earlier data set of radial velocity measurements, we had also performed both an uncertainty analysis like the one above and an analysis using randomized residuals (described in Section \[Aliasing\]), in which a model is assumed in order to generate synthetic data sets. There was no significant difference in the resulting uncertainties. For the data set used in this work, we use just the bootstrap method to determine the uncertainties in the fitted parameters, since this technique is not constrained by an assumed model. Note that applying the bootstrap method to determine the parameter uncertainties is valid here, since the $\chi^2$ minimization by which the parameters are determined does not depend on the order in which (the square of) the differences between the model and observed velocities are summed. That is, the data points are identically distributed (see pp. 686 and 687 in Press et al. 1992).
The uncertainties determined using the bootstrap method explicitly [*do not*]{} account for the correlations which exist among the fitted orbital parameters. On the other hand, the uncertainties determined from the Levenberg-Marquardt routine [*do*]{} account for the correlations. An example involving $e$ and $\omega$ illustrates the differences. In general, from the Levenberg-Marquardt routine, a small fitted value for $e$ results in a large uncertainty in $\omega$, and a large value for $e$ results in a small uncertainty in $\omega$. Thus, if the uncertainty in $e$ is large, the uncertainty in $\omega$ may be misleading since it depends on the fitted value of $e$ (whereas the actual value of $e$ could be any value within the range of its uncertainty). The bootstrap method is able to explore the full range of possible values of both $e$ and $\omega$ simultaneously.
It is possible that the bootstrap method can give different uncertainties if we use different initial guesses. However, for the GJ 876 system with an assumed coplanar configuration, we have verified that the fitted parameters are relatively robust to changes in the initial guesses (Rivera & Lissauer 2001). When we explore systems with mutual inclinations, multiple minima appear (see Section \[3plcf\]). Thus, varying the initial guesses while using the bootstrap method to determine uncertainties becomes more relevant for fits with mutual inclinations.
In agreement with previous studies, we find that the system corresponding to the parameters in Table \[2plparam\] is also deeply in the 2:1 mean motion resonance and in the apsidal resonance. We performed 1000 simulations (for 10 years each) based on the nominal fit in Table \[2plparam\] and the 1000 fits used to determine the uncertainties in the parameters. The three critical arguments, $\theta_1=\lambda_c-2\lambda_b+\varpi_c$, $\theta_2=\lambda_c-2\lambda_b+\varpi_b$, and $\theta_3=\varpi_c-\varpi_b$, where $\lambda_c$ and $\lambda_b$ are the respective mean longitudes of companions c and b and $\varpi_c$ and $\varpi_b$ are the corresponding longitudes of periastron, all librate about 0$^{\circ}$ with the following small amplitudes: $|\theta_1|_{max}=5.6\,\pm\,1.4^{\circ}$, $|\theta_2|_{max}=28.8\,\pm\,8.4^{\circ}$, and $|\theta_3|_{max}=29.0\,\pm\,9.3^{\circ}$. These amplitudes are smaller than but consistent with the values from Laughlin et al. (2005). Note that all three critical arguments have approximately the same period of libration ($\sim$ 8.8 yr).
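As a sketch of how such libration amplitudes are measured, the snippet below evaluates the first critical argument from time series of the mean longitudes and periastron longitudes, wraps it into $[-180^{\circ}, 180^{\circ})$, and takes the maximum excursion. The input series here are constructed by hand to librate with a $5.6^{\circ}$ amplitude and an 8.8-yr period, standing in for the output of an actual N-body integration.

```python
import numpy as np

def wrap180(angle_deg):
    """Wrap angles into [-180, 180) degrees."""
    return (np.asarray(angle_deg) + 180.0) % 360.0 - 180.0

# Synthetic 10-yr series standing in for the output of an N-body integration:
t = np.linspace(0.0, 3650.0, 20000)                    # days
theta1_true = 5.6 * np.sin(2.0 * np.pi * t / (8.8 * 365.25))
varpi_c = np.zeros_like(t)                             # fixed apsidal line
lam_c = (360.0 / 30.1 * t) % 360.0                     # mean longitude of c
lam_b = (lam_c + varpi_c - theta1_true) / 2.0          # b constructed so that
                                                       # theta1 librates as above
theta1 = wrap180(lam_c - 2.0 * lam_b + varpi_c)        # first critical argument
amp = np.max(np.abs(theta1))                           # libration amplitude (deg)
```

The same wrapping and maximum-excursion step, applied to $\theta_2$ and $\theta_3$, yields the other two amplitudes.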
The overall long-period ($\sim8.8$ yr) envelope of the model radial velocity in Figure \[2mf=1.00\] arises from beating between the periods of the jovian-mass planets, since the periods are not exactly commensurate. The radial velocity reversals occur when planets b and c are on opposite sides of the star, which occurs roughly every 60 days. An inspection of the two components of the radial velocity due to companions b and c, the sum of these two velocities, and the mean anomalies of planets b and c shows that the reversals occur when c is near periastron ($M_c\sim0^{\circ}$) while b is near apastron ($M_b\sim180^{\circ}$). Thus, reversals occur when the (full) orbital velocity of companion b is near a minimum while that of c is near a maximum. This configuration arises because of the resonant state of the system. Also, the acceleration of the star due to companion c when it is at periastron, $Gm_c/(a_c(1-e_c))^2=0.0339$ cms$^{-2}$ (simply using the values from Table \[2plparam\]), is greater than that due to companion b when it is at apastron, $Gm_b/(a_b(1+e_b))^2=0.0239$ cms$^{-2}$. In comparison, the mean acceleration of the star due to planet c is $Gm_c/a_c^2=0.0205$ cms$^{-2}$, and that due to planet b is $Gm_b/a_b^2=0.0252$ cms$^{-2}$. Inspection of the longitudes of periastron for planets b and c also shows that the vertical position in Figure \[2mf=1.00\] and amplitude of a velocity reversal are correlated with the longitude of periastron (of both companions). The largest velocity reversals occur when the lines of apsides are roughly perpendicular to the line of sight ($\omega\sim0\mbox{ or }180^{\circ}$), and these occur near maxima in the full radial velocity of the star (e.g., near epochs JD 2451060, where $\omega\sim360^{\circ}$, and JD 2452600, where $\omega\sim180^{\circ}$, in Figure \[2mf=1.00\]). The period of this vertical traversal of the reversals coincides with the period of circulation of the periastron longitudes ($\sim8.8$ yr).
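The quoted accelerations follow directly from $Gm/r^2$. The check below uses round-number masses, semi-major axes, and eccentricities of roughly the size of the fitted values (these particular numbers are illustrative, not the entries of Table \[2plparam\]), and it recovers the four accelerations to within a few percent.

```python
# CGS constants
G = 6.674e-8        # cm^3 g^-1 s^-2
AU = 1.496e13       # cm
M_JUP = 1.898e30    # g

def accel(m, r):
    """Stellar acceleration Gm/r^2 (cm s^-2) due to a planet of mass m at distance r."""
    return G * m / r**2

# Round-number parameters of roughly the fitted sizes (illustrative values)
m_c, a_c, e_c = 0.62 * M_JUP, 0.130 * AU, 0.224
m_b, a_b, e_b = 1.93 * M_JUP, 0.208 * AU, 0.026

acc_c_peri = accel(m_c, a_c * (1.0 - e_c))   # c at periastron
acc_b_apo  = accel(m_b, a_b * (1.0 + e_b))   # b at apastron
acc_c_mean = accel(m_c, a_c)
acc_b_mean = accel(m_b, a_b)
```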
Note that in a two-Keplerian model, reversals are also present, and they appear to move vertically (Marcy et al. 2001). However, in the two-Keplerian case, the vertical displacement results from the periods not being exactly commensurate. In this case, the line joining the planets when they are on opposite sides of the star will deviate farther and farther from the line of apsides. As a result, the shapes of the reversals in the two types of models appear different (Laughlin et al. 2005).
As in previous studies, we considered various inclinations of the coplanar system and generated a series of 2-planet (10+1 parameter) fits. Figure \[chisq\_i2\] shows the resulting $\sqrt{\chi_{\nu}^2}$ for the 2-planet fits versus the inclination ($i$) as triangles. Note that $\sqrt{\chi_{\nu}^2}$ starts to rise when $i \lesssim 50^{\circ}$. Laughlin et al. (2005) found that $\sqrt{\chi_{\nu}^2}$ starts to rise only when $i \lesssim 40^{\circ}$. The stricter constraint that we are able to derive is primarily a result of the improvements to the measurements mentioned in Section \[obs\]. Moreover, although previous studies (Laughlin & Chambers 2001; Rivera & Lissauer 2001; Laughlin et al. 2005) only found a very shallow minimum in $\sqrt{\chi_{\nu}^2}$ versus $i$, the minimum for our larger, more precise data set is noticeably deeper. Newtonian models yield far better fits to the data than do Keplerian models because there is a clear signal of periapse regression in the data (Ford 2005). However, this observed regression rate can be matched by models with relatively low mass planets b and c (high $\sin{i}$) on low eccentricity orbits, or higher mass planets on more eccentric orbits. The $\sqrt{\chi_{\nu}^2}$ increases only when planetary eccentricities (and masses) become too large, and the shape of the model velocity curve starts to deviate significantly from the data. Note that a large value of $\sqrt{\chi_{\nu}^2}$ for a given small value of $i$ does not immediately imply that a system with such a value for $i$ is unstable on short timescales ($<$ 10 Myr). Indeed, Rivera & Lissauer (2001) found that a system with $i\,\sim\,11.5^{\circ}$ was stable for at least 500 Myr.
Residuals to the Two-Planet Fit {#Res-2pl}
===============================
We performed a (Lomb) periodogram analysis (Section 13.8 in Press et al. 1992) of the residuals of the 2-planet $i=90^{\circ}$ fit. The result is displayed in Figure \[periodogram\] and shows a maximum peak (the peak with the largest power) at a period of 1.9379 days, with highly significant power, i.e., much higher than the typical background variations. The periodograms of the residuals of all of the 2-planet coplanar fits in which we varied the inclination of the system show a maximum peak at $\sim$ 1.94 days. This periodicity is also present as the largest peak in 995 periodograms of the 1000 residuals of the 2-planet coplanar fits used to determine the uncertainties in Table \[2plparam\]. We performed 2-planet $i=90^{\circ}$ fits to various subsets of the first or last N data points, and the periodicity is also present as the largest peak in the periodograms of all of the corresponding residuals; the amount of power at 1.94 days rises with N. Additionally, the blue points in Figure \[folded\_res\] directly show the phased residuals of the two-planet fit, folded with a period of 1.9379 days. The red points in Figure \[folded\_res\] show the phased residuals of the 2-planet, $i=50^{\circ}$ coplanar fit. We also carried out a double-Keplerian fit to subsets of the radial velocities, each of which contained the data for one observing season with a high cadence of observations, and computed the periodogram of the residuals. The sum of all such periodograms exhibited its tallest peak at a period of 1.94 d, in agreement with the best period found from our dynamical models. This test shows that the 1.94 d period cannot be an artifact of the dynamical modeling but rather is inherent in the data. These results provide evidence that GJ 876 likely has a third companion, “d.” The second, smaller peak in Figure \[periodogram\], at 2.0548 days with power $\sim 26$, is likely an alias; this issue is addressed in Section \[Aliasing\]. The ratio of the power in the two periods is 1.3394.
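A periodogram analysis of this kind can be sketched with `scipy.signal.lombscargle`. The example below injects a 1.9379-day sinusoid into Gaussian noise at synthetic, irregularly spaced epochs and recovers the period; the amplitude, epochs, and noise level are invented for illustration and are not the actual residuals.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2700.0, 155))        # irregular epochs (days)
# Hypothetical residual signal at 1.9379 d plus Gaussian noise (m/s)
v = 6.5 * np.sin(2.0 * np.pi * t / 1.9379) + rng.normal(0.0, 4.1, 155)
v -= v.mean()                                     # the Lomb method assumes zero mean

periods = np.linspace(1.5, 2.5, 20000)            # trial periods (days)
power = lombscargle(t, v, 2.0 * np.pi / periods)  # takes angular frequencies
best_period = periods[np.argmax(power)]
```

Note that `lombscargle` expects angular frequencies, so the trial period grid is converted via $\omega = 2\pi/P$ before evaluation.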
In the following two sections, we refer to these two periodicities by their approximate values of 1.94 and 2.05 days, respectively.
An alternative to the third-planet hypothesis is that this periodicity is due to pulsation of the star itself. For the dM2.5 dwarf GJ 436, Butler et al. (2004) reported a planet having $m\sin{i} = 21\,M_{\oplus}$ with $P=2.8$ days and $K=18$ ms$^{-1}$. None of the other 150 M0-M5 dwarfs on the Keck planet search survey exhibits any periodicity near 2 days. This suggests that M dwarfs do not naturally pulsate at such a period. Moreover, we are not aware of any timescale within M dwarfs corresponding to 2 days. The dynamical and acoustic timescales, analogous to those of the solar 5-minute oscillations, would similarly be of order minutes for M dwarfs. We therefore rule out acoustic modes as the cause of the 2-day period. The rotation period of GJ 876 is at least $\sim40$ days, based on its narrow spectral lines and its low chromospheric emission at Ca II H&K (Delfosse et al. 1998); we present photometric evidence of a rotation period of 97 days in Section \[photometry\]. Thus, rotational modulation of surface features cannot explain the 2-day period in the velocities. Similarly, gravity modes and magnetic buoyancy processes seem unlikely to explain the high-Q periodicity that we detect over the timespan of 8 years in GJ 876. Thus, the 2-day periodicity in velocity cannot be easily explained by any known property of this normal M dwarf.
Three-Planet Fits {#3plcf}
=================
Based on the results of the periodogram analysis presented in the previous section, we performed 3-planet self-consistent fits with the period of the third planet initially guessed to be about 1.94 days. These 3-planet fits involve 13+1 parameters; the 3 new fitted parameters are the mass, semi-major axis, and mean anomaly of the third planet, and the remaining 10+1 parameters are the same as in the 2-planet fits described in Section \[2plcf\]. Because of the strong eccentricity damping effects of tides at its distance from the star, the third planet was assumed to be on a circular orbit. Note that this assumption is relaxed later on for some fits.
As in Section \[2plcf\], we performed a nominal 3-planet fit to obtain the best fitted parameters plus 1000 additional 3-planet fits to obtain the uncertainties in the parameters. For the nominal fits $\sqrt{\chi_{\nu}^2}=1.154$ and RMS=4.59 ms$^{-1}$ for 3 planets, compared to $\sqrt{\chi_{\nu}^2}=1.593$ and RMS=6.30 ms$^{-1}$ for 2 planets. Like Table \[2plparam\] in Section \[2plcf\], Table \[3plparam\] shows the best fitted parameters for the 3-planet fit with $i=90^{\circ}$.
Figure \[phase0\] shows the phased velocity contributions due to each planet. This figure is analogous to Figure 10 in Marcy et al. (2002), which shows the triple-[*Keplerian*]{} orbital fit to the radial velocities for 55 Cancri. The major difference is that our Figure \[phase0\] shows a triple-[*Newtonian*]{} fit. Both the data and the model show the interactions between GJ 876’s planets. However, in generating Figure \[phase0\] all of the data are folded into the first orbital period after the first observing epoch, while the models only show the velocities during that first period (in all three panels, the velocities shown in the second period are duplicated from the first period). Since the osculating orbital elements for the outer two planets are varying due to mutual perturbations, the data should deviate from the model, as clearly shown for companions b and c. Since companion d is largely decoupled from the outer planets (in both the data and the model), the observed velocities more closely match the model, and the deviations shown are primarily due to the residual velocities. The decoupling is a consequence of the large ratio of the orbital periods for planets c and d ($>$ 15).
The parameters for the two previously known outer planets are not significantly affected by fitting for the parameters for all three planets. However, all of the uncertainties of these parameters are reduced. Thus, the addition of the third planet does not have as significant an effect on the planetary parameters as the effect that the addition of companion c had on changing the parameters of companion b. This result isn’t surprising, given the very low mass and very different orbital period of planet d.
These results have led us to the likely interpretation of a third companion to GJ 876 with a minimum mass of $m_d\sin{i_d} \sim 6\,M_{\oplus}$ and a period of about 2 days. Although this planet is the lowest mass extrasolar planet yet discovered around a main sequence star, even lower mass planets have been found around the millisecond pulsar PSR B1257+12 (Wolszczan & Frail 1992; Konacki & Wolszczan 2003).
We also generated a series of 3-planet (13+1 parameter) fits in which we varied the inclination of the coplanar system. Figure \[chisq\_i2\] shows $\sqrt{\chi_{\nu}^2}$ for the 3-planet fits versus the inclination as squares. The global minimum in $\sqrt{\chi_{\nu}^2}$ (1.061 with an RMS of 4.23 ms$^{-1}$) occurs at $i=50^{\circ}$, almost precisely the location of the minimum for the two-planet fits. As for the two-planet fits, the $\sqrt{\chi_{\nu}^2}$ starts to rise when $i < 50^{\circ}$. Unlike two-planet fits performed on previous data sets, the minimum at $i=50^{\circ}$ is significant. Using the bootstrap method as in Press et al. (1992), we generated 100 velocity data sets and performed a series of 71 fits ($i=90^{\circ}, i=89^{\circ}, ..., i=20^{\circ}$) to each set in which we varied the inclination of the system. This results in 100 curves which are analogous to the lower curve in Figure \[chisq\_i2\]. Ninety-eight of these curves have a minimum in $\sqrt{\chi_{\nu}^2}$ which occurs at $i=45$–58$^{\circ}$. Seventy-nine have a minimum at $i=48$–52$^{\circ}$. Considering all 100 curves, the mean value (and standard deviation) of the location of the minimum in $\sqrt{\chi_{\nu}^2}$ is $i=50.2\,\pm\,3.1^{\circ}$, and the median value is 50$^{\circ}$. The mean value (and standard deviation) of the drop in $\sqrt{\chi_{\nu}^2}$ from the value at $i=90^{\circ}$ to the value at the minimum is 0.094$\pm$0.036, and the median is 0.097. The mean value (and standard deviation) of the drop in $\sqrt{\chi_{\nu}^2}$ from the value at $i=90^{\circ}$ to the value at $i=50^{\circ}$ is 0.091$\pm$0.036, and the median is 0.095. These values are fully consistent with the drop observed for the actual data. Figure \[7100fits\] shows the entire set of results (small points) along with the results from fitting the real data (squares). 
Figure \[3inc50\] shows the model radial velocity generated from the $i=50^{\circ}$ 3-planet fit to the actual data, along with the actual observed velocities; the residuals are shown in the lower portion. Note that the residuals in Figure \[3inc50\] are shown on the same scale as in Figure \[2mf=1.00\]; the dispersion is clearly smaller in the 3-planet fit. For completeness, Figure \[zoomin\] overlays the two model curves from Figures \[2mf=1.00\] and \[3inc50\] and the data near the epoch JD 2452060; this illustrates the effect that the third planet has on the model. Table \[3plparami=50\] lists the best fitted orbital parameters for $i=50^{\circ}$. As in Tables \[2plparam\] and \[3plparam\], the uncertainties listed in Table \[3plparami=50\] are based on 1000 fits with $i=50^{\circ}$. The top panel of Figure \[3periodograms\] shows the periodogram of the residuals for this fit. There are no clearly significantly strong peaks (but see Section \[dis\]).
We analyzed the significance of the minimum at $i=50^{\circ}$ by performing a limited set of fits in which the orbits of planets b and c have a mutual inclination. An exhaustive search of the entire 20+1 parameter space delimited by the masses and six orbital parameters of each of the three planets, less one representing rotation of the entire system about the line of sight, is beyond the scope of this paper. One complication arises from the appearance of a significant number of multiple minima. For example, Rivera & Lissauer (2001) found several satisfactory two-planet fits (with similar values of $\sqrt{\chi_{\nu}^2}$) which had significantly different fitted orbital parameters. We did, however, fit parameters for two sets of non-coplanar planetary configurations. In both cases, the planetary orbital planes at epoch were fixed such that the tiny inner planet and one of the giant planets were coplanar with orbit normal inclined at epoch by 50$^{\circ}$ from the line of sight, and the other giant planet’s orbit normal was inclined by a pre-determined amount from the line of sight with the same node as that of the other two planets, so that the mutual inclination was equal to the difference in inclination to the line of sight. Other initial parameters for the fitting were taken from the fit given in Table \[3plparami=50\], with $m\sin{i}$, rather than $m$, conserved for the non-coplanar planet. In one set, the inner and middle planets had $i=50^{\circ}$ and the outer planet’s inclination varied. In the other set of fits, the inner and outer planets had $i=50^{\circ}$ and the middle planet’s inclination differed. The $\sqrt{\chi_{\nu}^2}$ of these fits are shown in Figure \[chi\_ibc\]. 
Since only a small amount of the parameter space was explored, these preliminary results are only sufficient to draw two tentative conclusions: 1) since $\sqrt{\chi_{\nu}^2}$ rises more rapidly as the inclination of b is varied, the minimum in $\sqrt{\chi_{\nu}^2}$ appears primarily to constrain the inclination of companion b, and 2) the mutual inclination between the outer two planets is likely small. Note that since the nodes were all the same in these fits, varying the mutual inclination changes the mass ratio of the planets; in contrast, varying the nodes can produce configurations with large mutual inclinations but similar mass ratios to those estimated assuming coplanar systems.
As for the two-planet case, the jovian-mass planets are deeply locked in the resonant state in three-planet simulations. For $i=90^{\circ}$, $|\theta_1|_{max}=5.9\,\pm\,1.1^{\circ}$, $|\theta_2|_{max}=30.2\,\pm\,6.1^{\circ}$, and $|\theta_3|_{max}=30.1\,\pm\,6.6^{\circ}$. Note that for the 3-planet simulations, the uncertainties of the amplitudes of the critical angles are reduced, as are the uncertainties in the parameters in Table \[3plparam\]. For $i=50^{\circ}$, $|\theta_1|_{max}=5.4\,\pm\,0.9^{\circ}$, $|\theta_2|_{max}=19.5\,\pm\,3.8^{\circ}$, and $|\theta_3|_{max}=19.4\,\pm\,4.3^{\circ}$. As in Laughlin et al. (2005), we find a general trend in which the amplitudes of the critical arguments decrease as $i$ decreases.
We attempted to determine a dynamical upper limit to the mass of planet d. For a coplanar system with $i=50^{\circ}$, the fitted $m_d$ is $7.53\,M_{\oplus}$. However, we find that the introduction of an inclination of planet d’s orbit to the initial orbital plane of planets b and c has little effect on $\sqrt{\chi_{\nu}^2}$. We performed a set of 3-planet fits in which we kept the outer two planets in the same plane with $i=50^{\circ}$, but varied the inclination of companion d through values $< 50^{\circ}$. All three nodes were kept aligned. We find that $\sqrt{\chi_{\nu}^2}$ does not deviate significantly above 1.061 until $i_d<3^{\circ}$. We then used the Mercury integration package (Chambers 1999), modified as in Lissauer & Rivera (2001) to simulate the general relativistic precession of the periastra, to perform simulations up to 1 Myr based on these fits, and find that the system is stable for $i_d\ge3^{\circ}$. The fitted mass for planet d for the most inclined stable system is $\sim 103\,M_{\oplus}$. This result indicates that the orbit normal of planet d may lie as close as $3^{\circ}$ to the line of sight. The orbit normal could point toward or away from us. This defines a double cone which occupies a solid angle of 0.0172 steradians, or about 0.137% of $4\pi$ steradians. Even by restricting the parameter space by fixing $i_b=i_c=50^{\circ}$, stability considerations can only exclude configurations with the orbit normal of companion d in this small solid angle. Thus, these stability considerations cannot presently provide a very meaningful upper bound on the mass of companion d.
With only $m\sin{i}$ determined here for the new planet, its actual mass and the value of $i$ remain essentially unconstrained by dynamical considerations. Nearly pole-on orientations of the orbital plane cannot be ruled out. However, for randomly oriented orbital planes, the probability that the inclination is $i$ or lower (more face-on) is given by $P(i)=1-\cos{i}$. Thus, for example, the probability that $\sin{i}<0.5$ is 13%. Hence, it is [*a priori*]{} unlikely that $m_d>2m_d\sin{i_d}$. Moreover, GJ 876 is the only M dwarf for which such intense Doppler observations have been made (due to the interest in the outer two planets). The population of M dwarfs from which this low $m\sin{i}$ was found is only one, namely GJ 876 itself. In contrast, more than 150 M stars have been observed with adequate precision to detect planets of Saturn mass or greater in two day orbits, and most of these have been observed with adequate precision to detect a Neptune mass planet so close in. Yet apart from GJ 876, the only M star known to possess a planet is GJ 436, which has a companion with $m\sin{i}=21\,M_{\oplus}$ on a 2.644-day orbit (Butler et al. 2004). Therefore, the low $m\sin{i}$ of GJ 876 d was likely not drawn from some large reservoir for which a few nearly pole-on orbits might be expected. We conclude that the true mass of the new planet is likely to be $\lesssim10\,M_{\oplus}$.
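The geometric probability argument is easy to verify, both analytically and by Monte Carlo over isotropically oriented orbit normals:

```python
import numpy as np

def prob_inclination_below(i_deg):
    """P(inclination <= i) for randomly oriented orbit normals: 1 - cos(i)."""
    return 1.0 - np.cos(np.radians(i_deg))

# sin(i) < 0.5 is equivalent to i < 30 deg for i in [0, 90] deg
p_analytic = prob_inclination_below(30.0)

# Monte Carlo check: for isotropic orientations, cos(i) is uniform on [0, 1]
rng = np.random.default_rng(7)
i = np.degrees(np.arccos(rng.uniform(0.0, 1.0, 1_000_000)))
p_mc = np.mean(np.sin(np.radians(i)) < 0.5)
```

Both estimates give $1-\cos 30^{\circ} \approx 0.134$, the $\sim$13% quoted in the text.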
We also performed analyses to place limits on the eccentricity of planet d. Two series of one-planet fits to the velocity residuals of both the $i=90^{\circ}$ and $i=50^{\circ}$ two-planet fits (Figure \[folded\_res\]) suggest that the eccentricity of the third companion could be as high as $\sim0.22$. For the initial guesses, we used the best-fitted mass, period, and mean anomaly for companion d from the three-planet self-consistent fit (from Table \[3plparam\] for $i=90^{\circ}$ and from Table \[3plparami=50\] for $i=50^{\circ}$), but varied the initial guessed eccentricity and argument of periastron, and fitted for the eccentricity, argument of periastron, and mean anomaly. Figure \[1plfit\] shows the resulting phased velocities for $i=90^{\circ}$. Additionally, for both $i=90^{\circ}$ and $i=50^{\circ}$ we performed a few fits including all three planets and using an initial guessed eccentricity for the third planet $e_d\sim0.22$. The largest fitted value is $\sim0.28$ (for each value of $i$); this represents our best estimate for an upper limit on the eccentricity of companion d. Based on each of the best fit parameters in Tables \[3plparam\] and \[3plparami=50\], dynamical integrations of the system with the inner planet initially on a circular orbit show that the forced eccentricity of companion d is only $\sim0.0018$ for $i=90^{\circ}$ and $\sim0.0036$ for $i=50^{\circ}$. The tidal circularization timescale for a planet of mass $m_{\rm pl}$ and radius $R_{\rm pl}$ in an orbit with semi-major axis $a$ about a star of mass $M_{\star}$ is $$\tau=\frac{4}{63}Q\left(\frac{a^3}{GM_{\star}}\right)^{1/2}\left(\frac{m_{\rm pl}}{M_{\star}}\right)\left(\frac{a}{R_{\rm pl}}\right)^5$$ (Goldreich & Soter 1966, Rasio et al. 1996). 
For GJ 876 d, for $i=90^{\circ}$, $a=0.021$ AU, $m_{\rm pl}=5.9\,M_{\oplus}$ and $R_{\rm pl}=1.619\,R_{\oplus}$ (see Section \[transits\]) this timescale is $<10^5$ yr if companion d has a dissipation factor, $Q$, similar to that of Earth ($\sim10$). If the $Q$ of companion d is similar to the estimated $Q$ values for the outer planets in the Solar System ($10^4$–$10^5$), then the timescale for eccentricity damping would be 40–400 Myr, which is less than the estimated 1-10 Gyr lifetime of the star (Marcy et al. 1998). These arguments and results indicate that the eccentricity of companion d is fully consistent with 0.
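The circularization timescale above can be evaluated directly. The stellar mass used below ($0.32\,M_{\odot}$, a commonly quoted value for GJ 876) is an assumption, since it is not stated in this section; with it, the Earth-like-$Q$ timescale indeed comes out below $10^5$ yr, and the giant-planet-like-$Q$ timescale falls in the tens-of-Myr range.

```python
import numpy as np

# CGS constants
G = 6.674e-8
M_SUN, M_EARTH, R_EARTH = 1.989e33, 5.972e27, 6.371e8   # g, g, cm
AU, YEAR = 1.496e13, 3.156e7                            # cm, s

def tau_circ(Q, a, m_pl, R_pl, M_star):
    """Tidal circularization timescale (s), Goldreich & Soter (1966)."""
    return (4.0 / 63.0) * Q * np.sqrt(a**3 / (G * M_star)) \
        * (m_pl / M_star) * (a / R_pl)**5

M_star = 0.32 * M_SUN           # assumed GJ 876 mass (not given in this section)
a, m_pl, R_pl = 0.021 * AU, 5.9 * M_EARTH, 1.619 * R_EARTH

tau_q10_yr = tau_circ(10.0, a, m_pl, R_pl, M_star) / YEAR     # Earth-like Q
tau_q1e4_yr = tau_circ(1.0e4, a, m_pl, R_pl, M_star) / YEAR   # giant-planet-like Q
```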
We addressed the issue of stellar jitter by performing a few 3-planet, $i=50^{\circ}$, fits in which we folded an assumed value of stellar jitter in quadrature into the instrumental uncertainties (listed in Table \[velocities\]). The most relevant quantity in these fits is the $\sqrt{\chi_{\nu}^2}$. Although we only fit for 13+1 parameters at a time, by varying the inclination of the system, we effectively allowed a $15^{\rm th}$ free parameter. To account for this extra parameter, the formal $\sqrt{\chi_{\nu}^2}$ must be adjusted upwards by a factor of $\sqrt{141/140}\approx1.0036$. (Note that the 141 and 140 are the number of observations, 155, minus the number of fitted parameters, 13+1 and 14+1, respectively.) Accounting for this factor, and folding in an assumed jitter of 0, 0.5, 1.0, 1.5, and 2.0 ms$^{-1}$, the $\sqrt{\chi_{\nu}^2}$ are 1.065, 1.057, 1.034, 0.9996, and 0.956 respectively. These results indicate that the actual stellar jitter of GJ 876 is likely to be small ($\lesssim1.5$ ms$^{-1}$), as it is unlikely for the $\sqrt{\chi_{\nu}^2}$ to be substantially less than unity for a data set as large as the one used in this paper. Note that the period of companion d is the same to better than 1 part in $10^5$ for all five values of the assumed stellar jitter.
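The bookkeeping in this paragraph (jitter folded in quadrature into the instrumental uncertainties, plus the degrees-of-freedom correction for the effective 15th parameter) can be sketched as follows. The residuals are synthetic, so only the structure of the calculation, not the quoted $\sqrt{\chi_{\nu}^2}$ values, is reproduced.

```python
import numpy as np

def sqrt_chi2_nu(resid, sigma_inst, jitter, n_fitted):
    """sqrt(reduced chi^2) with stellar jitter folded in quadrature."""
    sigma_eff = np.sqrt(sigma_inst**2 + jitter**2)
    nu = resid.size - n_fitted
    return np.sqrt(np.sum((resid / sigma_eff)**2) / nu)

# Degrees-of-freedom correction for the effective 15th (inclination) parameter:
# nu drops from 155 - (13+1) = 141 to 155 - (14+1) = 140
dof_factor = np.sqrt(141.0 / 140.0)

# Synthetic residuals: instrumental noise plus 1.5 m/s of jitter
rng = np.random.default_rng(3)
sigma_inst = np.full(155, 4.1)
resid = rng.normal(0.0, np.sqrt(sigma_inst**2 + 1.5**2))

chi_no_jitter = sqrt_chi2_nu(resid, sigma_inst, 0.0, 14) * dof_factor
chi_jitter = sqrt_chi2_nu(resid, sigma_inst, 1.5, 14) * dof_factor
```

As in the text, assuming the correct jitter drives $\sqrt{\chi_{\nu}^2}$ toward unity, while ignoring it leaves the statistic inflated.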
Aliasing: What Is the Period of the Third Companion? {#Aliasing}
====================================================
The periodogram presented in Section \[Res-2pl\] shows significant power at both 1.94 days and 2.05 days. Using $\sim$ 2.05 days as an initial guess for the period of the third planet, we performed a 3-planet fit to the observed radial velocities. The resulting fitted period for the third planet is 2.0546 days. More importantly, this fit is not vastly worse than the fit with the period of the third planet initially guessed to be 1.94 days ($\sqrt{\chi_{\nu}^2}=1.154$ and RMS=4.59 ms$^{-1}$ for 1.94 days, and $\sqrt{\chi_{\nu}^2}=1.290$ and RMS=5.08 ms$^{-1}$ for 2.05 days, compared with $\sqrt{\chi_{\nu}^2}=1.593$ and RMS=6.30 ms$^{-1}$ for the corresponding 2-planet model). This result prompted a series of tests which together strongly support the hypothesis that the 1.94-day period is the correct one (and the 2.05-day period is an alias), as follows.
We first examined the periodograms of the residuals of the three-planet fits for each period (the lower two panels of Figure \[3periodograms\]), and we detected no peaks with very significant power at any period (there is moderate power near 9 days and at other periods — see Section \[dis\]). The detection of a significant peak at 1.94 days in the residuals of the three-planet fit with $P_d=2.05$ days would have been a clear indication that the 1.94-day period is the true period, because the introduction of a third planet with the wrong period should not (fully) account for the true periodicity. This simple test thus implies that one of the near two-day periods is an alias, but it does not indicate which period is the alias.
We then analyzed various mock velocity data sets to determine whether or not both near 2-day periodicities could result purely from the spacing of the observations and to determine the relative robustness of the two short-period planet solutions. We generated the mock velocity data sets for this study by randomizing residuals, as follows: The difference between the observed and modeled velocities results in a residual velocity at each observing epoch. We indexed the 155 residuals. At each observing epoch, we chose a random integer from 1 to 155 inclusive, and added the corresponding residual to the model velocity at that epoch.
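The residual-randomization recipe above amounts to a bootstrap with replacement; a minimal sketch (the model and residual arrays here are placeholders, not the actual fit):

```python
import numpy as np

def mock_velocities(model_vel, residuals, rng):
    """At each epoch, draw one of the indexed residuals at random
    (with replacement) and add it to the model velocity at that epoch."""
    idx = rng.integers(0, len(residuals), size=len(model_vel))
    return model_vel + residuals[idx]

rng = np.random.default_rng(1)
model_vel = np.zeros(155)                   # placeholder model velocities
residuals = rng.normal(0.0, 4.6, size=155)  # placeholder residuals (m/s)

# e.g., 1000 synthetic data sets, each to be refit and periodogram-searched
mock_sets = [mock_velocities(model_vel, residuals, rng) for _ in range(1000)]
```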
One issue to address is whether the third periodicity (for both periods) is an artifact of the observing frequency. We generated 1000 mock velocities by randomizing the residuals of the two-planet model and performed two-planet fits to these velocities. If the third periodicity is purely due to the observing frequency, then the periodograms of the residuals to these two-planet fits should show significant power at the third periodicity. Figure \[plot\_power\] shows the maximum power at the two periods in each of the 1000 periodograms. In not one case out of 1000 did we observe a periodogram which resembled the periodogram presented in Figure \[periodogram\]. Not one peak was found at either period that was as significant as the ones in the periodogram in Figure \[periodogram\]. In fact, for the 1.94-day period, the mean (and standard deviation) of the maximum power for the 1000 periodograms is $3.2\,\pm\,1.4$. For the 2.05-day period, this is $3.1\,\pm\,1.3$. The most significant peaks at either periodicity had a power of $\sim10$, about 38% of the power in the second highest peak in Figure \[periodogram\]. In Figure \[periodogram\], the observed maximum peak at 1.94 days ($\sim35$) is $>22$ standard deviations above the mean value of the maximum peaks determined in Figure \[plot\_power\]. At 2.05 days, the observed power ($\sim26$) is $>17$ standard deviations above the mean value of the maximum peaks determined in Figure \[plot\_power\]. This strongly indicates that (at least) one of the periodicities must be real.
We performed similar experiments in which we generated 4000 sets of mock velocities based on the three-planet model. Using the residual-randomization procedure, two thousand of the sets were generated from the model in which the third planet had a period of 1.94 days. The remaining 2000 sets were generated in an analogous manner but based upon the model in which the third planet had a period of 2.05 days. We then performed two-planet fits to all 4000 velocity sets and examined the periodograms of the residuals of these fits to see whether we could reproduce results similar to the one in Section \[Res-2pl\]. Figure \[power\_ratio\_hist\] shows histograms of the ratio of the power at 1.94 days to the power at 2.05 days. Red is for the models with the third planet at 1.94 days, and blue is for the models with the third planet at 2.05 days. With a model in which the third planet had a period of 1.94 days, 1996 periodograms have a maximum peak at 1.94 days, and 278 have a ratio of the power at 1.94 days to the power at 2.05 days exceeding 1.3394 (478 have this ratio exceeding 1.3). Thus, the model with $P_d=1.94$ days can yield periodograms which resemble the result when a two-planet fit is performed on the actual data. With a model in which the third planet had a period of 2.05 days, 79 periodograms have a maximum peak at 1.94 days, and none have a ratio of the power at 1.94 days to the power at 2.05 days exceeding 1.3394. Thus, the model with $P_d=2.05$ days [*never*]{} resulted in a periodogram which resembles the result when a two-planet fit is performed on the actual data. This is very strong evidence that the 1.94-day period is the true period of the third companion.
Additional results also indicate that the 1.94-day period is the “better” one. Briefly, the $\sqrt{\chi_{\nu}^2}$ and RMS are smaller, and there is more power at 1.94 days in the periodogram of the residuals of the two-planet fit. For the three-planet fits, the osculating radial velocity amplitude of the star, $K$, due to the third planet is $\sim$ 40% larger than the RMS of the fit with $P_d=1.94$ days, while this $K$ is only 5% larger than the RMS for the fit with $P_d=2.05$ days.
Photometric Variability in GJ 876 {#photometry}
=================================
Very little is known about the photometric variability of GJ 876 on rotational and magnetic cycle timescales. Weis (1994) acquired 38 Johnson $V$ and Kron $RI$ measurements at Kitt Peak National Observatory over an 11 year period. He observed variability of a couple percent or so and suspected a possible periodicity of 2.9 years. Based on these findings, Kazarovets & Samus (1997) assigned GJ 876 the variable star name IL Aqr in [*The 73rd Name List of Variable Stars*]{}. The Hipparcos satellite, however, made 67 brightness measurements over the course of its three-year mission (Perryman et al. 1997) and failed to detect photometric variability. These results are consistent with the star’s low level of chromospheric and coronal activity (Delfosse et al. 1998).
We have acquired high-precision photometric observations of GJ 876 with the T12 0.8 m automatic photometric telescope (APT) at Fairborn Observatory to measure the level of photometric variability in the star, to derive its rotation period, and to assess the possibility of observing planetary transits in the GJ 876 system. The T12 APT is equipped with a two-channel precision photometer employing two EMI 9124QB bi-alkali photomultiplier tubes to make simultaneous measurements in the Strömgren $b$ and $y$ passbands. This telescope and its photometer are essentially identical to the T8 0.8 m APT and photometer described in Henry (1999). The APT measures the difference in brightness between a program star ($P$) and a nearby constant comparison star with a typical precision of 0.0015 mag for bright stars ($V<8.0$). For GJ 876, we used HD 216018 ($V=7.62$, $B-V=0.354$) as our primary comparison star ($C1$) and HD 218639 ($V=6.43$, $B-V=0.010$) as a secondary comparison star ($C2$). We reduced our Strömgren $b$ and $y$ differential magnitudes with nightly extinction coefficients and transformed them to the Strömgren system with yearly mean transformation coefficients. Further information on the telescope, photometer, observing procedures, and data reduction techniques employed with the T12 APT can be found in Henry (1999) and Eaton, Henry, & Fekel (2003).
From June 2002 to June 2005, the T12 APT acquired 371 differential measurements of GJ 876 with respect to the $C1$ and $C2$ comparison stars. To increase the precision of these observations, we averaged our Strömgren $b$ and $y$ magnitudes into a single $(b+y)/2$ passband. The standard deviation of the $C1-C2$ differential magnitudes from their mean is 0.0030 mag, slightly larger than the typical 0.0015 mag precision of the APT observations. However, since the declination of GJ 876 is $-14\arcdeg$, the APT observations are taken at a relatively high air mass, degrading the photometric precision somewhat. Periodogram analysis of the $C1-C2$ differential magnitudes does not reveal any significant periodicity, indicating that both comparison stars are photometrically constant. However, the standard deviations of the GJ 876 differential magnitudes with respect to the two comparison stars, $P-C1$ and $P-C2$, are 0.0111 and 0.0110 mag, respectively, indicating definite variability in GJ 876. Our three sets of 371 $(b+y)/2$ differential magnitudes are given in Table 5.
The $P-C1$ differential magnitudes of GJ 876 are plotted in the top panel of Figure \[photometry\_fig\]. Photometric variability of a few percent is clearly seen on a timescale of approximately 100 days; additional long-term variability is present as well. The light curve closely resembles those of typical late-type stars with low to intermediate levels of chromospheric activity (Henry, Fekel, & Hall 1995). Results of periodogram analysis of the data in the top panel are shown in the middle panel, revealing a best period of 96.7 days with an estimated uncertainty of approximately one day. We interpret this period to be the rotation period of the star, made evident by modulation in the visibility of photospheric starspots, which must cover at least a few percent of the stellar surface. The observations are replotted in the bottom panel, phased with the 96.7-day period and an arbitrary epoch, and clearly reveal the stellar rotation period.
There is little evidence from the photometric data for variability on timescales much shorter than the rotation period of the star. In particular, no photometric flares are apparent. On two nights when GJ 876 was monitored for several hours (JD 2453271 and JD 2453301), the star appears to be constant to better than 1%, consistent with our measurement precision over a large range of airmass. We conclude that photometric transits of the planetary companions, if they occur, could be observed in this star in spite of its intrinsic photometric variability.
Photometric Limits on Transits by GJ 876 “d” {#transits}
============================================
The [*a priori*]{} probability that a planet on a circular orbit transits its parent star as seen from the line of sight to Earth is given by, $${\cal P}_{\rm transit}=0.0045 \left(\frac{1 {\rm AU}}{a}\right)\left(\frac{R_{\star}+R_{\rm pl}}{R_{\odot}}\right)$$ where $a$ is the semi-major axis of the orbit and $R_{\star}$ and $R_{\rm pl}$ are the radii of the star and planet, respectively (Laughlin et al. 2005). We take $R_{\star}=0.3\,R_{\odot}$. For a given composition, planetary density increases with mass as higher pressures in the interior lead to greater self-compression. Léger et al. (2004) find that the mean density of a $6\,M_{\oplus}$ planet with composition similar to that of Earth would be $\sim$ 39% greater than that of our planet. A $5.9\,M_{\oplus}$ planet with such a density would have a radius of $1.619\,R_{\oplus}$, or $0.0147\,R_{\odot}$. Planets of comparable mass but containing significant quantities of astrophysical ices and/or light gases would be larger. The third companion’s orbital radius is $\sim$ 0.021 AU. Thus, the [*a priori*]{} probability that the third companion transits GJ 876 is only $\sim7\%$. The inclination of the orbit to the plane of the sky would have to be $\gtrsim 86^{\circ}\, (\sim\arccos{0.07})$ to guarantee periodic transits. Until recently, radial velocity measurements provided little constraint on the orbital inclinations of GJ 876’s planets (Laughlin et al. 2005), and they still are only able to exclude a trivial fraction of possible orientation angles for planet d (Section \[3plcf\]). Benedict et al. (2002) astrometrically estimated companion b’s inclination to the plane of the sky to be $84\,\pm\,6^{\circ}$. If we assume this range of possible values for the system and that all three planets are nearly coplanar, the probability of a transit by companion d rises to $\sim25\%$. 
With a radius of $1.619\,R_{\oplus}$, the transit depth is expected to be of order 0.22%, which is photometrically detectable by medium and large aperture telescopes. Additionally, the transit duration can be as long as $(\arcsin{0.07}/\pi)P_d$, or slightly over an hour.
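These geometric estimates are straightforward to reproduce; a short sketch using only the values quoted above (0.0147 $R_{\odot}$ planetary radius, 0.3 $R_{\odot}$ star):

```python
import math

# Values taken from the text
R_star = 0.3        # stellar radius, R_sun
R_pl = 0.0147       # planetary radius, R_sun (= 1.619 R_earth)
a = 0.021           # semi-major axis, AU
P_d = 1.93776       # orbital period, days

p_transit = 0.0045 * (1.0 / a) * (R_star + R_pl)  # a priori probability, Eq. (1)
depth = (R_pl / R_star) ** 2                      # central transit depth
duration_hr = (math.asin(0.07) / math.pi) * P_d * 24.0

print(round(p_transit, 3), round(depth, 4), round(duration_hr, 2))
# -> 0.067 0.0024 1.04  (i.e., ~7%, a depth of order 0.2%, about an hour)
```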
We used previous N-body fits to generate model radial velocities, which were then used to predict transit epochs for October, 2003. We examined the reflex radial velocity of the star due to just planet d; this motion is almost periodic, as perturbations of planet d by planets b and c are small. For a planet on a circular orbit, transit centers should coincide with times when the portion of the reflex velocity due to just the inner companion goes from positive (red shifted) to negative (blue shifted).
Since the probability that planet d transits the face of its star is relatively large, we obtained CCD photometry with the SITe\#3 camera ($2048\times3150$ 15 $\mu$m pixels) at the Henrietta Swope 1 m telescope at Las Campanas, in an attempt to detect such transits. We observed for 6 consecutive nights (UT dates 10 to 15 Oct 2003), with possible transits expected (based on the RV ephemeris and the 1.94-day orbital period) during the nights of 10, 12, and 14 Oct. We took all observations using a Harris V filter; integration times were typically 100 s to 120 s, depending upon seeing and sky transparency. With overheads, the observing cadence was about 245 s per image; on a typical night we obtained about 60 images, with a total of 355 usable images for the 6-night run. The nights of 10, 11, and 15 Oct were of photometric quality or nearly so; on 12, 13, and 14 Oct, each night began with an interval (roughly an hour long) of thin to moderate cirrus over the target field. Integration times were necessarily long in order to maintain a moderate duty cycle, and to accumulate enough total exposure time to reduce noise from atmospheric scintillation to acceptable levels. To avoid detector saturation for these relatively bright stars, we therefore defocused the images to a width of about 30 CCD pixels.
Each CCD integration contained the image of GJ 876, as well as those of 10 other stars that were bright enough to use as comparison objects. We computed the extinction-corrected relative flux (normalized to unity when averaged over the night of 10 Oct) of GJ 876 from the CCD images using proven techniques for bright-star CCD photometry, as described by, e.g., Gilliland & Brown (1992). We removed residual drifts with typical amplitudes of 0.002 (which we attribute to time-varying color-dependent extinction, combined with the extremely red color of GJ 876) from the time series of GJ 876 by subtracting a quadratic function of time (defined separately for each night). After this correction, the RMS variation of the relative flux for GJ 876 was in the range 0.001 to 0.0015 for each of the 6 nights.
We next searched for evidence of periodic transits by a small planet. We did this by folding the time series at each of a set of assumed periods, and then convolving the folded series with negative-going boxcar functions (rectangular window) with unit depth and with widths of 30, 45, and 60 minutes. We evaluated the convolution with the boxcar displaced from the start of the folded time series by lag times ranging from zero up to the assumed period, in steps of 0.005 day, or about 7 minutes. The convolution was normalized so that its value can be interpreted as the depth (in relative flux units) of a transit-like signal with the same width as the convolution boxcar function. Our transit detection statistic consisted of the normalized convolution, multiplied by the square root of the number of data points lying within the non-zero part of the boxcar at any given value of the lag. For most periods and phases, the number of included data points is about 15, so the detection statistic exceeds the transit depth by a factor of about 4. Since the expected duration of a transit by a short-period planet across an M4 dwarf is about 60 minutes, the range of boxcar widths covers both central and off-center transits, down to a duration for which the noise in the convolution becomes prohibitive. We tested periods between 1.8 and 2.0 days, on a dense period grid.
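A minimal sketch of this fold-and-convolve search on synthetic data follows; the normalization here (mean flux outside versus inside the boxcar) is one simple choice consistent with the description above, and the actual pipeline details may differ:

```python
import numpy as np

def detection_statistic(t, flux, period, width, lag):
    """Fold at `period` (days), take points inside a boxcar of `width`
    days starting at phase-time `lag`, and return depth * sqrt(N),
    with depth > 0 for a transit-like (negative-going) dip."""
    phase_t = np.mod(t, period)
    inside = (phase_t >= lag) & (phase_t < lag + width)
    n = inside.sum()
    if n == 0:
        return 0.0
    depth = np.mean(flux) - np.mean(flux[inside])
    return depth * np.sqrt(n)

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 6.0, size=355))     # mock times over 6 nights
flux = 1.0 + rng.normal(0.0, 0.0012, size=355)   # mock relative flux

# Scan trial periods and lags, keeping the largest transit-like value
best = max(detection_statistic(t, flux, p, 45 / 1440.0, lag)
           for p in np.arange(1.8, 2.001, 0.01)
           for lag in np.arange(0.0, 2.0, 0.005))
```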
The solid curve in Figure \[Tim\] shows the logarithm of the histogram of the detection statistic, computed using all of the data. The largest transit-like events that occur in the data set have detection statistics of 0.0068, but the histogram is almost symmetrical about zero, so that there are very nearly as many positive-going boxcar events as negative-going ones. The value of the transit amplitude for the planet’s expected period and phase is 0.0005, assuming a 60-minute transit; this is about 1.3 standard deviations larger than zero. From the distribution of transit amplitudes, we estimate the probability that a true transit with amplitude 0.0015 is overlain by a negative-going noise spike with amplitude $-0.001$ (yielding an observed signal of 0.0005) is only about 2.4%. Thus, the observations contain no convincing evidence for planetary transits within the period range searched, and within the range of orbital phases probed by the data.
To refine our understanding of detectability, we added to the data synthetic transits with various depths and 60-minute duration; the phases were chosen so that transits occurred on each of the UT dates 10, 12, and 14 Oct. The dashed line in Figure \[Tim\] shows the resulting histogram of the detection statistic for synthetic transits with depth 0.0015. This histogram is plainly skewed toward positive values (negative-going transits), since real transits produce not only a few very large values of the detection statistic, but also many smaller ones (when the assumed period is not exactly correct, for instance). Examination of many realizations of synthetic transits suggests that the skewness shown in Figure \[Tim\] is near the limit of reliable detection. Adding synthetic transits with depths of 0.002, in contrast, always produces a histogram that is unmistakably skewed. Accordingly, we conclude that (within the period and phase limits already mentioned) there is no evidence for a transiting planet that obscures more than 0.002 of the star’s light, and it is highly improbable that there are transits with depth as great as 0.0015. Most likely, this is because the orbital inclination is such that transits do not occur. If transits are taking place, the maximum radius of the transiting body is approximately $\sqrt{0.002}\,R_{\star} = 9.4 \times 10^3$ km, or $1.47\,R_{\oplus}$, assuming the radius of GJ 876 to be $0.3\,R_{\odot}$. Assuming a maximum transit depth of 0.0015, the corresponding planetary radius is $1.28\,R_{\oplus}$. Note that a larger planet can have a non-central transit.
Even though companion d was not transiting in 2003, it may transit in the future if the planets orbiting GJ 876 are on mutually inclined orbits. An analysis of the analogous case of possible transits by companion c is presented by Laughlin et al. (2005).
Discussion {#dis}
==========
The mass of GJ 876’s third companion is $\sim7.5\,M_{\oplus}$. Assuming this value of mass and a density of 8 g cm$^{-3}$ to account for a bit more compression than that found for a $6\,M_{\oplus}$ rocky planet by Léger et al. (2004), the planet’s radius is $1.73\,R_{\oplus}$. The escape velocity from the surface would be slightly more than twice that of Earth, so that the planet may well have retained a substantial atmosphere, and may thus have a larger optical radius.
The proximity of GJ 876 d to its star implies that the timescale for tidal synchronization of its rotation to its orbital period is short for any reasonably assumed planetary properties. However, it is possible that atmospheric tides could maintain non-synchronous rotation, as they do on Venus (Dobrovolskis 1980). In analogy to models for Europa (Greenberg & Weidenschilling 1984), slightly non-synchronous rotation could result from an eccentric orbit forced by perturbations from the outer planets, especially if planet d lacks a substantial permanent asymmetry. Note in this regard that the topography of the planet’s surface, if it has one, is likely to be muted as a result of the high surface gravity ($\sim2.5$ times that of Earth) and the expected malleability resulting from the planet’s large potential for retaining internal heat.
The mean effective temperature of a planet orbiting at $a=0.021$ AU from a star with $L=0.0124\,L_{\odot}$ is $T_{\rm effective} \sim 650 (1-A)^{1/4}$ K, where $A$ is the albedo. Assuming that heat is uniformly distributed around the planet, as it is on Venus, and that the planet’s albedo does not exceed 0.8, its effective temperature should be in the range 430–650 K (157–377$^{\circ}$C). Simulations by Joshi et al. (1997) suggest that synchronously rotating terrestrial planets with sufficiently massive atmospheres efficiently redistribute heat to their unlit hemispheres. For the opposite extreme of a synchronously rotating planet with no redistribution of stellar heat, the temperature would be a factor of $\sqrt{2}$ higher than this mean value at the substellar point, and would vary as the 1/4 power of the cosine of the angle between the position of the star in the sky and the vertical (on the lit hemisphere), implying very cold values near the terminator; the unlit hemisphere would be extremely cold.
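The temperature scalings in this paragraph can be written out explicitly; a short sketch (the 650 K normalization and the 0.8 albedo ceiling are the values from the text):

```python
T0 = 650.0  # K: mean T_eff at zero albedo for a = 0.021 AU, L = 0.0124 L_sun

def t_eff_uniform(albedo):
    """Mean effective temperature with uniform heat redistribution."""
    return T0 * (1.0 - albedo) ** 0.25

def t_local(albedo, cos_zenith):
    """No redistribution: sqrt(2) hotter at the substellar point,
    falling off as cos^(1/4) of the stellar zenith angle."""
    return 2 ** 0.5 * t_eff_uniform(albedo) * cos_zenith ** 0.25

print(round(t_eff_uniform(0.8)), round(t_eff_uniform(0.0)))  # -> 435 650
print(round(t_local(0.0, 1.0)))                              # -> 919
```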
We can conceive of numerous possible scenarios to explain the formation of GJ 876 d. If the planet is rocky, it could have formed in situ by accretion of small rocky planetesimals that spiraled inwards as a result of angular momentum loss through interactions with gas within the protoplanetary disk. At the other extreme, GJ 876 d may be the remnant core of a gas giant planet that migrated so close to its star that it lost (most of) its gaseous envelope to Roche lobe overflow (Trilling et al. 1998). Neptune/Uranus-like formation coupled with inwards migration models are also quite plausible, as well as various other combinations of accretion/migration scenarios. Additional observations of this and other small, close-in extrasolar planets will be required to narrow down phase space enough for us to have any confidence that any particular model is indeed the correct one.
The value of $\sqrt{\chi_{\nu}^2}$ = 1.065 for our effective 14+1 parameter fit implies that the 3-planet coplanar inclined model provides an excellent fit to the data. Nonetheless, the fit is not perfect, and additional variations may be induced by stellar jitter and/or unmodelled small planets, as well as mutual inclinations of the three known planets. We note that the residuals to both the 90$^{\circ}$ and the 50$^{\circ}$ 1.94-day 3-planet fits to the data have modest power near 9 days (Figure \[3periodograms\]); this period is also present in many of the residuals to artificial data sets used to test various aspects of the 3-planet fits. Somewhat larger peaks near 13 and 120 days are present in the residuals to the 50$^{\circ}$ fit. A small planet with an orbital period of around 13 days would be located so close to the massive and eccentric planet c that it would not be stable for long unless it occupied a protected dynamical niche. Even around 9.4 days, long-term stability is unlikely, especially if $i_c \lesssim 50^{\circ}$. The peak at 120 days probably represents an incomplete accounting of the radial velocity signature of the two large planets, but it could also represent a small outer planet on a resonant orbit. We note that Ji, Li, & Liu (2002, private communication) performed simulations of the GJ 876 system with an additional planet with $a>0.5$ AU. In agreement with Rivera & Lissauer (2001), all were found to be stable. More data are required to determine whether or not additional planets orbit GJ 876.
Summary {#sum}
=======
We have shown that the GJ 876 system likely has a low mass planet on a close-in orbit. Fitting a model that includes the previously identified jovian-mass companions to the radial velocity data obtained at the Keck telescope results in residuals that contain significant power at a periodicity of 1.9379 days. Including a third companion with this period in a self-consistent model results in a significant improvement in the quality of the fit. The third companion, which we refer to as GJ 876 d, is found to have a minimum mass of $5.89\,\pm\,0.54\,M_{\oplus}$ and an orbital period of 1.93776 $\pm$ 0.00007 days. The corresponding semi-major axis is 0.021 AU, making it clearly the smallest $a$ of any planet found in Doppler surveys. Note that this is $\sim$ 10 stellar radii, roughly coincident with the number of stellar radii separating 51 Pegasi b from its host star. Planet d is also probably closer to its star in an absolute sense than are any of the planets found by transit. A significantly better fit to the data is obtained by assuming that the normal to the three planets’ orbits is inclined to the line of sight by $50^{\circ}$ than by assuming this inclination to be $90^{\circ}$. For this $50^{\circ}$ fit, the actual mass of the inner planet is $7.53\,\pm\,0.70\,M_{\oplus}$.
We have searched for transits, and find no evidence of them. The lack of observable transits means that we cannot place direct observational constraints on planet d’s radius and composition. The requirement that the planet be contained within its Roche lobe implies that its density is at least 0.075 g cm$^{-3}$, not a very meaningful bound. Thus, while the newly discovered companion may well be a giant rocky body, a large gaseous component cannot be excluded. Continued study of GJ 876 will provide us with additional information on companion d, which may well be the most “Earth-like” planet yet discovered. See Table \[3plparami=50\] for our best estimates of the masses and orbital parameters of all three planets now known to orbit GJ 876.
We thank Drs. Ron Gilliland, Mark Phillips, and Miguel Roth for informative advice and consultation during the observations performed at Las Campanas. We also are grateful for the contributed algorithms by Jason T. Wright, Chris McCarthy, and John Johnson. We thank Drs. Peter Bodenheimer, Eric Ford, Man Hoi Lee, and Sara Seager for useful discussions. We are also grateful to Sandy Keiser for maintaining the computers used for this work at the Dept. of Terrestrial Magnetism at the Carnegie Institution of Washington. We thank an anonymous referee for a thorough report which helped improve the paper. The work of EJR and JJL on this project was funded by NASA Solar Systems Origins grant 188-07-21-03 (to JJL). The work of GL was funded by NASA Grant NNG-04G191G from the TPF Precursor Science Program. We acknowledge support from the Carnegie Institution of Washington and the NASA Astrobiology Institute through Cooperative Agreement NNA04CC09A for EJR’s work on photometric studies. We acknowledge support by NSF grant AST-0307493 and AST-9988087 (to SSV), travel support from the Carnegie Institution of Washington (to RPB), NASA grant NAG5-8299 and NSF grant AST95-20443 (to GWM), and by Sun Microsystems. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. We thank the NASA and UC Telescope Time Assignment committees for allocations of telescope time toward the planet search around M dwarfs. GWH acknowledges support from NASA grant NCC5-511 and NSF grant HRD-9706268. Finally, the authors wish to extend a special thanks to those of Hawaiian ancestry on whose sacred mountain of Mauna Kea we are very privileged to be guests. Without their generous hospitality, the Keck observations presented herein would not have been possible.
Benedict, G. F., et al. 2002, ApJ, 581, L115
Butler, R. P., Marcy, G. W., Williams, E., McCarthy, C., Dosanjh, P., & Vogt, S. S. 1996, PASP, 108, 500
Butler, R. P., Vogt, S. S., Marcy, G. W., Fischer, D. A., Wright, J. T., Henry, G. W., Laughlin, G., & Lissauer, J. J. 2004, ApJ, 617, 580
Chambers, J. E. 1999, MNRAS, 304, 793
Delfosse, X., Forveille, T., Mayor, M., Perrier, C., Naef, D., & Queloz, D. 1998, A\&A, 338, L67
Dobrovolskis, A. 1980, Icarus, 41, 18
Eaton, J. A., Henry, G. W., & Fekel, F. C. 2003, in The Future of Small Telescopes in the New Millennium, Volume II - The Telescopes We Use, ed. T. D. Oswalt (Dordrecht: Kluwer), 189
Ford, E. 2005, AJ, 129, 1717
Gilliland, R. L. & Brown, T. M. 1992, PASP, 104, 582
Goldreich, P. & Soter, S. 1966, Icarus, 5, 375
Greenberg, R. & Weidenschilling, S. J. 1984, Icarus, 58, 186
Henry, G. W. 1999, PASP, 111, 845
Henry, G. W., Fekel, F. C., & Hall, D. S. 1995, AJ, 110, 2926
Henry, T. J. & McCarthy, D. W. 1993, AJ, 106, 773
Ji, J., Li, G., & Liu, L. 2002, ApJ, 572, 1041
Joshi, M. M., Haberle, R. M., & Reynolds, R. T. 1997, Icarus, 129, 450
Kazarovets, E. V. & Samus, N. N. 1997, IBVS No. 4471
Konacki, M. & Wolszczan, A. 2003, ApJ, 591, L147
Laughlin, G., Butler, R. P., Fischer, D. A., Marcy, G. W., Vogt, S. S., & Wolf, A. S. 2005, ApJ, 622, 1182
Laughlin, G. & Chambers, J. E. 2001, ApJ, 551, L109
Lee, M. H. & Peale, S. J. 2003, ApJ, 592, 1201
Léger, A., et al. 2004, Icarus, 169, 499
Lissauer, J. J. & Rivera, E. 2001, ApJ, 554, 114
Marcy, G. W. & Butler, R. P. 1992, PASP, 104, 270
Marcy, G. W., Butler, R. P., Vogt, S. S., Fischer, D. A., & Lissauer, J. J. 1998, ApJ, 505, 147
Marcy, G. W., Butler, R. P., Fischer, D., Vogt, S. S., Lissauer, J. J., & Rivera, E. J. 2001, ApJ, 556, 392
Marcy, G. W., Butler, R. P., Fischer, D. A., Laughlin, G., Vogt, S. S., Henry, G. W., & Pourbaix, D. 2002, ApJ, 581 1375
Nauenberg, M. 2002, ApJ, 568, 369
Perryman, M. A. C., et al. 1997, A\&A, 323, L49
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes: The Art of Scientific Computing (2nd Edition; Cambridge, U.K.: Cambridge University Press)
Rasio, F. A., Tout, C. A., Lubow, S. H., & Livio, M. 1996, ApJ, 470, 1187
Rivera, E. J. & Lissauer, J. J. 2001, ApJ, 558, 392
Trilling, D. E., Benz, W., Guillot, T., Lunine, J. I., Hubbard, W. B., & Burrows, A. 1998, ApJ, 500, 428
Vogt, S. S., et al. 1994, Proc. SPIE, 2198, 362
Weis, E. W. 1994, AJ, 107, 1135
Wolszczan, A. & Frail, D. 1992, Nature, 355, 145
![ Radial velocity versus time for Keck M dwarfs spanning 7 years. The Keck-HIRES system achieves precision of 3–5 ms$^{-1}$ for M dwarfs brighter than V=11. []{data-label="keck_stable"}](f1.eps)
![ Triple-Newtonian orbital fit to the radial velocity observations for GJ 876. The observed and model velocities for each planet are shown separately by subtracting the effects of the other two planets. The panels show the velocities due to companions d (top), c (middle), and b (bottom). The curves show model velocities for the first orbital period beginning with the epoch of the first observation. The data were folded at the appropriate periodicity given by the fit in Table \[3plparam\]. Note the differences in scale in the three panels. The deviations shown for companions c and b clearly demonstrate that their orbital elements have been evolving over the timespan of the observations. The colored numbers in the bottom panel indicate which points correspond to which observing season. Note that the points taken in 1997 most closely follow the curves in the bottom two panels, as expected. []{data-label="phase0"}](f6.eps)
![ Top: Periodogram of residuals to the $i=50^{\circ}$ 3-planet self-consistent fit (with $P_d=1.9379$ days). Middle: Periodogram of Residuals to the nominal, $i=90^{\circ}$, 3-planet, self-consistent fit with $P_d=1.9379$ days. Bottom: Periodogram of Residuals to the nominal, $i=90^{\circ}$, 3-planet, self-consistent fit with $P_d=2.0548$ days. []{data-label="3periodograms"}](f10.eps)
![ Maximum power at 1.94 days (left) and 2.05 days (right) in 1000 periodograms of residuals to two-planet fits to 1000 synthetic velocity data sets generated from a two-planet model. []{data-label="plot_power"}](f13.eps)
![ ($Top$): Photometric observations of GJ 876 with the T12 0.8 m APT demonstrate variability of a few percent on a timescale of approximately 100 days along with longer-term variability. ($Middle$): Periodogram analysis of the APT observations gives the star’s rotation period of 96.7 days. ($Bottom$): The photometric observations phased with the 96.7-day period reveal the effect of rotational modulation in the visibility of photospheric starspots on GJ 876. []{data-label="photometry_fig"}](f15.eps)
![ (Solid curve) Histogram of the logarithm, base 10, of the number of period/phase combinations tested as a function of their resulting transit detection statistics, for the full 6 nights of photometric data. See text for a definition of the transit detection statistic. Note that this curve is virtually symmetric about the value of 0, consistent with no transits being observed. (Dashed curve) Similar histogram for identical data to which artificial transits with relative flux depth of 0.0015 have been added, at a period of 1.900 days. []{data-label="Tim"}](f16.eps)
[rrr]{} 602.093 & 294.79 & 3.64\
603.108 & 313.42 & 3.73\
604.118 & 303.40 & 3.89\
605.110 & 302.96 & 3.78\
606.111 & 281.40 & 3.82\
607.085 & 255.18 & 3.71\
609.116 & 163.95 & 3.98\
666.050 & 300.35 & 3.49\
690.007 & -151.95 & 3.88\
715.965 & 156.45 & 3.73\
785.704 & 327.19 & 5.28\
983.046 & -96.43 & 3.61\
984.094 & -117.02 & 4.00\
1010.045 & -79.21 & 3.51\
1011.094 & -62.54 & 3.70\
1011.108 & -62.78 & 3.42\
1011.981 & -35.71 & 3.12\
1011.989 & -38.23 & 3.15\
1013.089 & -7.85 & 3.49\
1013.963 & 14.53 & 3.73\
1013.968 & 16.75 & 3.80\
1043.020 & -83.07 & 3.51\
1044.000 & -110.55 & 3.47\
1050.928 & -154.13 & 3.81\
1052.003 & -136.35 & 4.18\
1068.877 & -128.79 & 3.68\
1069.984 & -100.52 & 3.67\
1070.966 & -94.38 & 3.60\
1071.878 & -66.73 & 3.41\
1072.938 & -55.33 & 3.69\
1170.704 & -123.56 & 4.41\
1171.692 & -137.08 & 4.02\
1172.703 & -119.17 & 4.04\
1173.701 & -115.95 & 4.35\
1312.127 & -134.51 & 3.37\
1313.117 & -133.52 & 3.79\
1343.041 & 35.17 & 3.80\
1368.001 & -182.55 & 4.00\
1369.002 & -191.05 & 4.14\
1370.060 & -174.57 & 3.51\
1372.059 & -157.56 & 5.86\
1409.987 & -90.13 & 3.79\
1410.949 & -85.59 & 3.86\
1411.922 & -92.94 & 3.42\
1438.802 & -63.41 & 3.90\
1543.702 & -142.50 & 4.02\
1550.702 & -197.70 & 3.84\
1704.103 & 122.99 & 3.76\
1706.108 & 73.75 & 3.91\
1755.980 & 272.62 & 5.08\
1757.038 & 245.87 & 4.24\
1792.822 & -215.71 & 3.74\
1883.725 & 187.77 & 3.96\
1897.682 & 50.01 & 4.27\
1898.706 & 42.34 & 4.14\
1899.724 & 32.04 & 3.53\
1900.704 & 13.98 & 3.47\
2063.099 & 212.65 & 4.01\
2095.024 & -228.99 & 4.35\
2098.051 & -266.92 & 4.71\
2099.095 & -257.23 & 4.51\
2100.066 & -270.35 & 3.92\
2101.991 & -248.41 & 3.84\
2128.915 & 130.58 & 5.15\
2133.018 & 55.95 & 4.35\
2133.882 & 68.55 & 4.86\
2160.896 & -269.15 & 3.84\
2161.862 & -270.68 & 4.28\
2162.880 & -235.10 & 4.49\
2188.909 & 116.79 & 4.39\
2189.808 & 113.52 & 5.00\
2236.694 & 187.17 & 4.10\
2238.696 & 208.12 & 4.08\
2242.713 & 225.32 & 4.95\
2446.071 & 84.52 & 4.99\
2486.913 & 194.04 & 4.46\
2486.920 & 195.16 & 4.46\
2487.120 & 182.36 & 4.16\
2487.127 & 181.67 & 4.06\
2487.914 & 179.58 & 4.53\
2487.923 & 180.93 & 4.48\
2488.124 & 188.92 & 3.98\
2488.131 & 181.99 & 4.15\
2488.934 & 162.66 & 3.75\
2488.940 & 162.29 & 3.69\
2488.948 & 161.35 & 3.50\
2488.955 & 163.13 & 3.52\
2514.867 & -121.06 & 4.78\
2515.873 & -143.91 & 4.27\
2535.774 & 45.38 & 4.43\
2536.024 & 49.11 & 3.87\
2536.804 & 77.18 & 5.01\
2537.013 & 75.82 & 4.12\
2537.812 & 87.35 & 4.09\
2538.014 & 91.81 & 4.48\
2538.801 & 121.30 & 4.42\
2539.921 & 137.37 & 3.92\
2572.709 & -46.62 & 4.91\
2572.716 & -44.75 & 5.20\
2572.916 & -54.73 & 4.85\
2572.924 & -50.10 & 5.60\
2573.740 & -69.21 & 4.86\
2573.746 & -66.75 & 4.80\
2573.875 & -69.37 & 4.68\
2573.882 & -66.53 & 4.70\
2574.760 & -104.45 & 4.41\
2574.768 & -102.01 & 4.53\
2574.936 & -103.60 & 4.85\
2574.944 & -102.76 & 4.91\
2575.716 & -124.28 & 4.80\
2575.722 & -123.40 & 4.16\
2600.748 & 134.05 & 3.95\
2600.755 & 134.67 & 3.92\
2601.747 & 138.56 & 4.05\
2601.754 & 141.02 & 4.35\
2602.717 & 160.52 & 4.50\
2602.724 & 164.31 & 4.77\
2651.718 & -116.96 & 6.23\
2807.028 & 168.61 & 4.21\
2829.008 & -240.74 & 4.06\
2832.080 & -170.81 & 4.35\
2833.963 & -121.76 & 4.07\
2835.085 & -85.07 & 3.75\
2848.999 & 149.86 & 5.22\
2850.001 & 133.34 & 4.35\
2851.057 & 131.25 & 4.89\
2854.007 & 93.08 & 4.24\
2856.016 & 120.42 & 4.22\
2897.826 & -42.50 & 4.05\
2898.815 & -8.85 & 4.16\
2924.795 & 215.18 & 4.84\
2987.716 & 209.59 & 6.10\
2988.724 & 203.06 & 4.75\
3154.117 & 61.47 & 4.10\
3181.005 & -51.18 & 4.51\
3181.116 & -55.10 & 4.07\
3182.070 & -93.33 & 4.00\
3191.037 & -257.70 & 4.19\
3195.970 & -173.67 & 4.25\
3196.997 & -156.17 & 4.64\
3301.808 & -18.93 & 5.04\
3301.817 & -25.17 & 5.55\
3301.823 & -19.23 & 6.37\
3301.871 & -25.66 & 4.18\
3302.723 & -75.85 & 4.24\
3302.729 & -72.85 & 4.43\
3302.736 & -73.20 & 4.15\
3303.779 & -119.03 & 5.06\
3303.785 & -126.29 & 4.23\
3303.791 & -124.47 & 4.60\
3338.744 & 78.91 & 4.07\
3367.718 & -217.67 & 5.12\
3368.719 & -230.07 & 5.27\
3369.702 & -230.32 & 4.76\
3369.708 & -229.73 & 4.21\
\[velocities\]
[lll]{} & planet c & planet b\
$m$ & 0.617 $\pm$ 0.007 $M_{\rm Jup}$ & 1.929 $\pm$ 0.009 $M_{\rm Jup}$\
$P$ (days) & 30.344 $\pm$ 0.018 & 60.935 $\pm$ 0.017\
$K$ (ms$^{-1}$) & 88.12 $\pm$ 0.94 & 212.04 $\pm$ 1.03\
$a$ (AU) & 0.13031 $\pm$ 0.00005 & 0.20781 $\pm$ 0.00004\
$e$ & 0.2232 $\pm$ 0.0018 & 0.0251 $\pm$ 0.0035\
$\omega$ ($^{\circ}$) & 198.3 $\pm$ 1.4 & 176.8 $\pm$ 9.2\
$M$ ($^{\circ}$) & 308.8 $\pm$ 1.9 & 174.3 $\pm$ 9.2\
transit epoch & JD 2452517.604 $\pm$ 0.067 &\
\[2plparam\]
[llll]{} & planet d & planet c & planet b\
$m$ & 5.89 $\pm$ 0.54 $M_{\oplus}$ & 0.619 $\pm$ 0.005 $M_{\rm Jup}$ & 1.935 $\pm$ 0.007 $M_{\rm Jup}$\
$P$ (d) & 1.93776 $\pm$ 0.00007 & 30.340 $\pm$ 0.013 & 60.940 $\pm$ 0.013\
$K$ (ms$^{-1}$) & 6.46 $\pm$ 0.59 & 88.36 $\pm$ 0.72 & 212.60 $\pm$ 0.76\
$a$ (AU) & 0.0208067 $\pm$ 0.0000005 & 0.13030 $\pm$ 0.00004 & 0.20783 $\pm$ 0.00003\
$e$ & 0 (fixed) & 0.2243 $\pm$ 0.0013 & 0.0249 $\pm$ 0.0026\
$\omega$ ($^{\circ}$) & 0 (fixed) & 198.3 $\pm$ 0.9 & 175.7 $\pm$ 6.0\
$M$ ($^{\circ}$) & 309.5 $\pm$ 5.1 & 308.5 $\pm$ 1.4 & 175.5 $\pm$ 6.0\
transit epoch & JD 2452490.756 $\pm$ 0.027 & JD 2452517.633 $\pm$ 0.051 &\
\[3plparam\]
[llll]{} & planet d & planet c & planet b\
$m$ & 7.53 $\pm$ 0.70 $M_{\oplus}$ & 0.790 $\pm$ 0.006 $M_{\rm Jup}$ & 2.530 $\pm$ 0.008 $M_{\rm Jup}$\
$P$ (d) & 1.93774 $\pm$ 0.00006 & 30.455 $\pm$ 0.019 & 60.830 $\pm$ 0.019\
$K$ (ms$^{-1}$) & 6.32 $\pm$ 0.59 & 87.14 $\pm$ 0.67 & 212.81 $\pm$ 0.66\
$a$ (AU) & 0.0208067 $\pm$ 0.0000004 & 0.13065 $\pm$ 0.00005 & 0.20774 $\pm$ 0.00004\
$e$ & 0 (fixed) & 0.2632 $\pm$ 0.0013 & 0.0338 $\pm$ 0.0025\
$\omega$ ($^{\circ}$) & 0 (fixed) & 197.4 $\pm$ 0.9 & 185.5 $\pm$ 4.3\
$M$ ($^{\circ}$) & 311.8 $\pm$ 4.6 & 311.6 $\pm$ 1.3 & 165.6 $\pm$ 4.2\
\[3plparami=50\]
[cccc]{} 52444.9553 & 2.9918 & 4.2587 & 1.2668\
52447.9492 & 3.0000 & 4.2730 & 1.2730\
52461.9151 & 2.9934 & 4.2622 & 1.2689\
52461.9541 & 2.9919 & 4.2609 & 1.2690\
52532.7211 & 2.9915 & 4.2610 & 1.2694\
[^1]: The $\chi^2$ is the sum of the squares of the differences between the data points and model points at each observed epoch divided by the corresponding uncertainties of the measurements. The reduced chi-squared, $\chi_{\nu}^2$, is that quantity divided by the number of degrees of freedom (the total number of observations minus the number of fitted parameters). The RMS is the root-mean-square of the velocity residuals after the model fit has been subtracted from the data. Note that $\chi^2$ has no units, and RMS has units of ms$^{-1}$ in this case.
---
abstract: |
In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: i) supervised classification of labeled source data, and ii) unsupervised reconstruction of unlabeled target data. In this way, the learnt representation not only preserves discriminability, but also encodes useful information from the target domain. Our new DRCN model can be optimized using backpropagation, similarly to standard neural networks.
We evaluate the performance of ${\textnormal{DRCN}}$ on a series of cross-domain object recognition tasks, where ${\textnormal{DRCN}}$ provides a considerable improvement (up to $\sim$8$\%$ in accuracy) over the prior state-of-the-art algorithms. Interestingly, we also observe that the reconstruction pipeline of ${\textnormal{DRCN}}$ transforms images from the source domain into images whose appearance resembles the target dataset. This suggests that ${\textnormal{DRCN}}$’s performance is due to constructing a single composite representation that encodes information about both the structure of target images and the classification of source images. Finally, we provide a formal analysis to justify the algorithm’s objective in domain adaptation context.
author:
- 'Muhammad Ghifary${}^{1*}$, W. Bastiaan Kleijn${}^1$, Mengjie Zhang${}^1$, David Balduzzi${}^1$, Wen Li${}^2$'
bibliography:
- '../mdgn.bib'
title: |
Deep Reconstruction-Classification Networks\
for Unsupervised Domain Adaptation
---
Introduction {#sec:intro}
============
An important task in visual object recognition is to design algorithms that are robust to *dataset bias* [@Torralba2011]. Dataset bias arises when labeled training instances are available from a source domain and test instances are sampled from a related, but different, target domain. For example, consider a person identification application in *unmanned aerial vehicles* (UAV), which is essential for a variety of tasks, such as surveillance, people search, and remote monitoring [@Hsu:2015]. One of the critical tasks is to identify people from a bird’s-eye view; however, collecting labeled data from that viewpoint can be very challenging. It would be more practical if a UAV could be trained on already available *on-the-ground* labeled images (source), e.g., photographs of people from social media, and then successfully applied to the actual UAV view (target). Traditional supervised learning algorithms typically perform poorly in this setting, since they assume that the training and test data are drawn from the same domain.
Domain adaptation attempts to deal with dataset bias using unlabeled data from the target domain, so that the need for manually labeling the target data is reduced. Unlabeled target data provide auxiliary training information that should help algorithms generalize better on the target domain than using source data alone. Successful domain adaptation algorithms have large practical value, since acquiring a huge amount of labels from the target domain is often expensive or impossible. Although domain adaptation has gained increasing attention in object recognition, see [@patel_dasurvey:2015] for a recent overview, the problem remains essentially unsolved, since model accuracy has yet to reach a level that is satisfactory for real-world applications. Another issue is that many existing algorithms require optimization procedures that do not scale well as the size of datasets increases [@Aljundi_CVPR2015; @Baktashmotlagh:2013aa; @Bruzzone2010; @Gong:2013ab; @Long:2013aa; @Long2014a; @Pan2011]. Earlier algorithms were typically designed for relatively small datasets, e.g., the Office dataset [@Saenko:2010aa].
We consider a solution based on learning representations or features from raw data. Ideally, the learned features should model the label distribution as well as reduce the discrepancy between the source and target domains. We hypothesize that a possible way to approximate such features is by (supervised) learning of the *source label distribution* and (unsupervised) learning of the *target data distribution*. This is in the same spirit as *multi-task learning*, in that learning auxiliary tasks can help the main task be learned better [@Caruana1997; @Argyriou2006]. The goal of this paper is to develop an accurate, scalable multi-task feature learning algorithm in the context of domain adaptation.
#### **Contribution:**
To achieve the goal stated above, we propose a new deep learning model for unsupervised domain adaptation. Deep learning algorithms are highly scalable since they run in linear time, can handle streaming data, and can be parallelized on GPUs. Indeed, deep learning has come to dominate object recognition in recent years [@Krizhevsky_NIPS2012; @Simonyan_ICLR2015].
We propose *Deep Reconstruction-Classification Network* (${\textnormal{DRCN}}$), a convolutional network that jointly learns two tasks: i) supervised source label prediction and ii) unsupervised target data reconstruction. The encoding parameters of the ${\textnormal{DRCN}}$ are shared across both tasks, while the decoding parameters are separated. The aim is that the learned label prediction function can perform well on classifying images in the target domain – the data reconstruction can thus be viewed as an auxiliary task to support the adaptation of the label prediction. Learning in ${\textnormal{DRCN}}$ alternates between unsupervised and supervised training, which is different from the standard *pretraining-finetuning* strategy [@Hinton06afast; @Bengio:2007aa].
From experiments over a variety of cross-domain object recognition tasks, ${\textnormal{DRCN}}$ performs better than the state-of-the-art domain adaptation algorithm [@Ganin2015], with an accuracy gap of up to $\sim 8\%$. The ${\textnormal{DRCN}}$ learning strategy also provides a considerable improvement over the pretraining-finetuning strategy, indicating that it is more suitable for the unsupervised domain adaptation setting. We furthermore perform a visual analysis by reconstructing source images through the learned reconstruction function. It is found that *the reconstructed outputs resemble the appearances of the target images*, suggesting that the encoding representations are successfully adapted. Finally, we present a probabilistic analysis to show the relationship between the ${\textnormal{DRCN}}$ learning objective and a semi-supervised learning framework [@Cohen:2006], and also the soundness of considering only data from the target domain for the data reconstruction training.
Related Work
============
Domain adaptation is a large field of research, with related work under several names such as class imbalance [@Japkowicz:2002], covariate shift [@Shimodaira:2000aa], and sample selection bias [@Zadrozny:2004]. In [@Pan:2010aa], it is considered as a special case of transfer learning. Earlier work on domain adaptation focused on text document analysis and NLP [@Blitzer:2006aa; @Daume-III:2007aa]. In recent years, it has gained a lot of attention in the computer vision community, mainly for object recognition applications, see [@patel_dasurvey:2015] and references therein. The domain adaptation problem is often referred to as *dataset bias* in computer vision [@Torralba2011]. This paper is concerned with *unsupervised domain adaptation*, in which labeled data from the target domain is not available [@Margolis:2011]. A range of approaches along this line of research in object recognition have been proposed [@Aljundi_CVPR2015; @Baktashmotlagh:2013aa; @Fernando:2013aa; @Ghifary:SCA2015; @Gopalan:2011aa; @Gong:2012aa; @Long2014a], most of which were designed specifically for small datasets such as the Office dataset [@Saenko:2010aa]. Furthermore, they usually operated on SURF-based features [@Bay:2008aa] extracted from the raw pixels. In essence, the unsupervised domain adaptation problem remains open and needs more powerful solutions that are useful for practical situations.
Deep learning now plays a major role in the advancement of domain adaptation. An early attempt addressed large-scale sentiment classification [@Glorot:2011aa], where the concatenated features from fully connected layers of stacked denoising autoencoders were found to be domain-adaptive [@Vincent:2010aa]. In visual recognition, a fully connected, shallow network pretrained by denoising autoencoders has shown a certain level of effectiveness [@Ghifary2014b]. It is widely known that deep convolutional networks (ConvNets) [@LeCun:1998aa] are a more natural choice for visual recognition tasks and have achieved significant successes [@Girshick_CVPR2014; @Krizhevsky_NIPS2012; @Simonyan_ICLR2015]. More recently, ConvNets pretrained on a large-scale dataset, ImageNet, have been shown to be reasonably effective for domain adaptation [@Krizhevsky_NIPS2012]. They provide significantly better performance than the SURF-based features on the Office dataset [@Donahue:2014aa; @Hoffman:2013ab]. An earlier approach using a convolutional architecture without pretraining on ImageNet, DLID, has also been explored [@Chopra:2013aa] and performs better than the SURF-based features.
To further improve the domain adaptation performance, the pretrained ConvNets can be *fine-tuned* under a particular constraint related to minimizing a domain discrepancy measure [@Ganin2015; @Long_DAN:2015; @Tzeng_DDC:2014; @Tzeng_ICCV2015]. Deep Domain Confusion (DDC) [@Tzeng_DDC:2014] utilizes the maximum mean discrepancy (MMD) measure [@Borgwardt:2006aa] as an additional loss function for the fine-tuning to adapt the last fully connected layer. Deep Adaptation Network (DAN) [@Long_DAN:2015] fine-tunes not only the last fully connected layer, but also some convolutional and fully connected layers underneath, and outperforms DDC. Recently, the deep model proposed in [@Tzeng_ICCV2015] extends the idea of DDC by adding a criterion to guarantee the class alignment between different domains. However, it is limited only to the *semi-supervised* adaptation setting, where a small number of target labels can be acquired.
The algorithm proposed in [@Ganin2015], which we refer to as ReverseGrad, handles the domain invariance as a binary classification problem. It thus optimizes two contradictory objectives: i) minimizing label prediction loss and ii) maximizing domain classification loss via a simple *gradient reversal* strategy. ReverseGrad can be effectively applied both in the pretrained and randomly initialized deep networks. The randomly initialized model is also shown to perform well on cross-domain recognition tasks other than the Office benchmark, i.e., large-scale handwritten digit recognition tasks. Our work in this paper is in a similar spirit to ReverseGrad in that it does not necessarily require pretrained deep networks to perform well on some tasks. However, our proposed method undertakes a fundamentally different learning algorithm: finding a good label classifier while simultaneously learning the structure of the target images.
Deep Reconstruction-Classification Networks
===========================================
This section describes our proposed deep learning algorithm for unsupervised domain adaptation, which we refer to as *Deep Reconstruction-Classification Networks* (${\textnormal{DRCN}}$). We first briefly discuss the unsupervised domain adaptation problem. We then present the DRCN architecture, learning algorithm, and other useful aspects.
Let us define a *domain* as a probability distribution ${{\mathbb D}}_{XY}$ (or just ${{\mathbb D}}$) on ${{\mathcal X}}\times {{\mathcal Y}}$, where ${{\mathcal X}}$ is the input space and ${{\mathcal Y}}$ is the output space. Denote the source domain by ${{\mathbb P}}$ and the target domain by ${{\mathbb Q}}$, where ${{\mathbb P}}\neq {{\mathbb Q}}$. The aim in *unsupervised domain adaptation* is as follows: given a labeled i.i.d. sample from a source domain $S^s = \{(x^s_i, y^s_i) \}_{i=1}^{n_s} \sim {{\mathbb P}}$ and an unlabeled sample from a target domain $S^t_u = \{(x^t_i) \}_{i=1}^{n_t} \sim {{\mathbb Q}}_X$, find a good labeling function $f : {{\mathcal X}}\rightarrow {{\mathcal Y}}$ on $S^t_u$. We consider a feature learning approach: finding a function $g: {{\mathcal X}}\rightarrow {{\mathcal F}}$ such that the discrepancy between the distributions ${{\mathbb P}}$ and ${{\mathbb Q}}$ is minimized in ${{\mathcal F}}$. Ideally, a discriminative representation should model both the label and the structure of the data. Based on that intuition, we hypothesize that a domain-adaptive representation should satisfy two criteria: i) classify the source domain labeled data well and ii) reconstruct the target domain unlabeled data well, which can be viewed as an approximation of the ideal discriminative representation. Our model is based on a convolutional architecture that has two pipelines with a shared encoding representation. The first pipeline is a standard convolutional network for *source label prediction* [@LeCun:1998aa], while the second one is a convolutional autoencoder for *target data reconstruction* [@Masci2011; @Zeiler:2010]. Convolutional architectures are a natural choice for object recognition to capture the spatial correlation of images.
The model is optimized through multitask learning [@Caruana1997], that is, it jointly learns the (supervised) source label prediction task and the (unsupervised) target data reconstruction task.[^1] The aim is that the shared encoding representation should capture the commonality between the two tasks that provides useful information for cross-domain object recognition. Figure \[fig:mdgn\_arch\] illustrates the architecture of ${\textnormal{DRCN}}$.
![Illustration of the ${\textnormal{DRCN}}$’s architecture. It consists of two pipelines: i) label prediction and ii) data reconstruction pipelines. The shared parameters between those two pipelines are indicated by the red color.[]{data-label="fig:mdgn_arch"}](img/mdgn_arch.png){width="4.5in" height="1.8in"}
We now describe ${\textnormal{DRCN}}$ more formally. Let $f_{c}: {{\mathcal X}}\rightarrow {{\mathcal Y}}$ be the (supervised) label prediction pipeline and $f_{r}: {{\mathcal X}}\rightarrow {{\mathcal X}}$ be the (unsupervised) data reconstruction pipeline of ${\textnormal{DRCN}}$. Define three additional functions: 1) an encoder / feature mapping $g_{{\mathrm{enc}}} : {{\mathcal X}}\rightarrow {{\mathcal F}}$, 2) a decoder $g_{{\mathrm{dec}}} : {{\mathcal F}}\rightarrow {{\mathcal X}}$, and 3) a feature labeling $g_{{\mathrm{lab}}}: {{\mathcal F}}\rightarrow {{\mathcal Y}}$. For $m$-class classification problems, the output of $g_{{\mathrm{lab}}}$ usually forms an $m$-dimensional vector of real values in the range $[0,1]$ that add up to 1, i.e., *softmax* output. Given an input $x \in {{\mathcal X}}$, one can decompose $f_{c}$ and $f_{r}$ such that $$\begin{aligned}
\label{eq:mdgn_dpass} f_{c}(x) &=& (g_{{\mathrm{lab}}} \circ g_{{\mathrm{enc}}} ) (x) , \\
\label{eq:mdgn_gpass} f_{r}(x) &=& (g_{{\mathrm{dec}}} \circ g_{{\mathrm{enc}}} ) (x) .\end{aligned}$$
Let $\Theta_{c}= \{\Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{lab}}} \}$ and $\Theta_{r}= \{\Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{dec}}} \}$ denote the parameters of the supervised and unsupervised models. $\Theta_{{\mathrm{enc}}}$ are the shared parameters of the feature mapping $g_{{\mathrm{enc}}}$. Note that $\Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{dec}}}, \Theta_{{\mathrm{lab}}}$ may encode parameters of multiple layers. The goal is to seek a single feature mapping $g_{{\mathrm{enc}}}$ model that supports both $f_{c}$ and $f_{r}$.
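To make the decomposition concrete, the shared-encoder/two-head structure can be sketched in a few lines of numpy. The dimensions, weight scales, and single-layer linear maps below are illustrative stand-ins for the convolutional stacks, not the actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, m = 16, 8, 3  # input dim, feature dim, number of classes (illustrative)

# Shared encoder parameters (Theta_enc) and the two task-specific heads.
W_enc = rng.normal(scale=0.1, size=(h, d))   # g_enc : X -> F
W_lab = rng.normal(scale=0.1, size=(m, h))   # g_lab : F -> Y
W_dec = rng.normal(scale=0.1, size=(d, h))   # g_dec : F -> X

def g_enc(x):
    return np.maximum(W_enc @ x, 0.0)        # ReLU feature mapping

def f_c(x):
    """Label prediction pipeline: softmax(g_lab(g_enc(x)))."""
    z = W_lab @ g_enc(x)
    e = np.exp(z - z.max())
    return e / e.sum()

def f_r(x):
    """Reconstruction pipeline: g_dec(g_enc(x)), linear output."""
    return W_dec @ g_enc(x)

x = rng.normal(size=d)
assert np.isclose(f_c(x).sum(), 1.0)         # softmax output sums to 1
assert f_r(x).shape == x.shape               # reconstruction lives in X
```

Both pipelines route through the same `W_enc`, so a gradient step on either task moves the shared feature mapping.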
#### **Learning algorithm:**
The learning objective is as follows. Suppose the inputs lie in ${{\mathcal X}}\subseteq {{\mathbb R}}^d$ and their labels lie in ${{\mathcal Y}}\subseteq {{\mathbb R}}^m$. Let $\ell_{c}: {{\mathcal Y}}\times {{\mathcal Y}}\rightarrow {{\mathbb R}}$ and $\ell_{r}: {{\mathcal X}}\times {{\mathcal X}}\rightarrow {{\mathbb R}}$ be the classification and reconstruction loss respectively. Given labeled source sample $S^s = \{({{\mathbf x}}^s_i, {{\mathbf y}}^s_i) \}_{i=1}^{n_s} \sim {{\mathbb P}}$, where ${{\mathbf y}}_i \in \{ 0, 1\}^m$ is a *one-hot* vector, and unlabeled target sample $S^t_u = \{({{\mathbf x}}^t_j) \}_{j=1}^{n_t} \sim {{\mathbb Q}}$, we define the empirical losses as: $$\begin{aligned}
{{\mathcal L}}^{n_s}_{c}( \{ \Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{lab}}} \} ) := \sum_{i=1}^{n_s} \ell_{c}\left( f_{c}({{\mathbf x}}^s_i; \{ \Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{lab}}} \}), {{\mathbf y}}^s_i\right), \\
{{\mathcal L}}^{n_t}_{r}( \{ \Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{dec}}} \} ) := \sum_{j=1}^{n_t} \ell_{r}\left( f_{r}({{\mathbf x}}^t_j; \{ \Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{dec}}} \}), {{\mathbf x}}^t_j\right).\end{aligned}$$ Typically, $\ell_{c}$ is of the form *cross-entropy loss* $\displaystyle -\sum_{k=1}^m y_k \log [f_{c}({{\mathbf x}})]_k$ (recall that $f_c({{\mathbf x}})$ is the softmax output) and $\ell_{r}$ is of the form *squared loss* $\displaystyle \| {{\mathbf x}}- f_{r}({{\mathbf x}}) \|_2^2$.
Our aim is to solve the following objective: $$\label{eq:mdgn_obj}
\min \lambda {{\mathcal L}}^{n_s}_{c}( \{ \Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{lab}}} \} ) + (1-\lambda) {{\mathcal L}}^{n_t}_{r}( \{ \Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{dec}}} \} ),$$ where $0 \leq \lambda \leq 1$ is a hyper-parameter controlling the trade-off between classification and reconstruction. The objective is a convex combination of supervised and unsupervised loss functions. We justify the approach in Section \[sec:analysis\].
Objective (\[eq:mdgn\_obj\]) can be achieved by alternately minimizing ${{\mathcal L}}^{n_s}_{c}$ and ${{\mathcal L}}^{n_t}_{r}$ using *stochastic gradient descent* (SGD). In the implementation, we used RMSprop [@Tieleman2012], a variant of SGD with gradient normalization – the current gradient is divided by a moving average of the previous root-mean-squared gradients. We utilize dropout regularization [@Srivastava_JMLR2014] during ${{\mathcal L}}^{n_s}_{c}$ minimization, which is effective in reducing overfitting. Note that dropout is applied in the fully-connected/dense layers only, see Figure \[fig:mdgn\_arch\].
The stopping criterion for the algorithm is determined by monitoring the average reconstruction loss of the unsupervised model during training – the process is stopped when the average reconstruction loss stabilizes. Once the training is completed, the optimal parameters $\hat{\Theta}_{{\mathrm{enc}}}$ and $\hat{\Theta}_{{\mathrm{lab}}}$ are used to form a classification model $f_{c}({{\mathbf x}}^t; \{ \hat{\Theta}_{{\mathrm{enc}}}, \hat{\Theta}_{{\mathrm{lab}}}\})$ that is expected to perform well on the target domain. The ${\textnormal{DRCN}}$ learning algorithm is summarized in Algorithm \[alg:mdgn\] and implemented using Theano [@Theano:2012].
[**Input:**]{}\
$\bullet$ Labeled source data: $S^s = \{( {{\mathbf x}}^s_i, y^s_i) \}_{i=1}^{n_s}$;\
$\bullet$ Unlabeled target data: $S^t_u = \{ {{\mathbf x}}^t_j\}_{j=1}^{n_t}$;\
$\bullet$ Learning rates: $\alpha_{c}$ and $\alpha_{r}$;
Initialize parameters $\Theta_{{\mathrm{enc}}}, \Theta_{{\mathrm{dec}}}, \Theta_{{\mathrm{lab}}}$ $$\Theta_{c}\leftarrow \Theta_{c}- \alpha_{c}\lambda \nabla_{\Theta_{c}} {{\mathcal L}}^{m_s}_{c}(\Theta_{c}); \nonumber$$ $$\Theta_{r}\leftarrow \Theta_{r}- \alpha_{r}(1 - \lambda)\nabla_{\Theta_{r}} {{\mathcal L}}^{m_t}_{r}(\Theta_{r}). \nonumber$$
[**Output:**]{}\
$\bullet$ ${\textnormal{DRCN}}$ learnt parameters: $\hat{\Theta} = \{ \hat{\Theta}_{{\mathrm{enc}}}, \hat{\Theta}_{{\mathrm{dec}}}, \hat{\Theta}_{{\mathrm{lab}}}\}$;
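The alternating structure of the algorithm can be sketched on a toy problem. Here scalar linear maps with squared losses stand in for both tasks, and plain gradient steps replace RMSprop; the data, initial weights, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
x_s = rng.normal(size=50); y_s = 2.0 * x_s          # labeled source sample
x_t = rng.normal(size=50) + 1.0                     # unlabeled target sample
w_enc, w_lab, w_dec = 0.5, 0.5, 0.5                 # shared encoder + two heads
lam, lr = 0.5, 1e-3

def L_c(): return np.sum((w_lab * w_enc * x_s - y_s) ** 2)
def L_r(): return np.sum((w_dec * w_enc * x_t - x_t) ** 2)

loss0 = lam * L_c() + (1 - lam) * L_r()
for _ in range(200):
    # Supervised step: update Theta_c = {w_enc, w_lab} on the source loss.
    e = w_lab * w_enc * x_s - y_s
    g_lab = np.sum(2 * e * w_enc * x_s); g_enc = np.sum(2 * e * w_lab * x_s)
    w_lab -= lr * lam * g_lab; w_enc -= lr * lam * g_enc
    # Unsupervised step: update Theta_r = {w_enc, w_dec} on the target loss.
    e = w_dec * w_enc * x_t - x_t
    g_dec = np.sum(2 * e * w_enc * x_t); g_enc = np.sum(2 * e * w_dec * x_t)
    w_dec -= lr * (1 - lam) * g_dec; w_enc -= lr * (1 - lam) * g_enc

assert lam * L_c() + (1 - lam) * L_r() < loss0      # joint objective decreased
```

The key point is that `w_enc` is touched by both steps, so the shared representation is pulled toward satisfying the classification and reconstruction tasks simultaneously.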
#### **Data augmentation and denoising:**
We use two well-known strategies to improve ${\textnormal{DRCN}}$’s performance: data augmentation and denoising. Data augmentation generates additional training data during the supervised training with respect to some plausible transformations over the original data, which improves generalization, see e.g. [@Simard:2003]. Denoising involves reconstructing *clean* inputs given their *noisy* counterparts. It is used to improve the feature invariance of denoising autoencoders (DAE) [@Vincent:2010aa]. Generalization and feature invariance are two properties needed to improve domain adaptation. Since ${\textnormal{DRCN}}$ has both classification and reconstruction aspects, we can naturally apply these two tricks simultaneously in the training stage.
Let ${{\mathbb Q}}_{\tilde{X} | X}$ denote the noise distribution, given the original data, from which the noisy data are sampled. The classification pipeline of ${\textnormal{DRCN}}$, $f_{c}$, thus actually observes additional pairs $\{ (\tilde{{{\mathbf x}}}^s_i, y^s_i) \}_{i=1}^{n_s}$ and the reconstruction pipeline $f_{r}$ observes $\{ (\tilde{{{\mathbf x}}}^t_i, {{\mathbf x}}^t_i) \}_{i=1}^{n_t}$. The noise distribution ${{\mathbb Q}}_{\tilde{X} | X}$ is typically a set of geometric transformations (translation, rotation, skewing, and scaling) in data augmentation, while either zero-masked noise or Gaussian noise is used in the denoising strategy. In this work, we combine all the aforementioned types of noise for denoising and use only the geometric transformations for data augmentation.
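As an illustration, the two corruption types used for denoising can be written as simple functions over image arrays. The masking rate and noise scale below are illustrative choices, not the tuned values used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_mask(x, rate=0.3, rng=rng):
    """Zero-masked noise: set a random fraction of pixels to zero."""
    return x * (rng.random(x.shape) >= rate)

def gaussian_noise(x, sigma=0.1, rng=rng):
    """Additive i.i.d. Gaussian noise on every pixel."""
    return x + rng.normal(scale=sigma, size=x.shape)

x = rng.random((28, 28))                 # a hypothetical [0,1] image
x_tilde = gaussian_noise(zero_mask(x))   # corrupted input x~ for the pipelines
assert x_tilde.shape == x.shape
# Roughly `rate` of the pixels are zeroed before the Gaussian noise is added.
assert 0.1 < np.mean(zero_mask(x) == 0) < 0.5
```

The reconstruction pipeline would then be trained to map `x_tilde` back to the clean `x`, while the classifier sees `x_tilde` with the clean label.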
Experiments and Results
=======================
This section reports the evaluation results of ${\textnormal{DRCN}}$. It is divided into two parts. The first part focuses on the evaluation on large-scale datasets popular with deep learning methods, while the second part summarizes the results on the Office dataset [@Saenko:2010aa].
Experiment I: SVHN, MNIST, USPS, CIFAR, and STL
-----------------------------------------------
The first set of experiments investigates the empirical performance of ${\textnormal{DRCN}}$ on five widely used benchmarks: MNIST [@LeCun:1998aa], USPS [@usps1994], Street View House Numbers (SVHN) [@svhn:2011], CIFAR [@Krizhevsky:2009aa], and STL [@Coates:2011ab], see the corresponding references for more detailed configurations. The task is to perform cross-domain recognition: *taking the training set from one dataset as the source domain and the test set from another dataset as the target domain*. We evaluate our algorithm’s recognition accuracy over three cross-domain pairs: 1) MNIST vs USPS, 2) SVHN vs MNIST, and 3) CIFAR vs STL.
MNIST (<span style="font-variant:small-caps;">mn</span>) vs USPS (<span style="font-variant:small-caps;">us</span>) contains 2D grayscale handwritten digit images of 10 classes. We preprocessed them as follows. USPS images were rescaled into $28 \times 28$ and pixels were normalized to $[0,1]$ values. From this pair, two cross-domain recognition tasks were performed: <span style="font-variant:small-caps;">mn</span> $\rightarrow$ <span style="font-variant:small-caps;">us</span> and <span style="font-variant:small-caps;">us</span> $\rightarrow$ <span style="font-variant:small-caps;">mn</span>.
In the SVHN (<span style="font-variant:small-caps;">sv</span>) vs MNIST (<span style="font-variant:small-caps;">mn</span>) pair, MNIST images were rescaled to $32 \times 32$ and SVHN images were grayscaled. The $[0,1]$ normalization was then applied to all images. Note that we did not preprocess SVHN images using local contrast normalization as in [@Sermanet:2012]. We evaluated our algorithm on the <span style="font-variant:small-caps;">sv</span> $\rightarrow$ <span style="font-variant:small-caps;">mn</span> and <span style="font-variant:small-caps;">mn</span> $\rightarrow$ <span style="font-variant:small-caps;">sv</span> cross-domain recognition tasks.
STL (<span style="font-variant:small-caps;">st</span>) vs CIFAR (<span style="font-variant:small-caps;">ci</span>) consists of RGB images that share eight object classes: *airplane*, *bird*, *cat*, *deer*, *dog*, *horse*, *ship*, and *truck*, giving $4,000$ (train) and $6,400$ (test) images for STL, and $40,000$ (train) and $8,000$ (test) images for CIFAR. STL images were rescaled to $32 \times 32$ and pixels were standardized to zero mean and unit variance. Our algorithm was evaluated on two cross-domain tasks, that is, <span style="font-variant:small-caps;">st</span> $\rightarrow$ <span style="font-variant:small-caps;">ci</span> and <span style="font-variant:small-caps;">ci</span> $\rightarrow$ <span style="font-variant:small-caps;">st</span>.
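The preprocessing described above reduces to a few array operations; below is a numpy sketch on hypothetical images. Nearest-neighbor resizing stands in for proper interpolation, and the grayscale weights are the usual luminance coefficients (an assumption on our part, not a detail stated in the text):

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-style grayscale from an HxWx3 image (illustrative weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def rescale(img, size):
    """Nearest-neighbor resize to size x size (stand-in for interpolation)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def standardize(img):
    """Zero-mean, unit-variance per image, as used for the STL/CIFAR pair."""
    return (img - img.mean()) / img.std()

rng = np.random.default_rng(0)
svhn = rng.random((32, 32, 3))           # hypothetical RGB image
mnist = rng.random((28, 28))             # hypothetical grayscale digit
gray = to_grayscale(svhn)                # SVHN -> grayscale
up = rescale(mnist, 32)                  # MNIST -> 32 x 32
assert gray.shape == (32, 32) and up.shape == (32, 32)
assert abs(standardize(gray).mean()) < 1e-9
```

For the MNIST/USPS pair, the same `rescale` step (to $28 \times 28$) followed by $[0,1]$ normalization applies.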
#### **The architecture and learning setup:**
The ${\textnormal{DRCN}}$ architecture used in the experiments is adopted from [@Masci2011]. The label prediction pipeline has three convolutional layers: 100 5x5 filters (<span style="font-variant:small-caps;">conv1</span>), 150 5x5 filters (<span style="font-variant:small-caps;">conv2</span>), and 200 3x3 filters (<span style="font-variant:small-caps;">conv3</span>); two max-pooling layers of size 2x2 after the first and the second convolutional layers (<span style="font-variant:small-caps;">pool1</span> and <span style="font-variant:small-caps;">pool2</span>); and three fully-connected layers (<span style="font-variant:small-caps;">fc4</span>, <span style="font-variant:small-caps;">fc5</span>, and <span style="font-variant:small-caps;">fc$\_$out</span>) – <span style="font-variant:small-caps;">fc$\_$out</span> is the output layer. The number of neurons in <span style="font-variant:small-caps;">fc4</span> or <span style="font-variant:small-caps;">fc5</span> was treated as a tunable hyper-parameter, chosen from $\{300, 350, \ldots, 1000\}$ according to the best performance on the validation set. The shared encoder $g_{\mathrm{enc}}$ thus has the configuration <span style="font-variant:small-caps;">conv1</span>-<span style="font-variant:small-caps;">pool1</span>-<span style="font-variant:small-caps;">conv2</span>-<span style="font-variant:small-caps;">pool2</span>-<span style="font-variant:small-caps;">conv3</span>-<span style="font-variant:small-caps;">fc4</span>-<span style="font-variant:small-caps;">fc5</span>. Furthermore, the configuration of the decoder $g_{\mathrm{dec}}$ is the inverse of that of $g_{\mathrm{enc}}$. Note that the unpooling operation in $g_{\mathrm{dec}}$ performs upsampling-by-duplication: the pooled values are inserted in the appropriate locations in the feature maps, with the remaining elements set to the same pooled values.
We employed ReLU activations [@Nair:2010aa] in all hidden layers and linear activations in the output layer of the reconstruction pipeline. Updates for both the classification and reconstruction tasks were computed via RMSprop with a learning rate of $10^{-4}$ and a moving-average decay of $0.9$. The control penalty $\lambda$ was selected according to accuracy on the source validation data – typically, the optimal value was in the range $[0.4, 0.7]$.
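The role of the control penalty $\lambda$ can be sketched as a convex combination of the two task losses (a minimal illustration with placeholder loss values, not the actual network training):

```python
# Sketch of the convex-combination objective: the control penalty
# lambda trades off the supervised (source) classification loss
# against the unsupervised (target) reconstruction loss.
# The loss values below are placeholders, not real network outputs.

def drcn_loss(class_loss, recon_loss, lam):
    """lam in [0,1]; the text reports optimal lam roughly in [0.4, 0.7]."""
    assert 0.0 <= lam <= 1.0
    return lam * class_loss + (1.0 - lam) * recon_loss

# lam = 1 recovers a plain source-only classifier,
# lam = 0 a pure autoencoder.
print(drcn_loss(2.0, 0.5, 1.0))   # 2.0
print(drcn_loss(2.0, 0.5, 0.0))   # 0.5
print(drcn_loss(2.0, 0.5, 0.6))   # 1.4
```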
#### **Benchmark algorithms:**
We compare ${\textnormal{DRCN}}$ with the following methods: 1) ${\textnormal{ConvNet}_{src}}$: a supervised convolutional network trained on the labeled source domain only, with the same network configuration as ${\textnormal{DRCN}}$’s label prediction pipeline; 2) SCAE: a ConvNet preceded by layer-wise pretraining of stacked convolutional autoencoders on all unlabeled data [@Masci2011]; 3) ${\textnormal{SCAE}_t}$: similar to SCAE, but using only unlabeled data from the target domain during pretraining; 4) ${\textnormal{SDA}_{sh}}$ [@Glorot:2011aa]: a deep network with three fully connected layers, which is a successful domain adaptation model for sentiment classification; 5) Subspace Alignment (SA) [@Fernando:2013aa];[^2] and 6) ReverseGrad [@Ganin2015]: a recently published domain adaptation model based on deep convolutional networks that provides state-of-the-art performance.
All deep-learning-based models above have the same label-predictor architecture as ${\textnormal{DRCN}}$. For ReverseGrad, we also evaluated the “original architecture” devised in [@Ganin2015] and report whichever of the two architectures performed better. Finally, we applied the same data augmentation to all models as to ${\textnormal{DRCN}}$. The ground-truth model is also evaluated, that is, a convolutional network trained and tested on images from the target domain only (${\textnormal{ConvNet}_{tgt}}$), to measure the gap between the cross-domain performance and the ideal performance.
#### **Classification accuracy:**
Table \[tab:acc\] summarizes the cross-domain recognition accuracy (*mean $\pm$ std*) of all algorithms over ten independent runs. ${\textnormal{DRCN}}$ performs best on all but one of the cross-domain tasks, better than the prior state-of-the-art ReverseGrad. Notably, on the <span style="font-variant:small-caps;">sv</span> $\rightarrow$ <span style="font-variant:small-caps;">mn</span> task, ${\textnormal{DRCN}}$ outperforms ReverseGrad by an accuracy gap of $\sim8\%$. ${\textnormal{DRCN}}$ also provides a considerable improvement over ReverseGrad ($\sim5\%$) on the reverse task, <span style="font-variant:small-caps;">mn</span> $\rightarrow$ <span style="font-variant:small-caps;">sv</span>, but the gap to the ground truth is still large – this case was also mentioned in previous work as a failure case [@Ganin2015]. In the case of <span style="font-variant:small-caps;">ci</span> $\rightarrow$ <span style="font-variant:small-caps;">st</span>, the performance of ${\textnormal{DRCN}}$ almost matches that of the target baseline.
${\textnormal{DRCN}}$ also convincingly outperforms the greedy layer-wise pretraining-based algorithms (${\textnormal{SDA}_{sh}}$, SCAE, and ${\textnormal{SCAE}_t}$). This indicates the effectiveness of the simultaneous reconstruction-classification training strategy over standard pretraining-finetuning in the context of domain adaptation.
#### **Comparison of different ${\textbf{DRCN}}$ flavors:**
Recall that ${\textnormal{DRCN}}$ uses only the unlabeled target images for the unsupervised reconstruction training. To verify the importance of this strategy, we further compare different flavors of ${\textnormal{DRCN}}$: ${\textnormal{DRCN}_s}$ and ${\textnormal{DRCN}_{st}}$. These variants are conceptually identical and differ only in which unlabeled images they use during the unsupervised training: ${\textnormal{DRCN}_s}$ uses only unlabeled source images, whereas ${\textnormal{DRCN}_{st}}$ combines both unlabeled source and target images.
The experimental results in Table \[tab:acc2\] confirm that ${\textnormal{DRCN}}$ always performs better than ${\textnormal{DRCN}_s}$ and ${\textnormal{DRCN}_{st}}$. While ${\textnormal{DRCN}_{st}}$ occasionally outperforms ReverseGrad, its overall performance does not compete with that of ${\textnormal{DRCN}}$. The only case where the ${\textnormal{DRCN}_s}$ and ${\textnormal{DRCN}_{st}}$ flavors closely match ${\textnormal{DRCN}}$ is <span style="font-variant:small-caps;">mn</span> $\rightarrow$ <span style="font-variant:small-caps;">us</span>. This suggests that the use of *unlabeled source data* during the reconstruction training does not contribute much to cross-domain generalization, which supports the ${\textnormal{DRCN}}$ strategy of using the unlabeled target data only.
#### **Data reconstruction:**
A useful insight emerged when we reconstructed source images through the reconstruction pipeline of ${\textnormal{DRCN}}$. Specifically, we observe the visual appearance of $f_{r}(x^s_1), \ldots, f_{r}(x^s_m)$, where $x^s_1, \ldots, x^s_m$ are some images from the source domain. Note that $x^s_1, \ldots, x^s_m$ are unseen during the unsupervised reconstruction training in ${\textnormal{DRCN}}$. We visualize such a reconstruction for <span style="font-variant:small-caps;">sv</span> $\rightarrow$ <span style="font-variant:small-caps;">mn</span> training in Figure \[fig:rec\]. Figures \[fig:sv\] and \[fig:mi\] display the original source (SVHN) and target (MNIST) images.
The main finding of this observation is depicted in Figure \[fig:drcn\_rec\]: the reconstructed images produced by ${\textnormal{DRCN}}$ given some SVHN images as the source inputs. We found that *the reconstructed SVHN images resemble MNIST-like digit appearances, with white strokes on a black background*, see Figure \[fig:mi\]. Remarkably, ${\textnormal{DRCN}}$ can still produce “correct” reconstructions of some noisy SVHN images. For example, all the SVHN digit-3 images displayed in Figure \[fig:sv\] are clearly reconstructed by ${\textnormal{DRCN}}$, see the fourth row of Figure \[fig:drcn\_rec\]. ${\textnormal{DRCN}}$ tends to pick only the digit in the middle and ignore the remaining digits. This may explain the superior cross-domain recognition performance of ${\textnormal{DRCN}}$ on this task. However, such a cross-reconstruction appearance does not occur in the reverse task, <span style="font-variant:small-caps;">mn</span> $\rightarrow$ <span style="font-variant:small-caps;">sv</span>, which may be an indicator of the low accuracy relative to the ground-truth performance.
We also conduct such a diagnostic reconstruction on other algorithms that have the reconstruction pipeline. Figure \[fig:convae\_rec\] depicts the reconstructions of the SVHN images produced by ConvAE trained on the MNIST images only. They do not appear to be digits, suggesting that ConvAE recognizes the SVHN images as noise. Figure \[fig:drcnst\_rec\] shows the reconstructed SVHN images produced by ${\textnormal{DRCN}_{st}}$. We can see that they look almost identical to the source images shown in Figure \[fig:sv\], which is not surprising since the source images are included during the reconstruction training.
Finally, we evaluated the reconstruction induced by ${\textnormal{ConvNet}_{src}}$ to observe how it differs from the reconstruction of ${\textnormal{DRCN}}$. Specifically, we trained ConvAE on the MNIST images with the encoding parameters initialized from those of ${\textnormal{ConvNet}_{src}}$ and not updated during training. We refer to this model as ConvAE+${\textnormal{ConvNet}_{src}}$. The reconstructed images are visualized in Figure \[fig:convae-convnet\_rec\]. Although they resemble the style of MNIST images, as in the ${\textnormal{DRCN}}$ case, only a few source images are correctly reconstructed.
To summarize, the results from this diagnostic data reconstruction correlate with the cross-domain recognition performance. More visualization on other cross-domain cases can be found in the Supplemental materials.
Experiments II: Office dataset
------------------------------
In the second experiment, we evaluated ${\textnormal{DRCN}}$ on the standard domain adaptation benchmark for visual object recognition, <span style="font-variant:small-caps;">Office</span> [@Saenko:2010aa], which consists of three different domains: <span style="font-variant:small-caps;">amazon (a)</span>, <span style="font-variant:small-caps;">dslr (d)</span>, and <span style="font-variant:small-caps;">webcam (w)</span>. <span style="font-variant:small-caps;">Office</span> has 2817 labeled images in total distributed across 31 object categories. The number of images is thus relatively small compared to the previously used datasets.
We applied the ${\textnormal{DRCN}}$ algorithm to *finetune* AlexNet [@Krizhevsky_NIPS2012], as was done with different methods in previous work [@Ganin2015; @Long_DAN:2015; @Tzeng_DDC:2014].[^3] The fine-tuning was performed only on the fully connected layers of AlexNet, $fc6$ and $fc7$, and the last convolutional layer, $conv5$. Specifically, the label prediction pipeline of ${\textnormal{DRCN}}$ contains $conv4$-$conv5$-$fc6$-$fc7$-$label$ and the data reconstruction pipeline has $conv4$-$conv5$-$fc6$-$fc7$-$fc6'$-$conv5'$-$conv4'$ (the $'$ denotes the inverse layer) – it thus does not reconstruct the original input pixels. The learning rate was selected following the strategy devised in [@Long_DAN:2015]: cross-validating the base learning rate between $10^{-5}$ and $10^{-2}$ with a multiplicative step-size of $10^{1/2}$. We followed the standard unsupervised domain adaptation training protocol used in previous work [@Chopra:2013aa; @Gong:2013ab; @Long_DAN:2015], that is, using *all* labeled source data and unlabeled target data. Table \[tab:office\_acc\] summarizes the accuracy of ${\textnormal{DRCN}}$ under that protocol in comparison to state-of-the-art algorithms. We found that ${\textnormal{DRCN}}$ is competitive with DAN and ReverseGrad – its performance is either the best or the second best in all but one case. In particular, ${\textnormal{DRCN}}$ performs best by a convincing margin when the target domain has relatively more data, i.e., with $\textsc{amazon}$ as the target dataset.
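The learning-rate grid implied by that cross-validation strategy can be generated as follows (`lr_grid` is an illustrative helper, not part of any library):

```python
# Sketch of the learning-rate grid: base rates cross-validated
# between 1e-5 and 1e-2 with a multiplicative step of 10**(1/2).

def lr_grid(low_exp=-5.0, high_exp=-2.0, step=0.5):
    """Return rates 10**e for e = low_exp, low_exp+step, ..., high_exp."""
    exps = []
    e = low_exp
    while e <= high_exp + 1e-12:  # tolerance for float accumulation
        exps.append(e)
        e += step
    return [10.0 ** x for x in exps]

grid = lr_grid()
print(len(grid))          # 7 candidate rates
print(grid[0], grid[-1])  # endpoints 1e-05 and 1e-02
```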
Analysis {#sec:analysis}
========
This section provides a first step towards a formal analysis of the ${\textnormal{DRCN}}$ algorithm. We demonstrate that the ${\textnormal{DRCN}}$ optimization relates to solving a semi-supervised learning problem on the target domain, according to a framework proposed in [@Cohen:2006]. The analysis suggests that unsupervised training using only unlabeled target data is sufficient; that is, adding unlabeled source data might not further improve domain adaptation.
Denote the labeled and unlabeled distributions as ${{\mathbb D}}_{XY} =: {{\mathbb D}}$ and ${{\mathbb D}}_{X}$ respectively. Let $P^\theta(\cdot)$ refer to a family of models, parameterized by $\theta\in\Theta$, that is used to learn a maximum likelihood estimator. The ${\textnormal{DRCN}}$ learning algorithm for domain adaptation tasks can be interpreted probabilistically by assuming that $P^\theta(x)$ is Gaussian and $P^\theta(y|x)$ is a multinomial distribution, fit by logistic regression.
The ${\textnormal{DRCN}}$ training objective is equivalent to the following maximum likelihood estimate: $$\label{eq:approx_target}
\hat{\theta} =
\operatorname*{argmax}_{\theta} \lambda \sum_{i=1}^{n_s} \log P^{\theta}_{Y|X} (y^s_i | x^s_i) + (1 - \lambda) \sum_{j=1}^{n_t} \log P^{\theta}_{X | \tilde{X}}(x^t_j | \tilde{x}^t_j),$$ where $\tilde{x}$ is the noisy input generated from ${{\mathbb Q}}_{\tilde{X} | X }$. The first term represents the model learned by the supervised convolutional network and the second term represents the model learned by the unsupervised convolutional autoencoder. Note that, in both objectives, the discriminative model only observes labeled data from the source distribution ${{\mathbb P}}_X$.
We now recall a semi-supervised learning problem formulated in [@Cohen:2006]. Suppose that labeled and unlabeled samples are taken from the *target domain* ${{\mathbb Q}}$ with probabilities $\lambda$ and $(1-\lambda)$ respectively. By Theorem 5.1 in [@Cohen:2006], the maximum likelihood estimate $\zeta$ is $$\label{eq:true_target}
\zeta = \operatorname*{argmax}_{\zeta} \lambda \operatorname*{\mathbb E}_{{{\mathbb Q}}} [\log P^{\zeta}(x, y)]+ (1 - \lambda) \operatorname*{\mathbb E}_{{{\mathbb Q}}_X }[\log P^{\zeta}_X(x)]$$ The theorem holds under the following assumptions: *consistency*, i.e., the model family contains the true distribution, so that the MLE is consistent; and *smoothness and measurability* [@White:1982]. Given target data $ (x_1^t, y_1^t) , \ldots, (x_{n_t}^t, y_{n_t}^t) \sim {{\mathbb Q}}$, the parameter $\zeta$ can be estimated as follows: $$\label{eq:true_target2}
\hat{\zeta} = \operatorname*{argmax}_{\zeta} \lambda \sum_{i=1}^{n_t} [\log P^{\zeta}(x_i^t, y_i^t)] + (1 - \lambda) \sum_{i=1}^{n_t}[\log P^{\zeta}_X(x_i^t)]$$ Unfortunately, $\hat{\zeta}$ cannot be computed in the unsupervised domain adaptation setting since we do not have access to target labels.
Next we inspect a certain condition where $\hat{\theta}$ and $\hat{\zeta}$ are closely related. Firstly, by the *covariate shift* assumption [@Shimodaira:2000aa]: ${{\mathbb P}}\neq {{\mathbb Q}}$ and ${{\mathbb P}}_{Y|X} = {{\mathbb Q}}_{Y | X}$, the first term in (\[eq:true\_target\]) can be switched from an expectation over target samples to source samples: $$\operatorname*{\mathbb E}_{{{\mathbb Q}}} \Big[\log P^{\zeta}(x, y)\Big] = \operatorname*{\mathbb E}_{{{\mathbb P}}}\left[\frac{{{\mathbb Q}}_X(x)}{{{\mathbb P}}_X(x)}\cdot \log P^{\zeta}(x, y)\right].$$ Secondly, it was shown in [@Bengio_gdae:2013] that $P^\theta_{X | \tilde{X}}(x | \tilde{x})$, see the second term in (\[eq:approx\_target\]), defines an ergodic Markov chain whose asymptotic marginal distribution of $X$ converges to the data-generating distribution ${{\mathbb P}}_X$. Hence, Eq. (\[eq:true\_target2\]) can be rewritten as $$\label{eq:true_target3}
\hat{\zeta} \approx \operatorname*{argmax}_{\zeta} \lambda \sum_{i=1}^{n_s} \frac{{{\mathbb Q}}_X(x_i^s)}{{{\mathbb P}}_X(x_i^s)} \log P^\zeta(x_i^s, y_i^s) + (1 - \lambda) \sum_{j=1}^{n_t}[\log P^{\zeta}_{X|\tilde{X}}(x_j^t | \tilde{x}_j^t)].$$ The above objective differs from objective (\[eq:approx\_target\]) only in the first term. Notice that $\hat{\zeta}$ would be approximately equal to $\hat{\theta}$ if the ratio $\frac{{{\mathbb Q}}_X(x_i^s)}{{{\mathbb P}}_X(x_i^s)}$ were constant for all $x^s$; in fact, the resulting objective becomes that of ${\textnormal{DRCN}_{st}}$. Although the constant-ratio assumption is too strong to hold in practice, comparing the two objectives suggests that $\hat{\zeta}$ can be a reasonable approximation to $\hat{\theta}$.
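The constant-ratio argument can be checked numerically: multiplying every sample's log-likelihood by the same constant rescales the objective without changing its argmax. A toy sketch with made-up log-likelihood values (not taken from the experiments):

```python
# Numeric sanity check: if the importance weights Q_X(x)/P_X(x) equal
# the same constant c for every source sample, the weighted
# log-likelihood is c times the unweighted one, so both objectives
# share the same argmax. Toy values only.

def weighted_obj(loglik, weights):
    return sum(w * l for w, l in zip(weights, loglik))

candidates = {                      # log-liks of 3 samples under 2 models
    "theta_a": [-1.0, -2.0, -0.5],
    "theta_b": [-0.8, -2.5, -0.4],
}
const_w = [2.0, 2.0, 2.0]           # constant ratio c = 2
plain = max(candidates, key=lambda t: weighted_obj(candidates[t], [1, 1, 1]))
weighted = max(candidates, key=lambda t: weighted_obj(candidates[t], const_w))
print(plain == weighted)            # True: same argmax either way
```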
Finally, we argue that using unlabeled source samples during the unsupervised training may not further contribute to domain adaptation. To see this, we expand the first term of (\[eq:true\_target3\]) as follows $$\lambda \sum_{i=1}^{n_s} \frac{{{\mathbb Q}}_X(x^s_i)}{{{\mathbb P}}_X(x^s_i)} \log P^\zeta_{Y|X}(y^s_i | x^s_i) +
\lambda \sum_{i=1}^{n_s} \frac{{{\mathbb Q}}_X(x^s_i)}{{{\mathbb P}}_X(x^s_i)} \log P^\zeta_X(x^s_i). \nonumber$$ Observe the second term above. As $n_s \rightarrow \infty$, $P^\zeta_X$ will converge to ${{\mathbb P}}_X$. Hence, since $\int_{x \sim {{\mathbb P}}_X} \frac{{{\mathbb Q}}_X(x)}{{{\mathbb P}}_X(x)} \log {{\mathbb P}}_X(x) \leq \int_{x \sim {{\mathbb P}}_X} {{\mathbb P}}_X^t(x)$, adding more unlabeled source data contributes only a constant to the objective. This implies an optimization procedure equivalent to (\[eq:approx\_target\]), which may explain the *uselessness* of unlabeled source data in the context of domain adaptation.
Note that the latter analysis does not necessarily imply that incorporating unlabeled source data degrades the performance. The fact that ${\textnormal{DRCN}_{st}}$ performs worse than ${\textnormal{DRCN}}$ could be due to, e.g., the model capacity, which depends on the choice of the architecture.
Conclusions
===========
We have proposed the Deep Reconstruction-Classification Network (${\textnormal{DRCN}}$), a novel model for unsupervised domain adaptation in object recognition. The model performs multitask learning, i.e., alternately learning (source) label prediction and (target) data reconstruction using a shared encoding representation. We have shown that ${\textnormal{DRCN}}$ provides a considerable improvement over the state-of-the-art model for some cross-domain recognition tasks. It also performs better than deep models trained using the standard *pretraining-finetuning* approach. A useful insight into the effectiveness of the learned ${\textnormal{DRCN}}$ can be obtained from its data reconstruction: the appearance of ${\textnormal{DRCN}}$’s reconstructed source images resembles that of the target images, which indicates that ${\textnormal{DRCN}}$ learns the domain correspondence. We also provided a theoretical analysis relating the ${\textnormal{DRCN}}$ algorithm to semi-supervised learning; the analysis supports the strategy of involving only the unlabeled target data when learning the reconstruction task.
This document is the supplemental material for the paper *Deep Reconstruction-Classification for Unsupervised Domain Adaptation*. It contains additional experimental results that could not be included in the main manuscript due to space constraints.
Data Reconstruction {#data-reconstruction-1 .unnumbered}
===================
Figures \[fig:rec\] and \[fig:rec2\] depict the reconstruction of the source images in cases of MNIST $\rightarrow$ USPS and USPS $\rightarrow$ MNIST, respectively. The trend of the outcome is similar to that of SVHN $\rightarrow$ MNIST, see Figure 2 in the main manuscript. That is, the reconstructed images produced by ${\textnormal{DRCN}}$ resemble the *style* of the target images.
Training Progress {#training-progress .unnumbered}
=================
Recall that ${\textnormal{DRCN}}$ has two pipelines with a shared encoding representation, corresponding to the classification and reconstruction tasks, respectively. One can view the unsupervised reconstruction learning as a regularizer for the supervised classification that reduces overfitting to the source domain. Figure \[fig:accs\_plot\] compares the source and target accuracy of ${\textnormal{DRCN}}$ with that of the standard ConvNet during training. The clearest evidence of reduced overfitting appears in the SVHN $\rightarrow$ MNIST case: ${\textnormal{DRCN}}$ produces higher target accuracy, but lower source accuracy, than ConvNet.
t-SNE visualization. {#t-sne-visualization. .unnumbered}
====================
For completeness, we also visualize the 2D point cloud of the last hidden layer of ${\textnormal{DRCN}}$ using t-SNE [@Maaten:2008aa] and compare it with that of the standard ConvNet. Figure \[fig:tsne\] depicts the feature-point clouds extracted from the target images in the case of MNIST $\rightarrow$ USPS and SVHN $\rightarrow$ MNIST. Red points indicate the source feature-point cloud, while gray points indicate the target feature-point cloud. Domain invariance should be indicated by the degree of overlap between the source and target feature clouds. We can see that the overlap is more prominent in the case of ${\textnormal{DRCN}}$ than ${\textnormal{ConvNet}}$.
[^1]: The unsupervised convolutional autoencoder is not trained via the greedy layer-wise fashion, but only with the standard back-propagation over the whole pipeline.
[^2]: The setup follows one in [@Ganin2015]: the inputs to SA are the last hidden layer activation values of ${\textnormal{ConvNet}_{src}}$.
[^3]: Recall that AlexNet consists of five convolutional layers: $conv1, \ldots, conv5$ and three fully connected layers: $fc6, fc7$, and $fc8/output$.
---
author:
- 'Xiao-Long Ren'
- Niels Gleinig
- 'Dijana Toli[ć]{}'
- 'Nino Antulov-Fantulin'
bibliography:
- 'sample.bib'
title: Underestimated cost of targeted attacks on complex networks
---
Introduction
============
Resilience of complex networks refers to their ability to react to internal failures or external disturbances (attacks) on nodes or edges. This reaction is fundamentally connected to the robustness of the network structure [@Newman2003] that represents the complex system, which is often characterized by the existence of a giant connected component (GCC). Robustness of the connected components under random failure of nodes or edges is described by classical percolation theory [@Sethna06statisticalmechanics]. In network science, percolation is the simplest process showing a continuous phase transition, scale invariance, fractal structure, and universality, and it is described with just a single parameter: the probability of removing a node or edge. Network science studies have demonstrated that scale-free networks [@BA; @Dorogovtsev2000] are more robust than random networks [@ER; @Gilbert1959] under random attacks but less robust under targeted attacks [@Molloy1995; @Albert2000; @PhysRevLett.85.4626; @Cohen2001; @Tanizawa2005]. Recently, studies of network resilience have shifted their focus to more realistic scenarios of interdependent networks [@Buldyrev2010] and different failure [@Gao2015] and recovery [@Shekhtman201628; @Bttcher2017] mechanisms.
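As a minimal illustration of the percolation picture above, the following sketch measures the giant connected component of a toy graph after removing a set of nodes (illustrative code, not the authors' implementation):

```python
# Sketch: size of the giant connected component (GCC) after removing
# a set of nodes, via BFS over the surviving subgraph. Toy graph only.
from collections import deque

def gcc_size(adj, removed):
    """Largest connected component among nodes not in `removed`."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

# Toy graph: a 6-node ring.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(gcc_size(ring, removed=set()))    # 6: fully connected ring
print(gcc_size(ring, removed={0, 3}))   # 2: ring splits into two paths
```

Site percolation corresponds to drawing `removed` uniformly at random with a single parameter, the removal probability.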
Although the study of network robustness is mature, the majority of targeted attack strategies are still based on the heuristic identification of influential nodes [@FREEMAN1978215; @Kitsak2010; @KleinbergHITS; @Cohen2001; @Chen2008EGP], with no performance guarantees for the optimality of the solution. Finding the minimal set of nodes whose removal maximally fragments the network is called the network dismantling problem [@Braunstein2016; @Zdeborova2016] and belongs to the NP-hard class; no polynomial-time algorithm is known for it, and only recently have different state-of-the-art approximation algorithms been proposed [@Morone2015Nature; @Braunstein2016; @Zdeborova2016; @Morone2016; @Tian2017NatComm] for this task. Although these methods show promising results for network dismantling, we take a step back and analyze an implicit assumption they share: that the cost of a removal action is the same for every node, regardless of its centrality in the network. However, attacking a central node, e.g., a high-degree node in a socio-technical system, usually incurs an additional cost compared to the same action on a low-degree node. Therefore, it is more realistic to explicitly assume that the cost of an attack is proportional to the number of edges the attack strategy removes.
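Under this cost model, the cost of a node attack can be computed by counting the edges its removal deletes; a minimal sketch on a toy star graph (illustrative, not the paper's code):

```python
# Sketch of the proposed cost notion: the cost of attacking a set of
# nodes equals the number of distinct edges their removal deletes,
# so removing hubs is expensive.

def attack_cost(adj, targets):
    """Edges removed when deleting `targets` (each edge counted once)."""
    removed = set()
    for u in set(targets):
        for v in adj[u]:
            removed.add(frozenset((u, v)))
    return len(removed)

# Toy graph: hub 0 connected to leaves 1..4, plus an extra edge 1-2.
star = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
print(attack_cost(star, [0]))    # 4: the hub costs its full degree
print(attack_cost(star, [3]))    # 1: a leaf is cheap
print(attack_cost(star, [1, 2])) # 3: shared edge 1-2 counted once
```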
We investigated different state-of-the-art algorithms, and the results show that, with respect to this new notion of cost, most of them are very inefficient: in most instances they perform even worse than the random removal strategy. To overcome this large cost, we propose an edge-removal strategy, named Hierarchical Power Iterative Normalized cut (HPI-Ncut), as one possible solution. Removing a node is equivalent to removing all edges of that node, so every node removal action can be reproduced by an edge removal strategy, but the converse does not hold. To partition a network, node removal algorithms always remove all the edges connected to some important nodes. However, this is unnecessary, because only some specific edges play a key role in both the importance of the nodes and the connectivity of the network. In cases when link removal strategies are possible, our results show that the edge removal algorithm we used outperforms all the state-of-the-art targeted node attack algorithms. Finally, we compared the cost of the proposed edge removal strategy HPI-Ncut with two other edge removal strategies, based on edge betweenness centrality [@FREEMAN1978215] and bridgeness centrality [@Cheng2010Bridge].
Results
=======
Many algorithms have been proposed to address the network fragmentation problem [@Albert2000; @Cohen2001; @Chen2008EGP; @Altarelli2014; @Morone2015Nature] from the node removal perspective. These algorithms mainly focus on minimizing the size of the giant connected component and assume that the cost is proportional to the number of removed nodes. However, removing a node amounts to removing all the edges connected to it. In this article, we make the explicit assumption that the cost of removing a node is proportional to the number of associated edges that have to be removed. This implies that nodes with higher degree have a higher removal cost.
In subsection \[Data\_sets\], we introduce the empirical and artificial networks used in this paper. In subsection \[cost-auc\], we quantify the cost of the state-of-the-art node removal strategies and show that in most cases such attacks are cost-inefficient. These results have important implications for real-world network-fragmentation scenarios in which the cost budget is limited. Finally, when it is possible to remove single edges (e.g., shielding communication links, removing power lines, or cutting off trading relationships), we use a spectral edge removal method and compare its cost with other strategies in subsections \[edge-removal\_res\] and \[Result\_gcc\]. The effect of edge removal as an immunization measure for spreading processes is shown in subsection \[Result\_Spread\].
Data sets {#Data_sets}
---------
To evaluate the performance of the network dismantling (fragmentation) algorithms, both real and synthetic networks are used in this paper: (i) *Political Blogs* [@Adamic2005] is an undirected social network collected around the time of the 2004 U.S. presidential election. It is a relatively dense network with average degree $27.36$. (ii) *Petster-hamster* is an undirected social network containing friendship and family links between users of the website *hamsterster.com*. This data set can be downloaded from KONECT[^1]. (iii) *Powergrid* [@Watts1998] is an undirected power grid network in which a node is either a generator, a transformer, or a substation, while a link represents a transmission line. This data set can also be downloaded from KONECT[^2]. (iv) *Autonomous Systems* is an undirected network from the University of Oregon Route Views Project [@leskovec2005]. This data set can be downloaded from SNAP[^3]. (v) An Erdős–Rényi (ER) network [@erdds1959random] constructed with $2500$ nodes, average degree 20, and connection probability 0.01. (vi) A scale-free (SF) network with size 10,000, exponent 2.5, and average degree 4.68. (vii) A scale-free (SF) network with size 10,000, exponent 3.5, and average degree 2.35. (viii) A stochastic block model (SBM) with ten clusters, an undirected network with 4232 nodes and average degree 2.60. The basic properties of these networks are listed in table \[Table\_property\].
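For illustration, a network in the Erdős–Rényi $G(n,p)$ spirit can be generated as below (a sketch with assumed parameters $n = 200$, $p = 0.05$, not the networks actually used in the paper):

```python
# Sketch of a G(n, p) construction: each of the n*(n-1)/2 possible
# edges is kept independently with probability p, giving expected
# average degree p*(n-1). Parameters here are illustrative only.
import random

def er_graph(n, p, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return [(i, j)
            for i in range(n)
            for j in range(i + 1, n)
            if rng.random() < p]

n, p = 200, 0.05
edges = er_graph(n, p)
avg_degree = 2 * len(edges) / n
print(round(avg_degree, 2))  # close to the expectation p*(n-1) ~ 9.95
```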
Cost-fragmentation inefficiency of the node targeting attack strategies {#cost-auc}
-----------------------------------------------------------------------
Let us define the function $f_{\mathcal{D}}(x)$ as the size of the GCC at fixed attack cost $x$ for strategy $\mathcal{D}$. The *cost* $x \in [0,1]$ is measured as the fraction of edges removed from the network. For a fixed budget $x$, strategy $\mathcal{D}$ is more efficient than strategy $\mathcal{L}$ if and only if $f_\mathcal{D}(x) < f_\mathcal{L}(x)$, i.e., the size of the GCC is smaller after attacking with strategy $\mathcal{D}$ than with strategy $\mathcal{L}$ under the limited budget $x$.
One way to compare the attack performance of strategies is to plot the function $f_{\mathcal{D}}(x)$, the size of the GCC after attack, versus the cost; see fig. \[fig\_gcc\_real\_node\]. Here we define the *cost-fragmentation effectiveness (CFE)* of strategy $\mathcal{D}$ as the area under the curve of the size of the GCC versus the cost, computed as the integral over all possible budgets: $F_\mathcal{D} = \int_0^1 f_\mathcal{D}(x) dx.$ The smaller the CFE (i.e., the area under the curve), the better the attack. Taking into account the role of cost in targeted attacks, the results are highly counterintuitive: for a fixed budget, many networks are more fragile under the High Degree (HD) attack strategy than under the High Degree Adaptive (HDA) strategy, as shown in table \[Table\_Decompose\] and table \[Table\_Decompose\_Improve\]. Furthermore, the performance of the state-of-the-art node removal-based methods can become even worse than naive random removal of nodes (site percolation) once the attack cost is taken into account, as shown in fig. \[fig\_gcc\_real\_node\] and fig. \[fig\_gcc\_model\_node\]. In addition, comparing the CFE of the site percolation and bond percolation methods in table \[Table\_Decompose\], we find that bond percolation works better on the networks with lower average degree, i.e., the Powergrid, SF ($\gamma=3.5$), and SBM networks; otherwise, site percolation is the better choice.
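The CFE score can be approximated from sampled points of the GCC-versus-cost curve with the trapezoidal rule; a minimal sketch (the curves below are synthetic, for illustration only):

```python
# Sketch of the cost-fragmentation effectiveness (CFE): the area under
# the GCC-size-versus-cost curve f_D(x), approximated here by the
# trapezoidal rule over sampled (cost, GCC fraction) points.

def cfe(costs, gcc_sizes):
    """Trapezoidal estimate of F_D = integral_0^1 f_D(x) dx."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(zip(costs, gcc_sizes),
                                  zip(costs[1:], gcc_sizes[1:])):
        area += 0.5 * (y0 + y1) * (x1 - x0)
    return area

# A strategy dismantling the GCC linearly in cost has CFE = 0.5;
# a strategy dismantling it almost immediately approaches 0.
linear = cfe([0.0, 0.5, 1.0], [1.0, 0.5, 0.0])
quick = cfe([0.0, 0.1, 1.0], [1.0, 0.0, 0.0])
print(linear, quick)   # 0.5 0.05
```

The improvement over the null model is then simply the difference of areas, $\int_0^1 (f_{*}(x) - f_\mathcal{D}(x)) dx = F_{*} - F_\mathcal{D}$.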
In fact, networks have an intrinsic resilience to attack that stems from their distinct structures. To factor out this architectural difference between networks, we use site percolation as a baseline null model. The site percolation strategy removes nodes from a network at random, which reflects the intrinsic resilience of the attacked network to a certain extent. The cost-fragmentation effectiveness of site percolation is denoted by $F_{*} = \int_0^1 f_{*}(x) dx$.
Table \[Table\_Decompose\_Improve\] summarizes the improvement in CFE of different attack strategies $\mathcal{D}$ compared with the null model (site percolation), calculated as $\int_0^1 (f_{*}(x) - f_\mathcal{D}(x)) dx$. Overall, all node-centric strategies (HD[@Cohen2010], HDA[@Cohen2010], EGP[@Chen2008EGP], CI[@Morone2015Nature], CoreHD[@Braunstein2016], and Min-sum[@Zdeborova2016]) clearly outperform the baseline on the three networks with lower average degree, i.e., the Powergrid, SF ($\gamma=3.5$), and SBM networks. However, on the empirical Petster-hamster social network, the Political Blogs network, the Autonomous Systems network, and the SF ($\gamma=2.5$) network, all of these node-centric strategies perform comparably to or even worse than the baseline random model according to the CFE score. The last line of table \[Table\_Decompose\_Improve\] reports the improvement averaged over the different networks, which reflects the overall CFE of each algorithm. These results suggest that state-of-the-art node-centric algorithms are rather inefficient in realistic settings once the cost of fragmentation is taken into account.
Network Nodes Links Avg. Degree Sparsity
-------------------- ------- ------- ------------- ----------
Political Blogs 1222 16714 27.36 2.24E-2
Petster-hamster 2000 16098 16.10 8.05E-3
Powergrid 4941 6594 2.67 5.40E-4
Autonomous Systems 6474 12572 3.88 6.00E-4
ER 2500 12500 10.00 4.00E-3
SF ($\gamma=2.5$) 10000 23423 4.68 4.69E-4
SF ($\gamma=3.5$) 10000 11761 2.35 2.35E-4
SBM 4232 5503 2.60 6.15E-4
: Basic statistical features of the GCCs of the eight real and synthetic networks.[]{data-label="Table_property"}
![The size of the GCC of the networks versus the link-removal proportion, compared with classical node-removal-based methods on real networks. The results of the site percolation are obtained after 100 independent runs.[]{data-label="fig_gcc_real_node"}](fig_gcc_real_node "fig:"){width="18cm"}\
![The size of the GCC of the networks versus the link-removal proportion, compared with classical node-removal-based methods on artificial networks. The results of the site percolation are obtained after 100 independent runs.[]{data-label="fig_gcc_model_node"}](fig_gcc_model_node "fig:"){width="18cm"}\
CFE P$_{site}$ HD HDA EGP CI CoreHD Min-Sum P$_{bond}$ Betw Bridg HPI-Ncut
-------------------- ------------ ------- ------- ------- ------- -------- --------- ------------ ------- ------- -----------
Political Blogs 0.638 0.920 0.861 0.619 0.657 0.815 0.726 0.843 0.597 0.910 **0.278**
Petster-hamster 0.627 0.677 0.696 0.747 0.687 0.736 0.675 0.817 0.536 0.689 **0.224**
Powergrid 0.371 0.260 0.293 0.263 0.219 0.256 0.130 0.305 0.145 0.420 **0.014**
Autonomous Systems 0.567 0.576 0.604 0.592 0.567 0.576 0.567 0.605 0.527 0.618 **0.192**
ER 0.601 0.547 0.647 0.502 0.441 0.647 0.268 0.753 0.387 0.542 **0.032**
SF ($\gamma=2.5$) 0.619 0.700 0.706 0.671 0.650 0.660 0.636 0.683 0.672 0.694 **0.342**
SF ($\gamma=3.5$) 0.406 0.231 0.228 0.343 0.214 0.227 0.202 0.298 0.312 0.352 **0.092**
SBM 0.487 0.419 0.378 0.397 0.348 0.374 0.284 0.384 0.348 0.512 **0.075**
: CFE, i.e., the area under the curve of the size of the GCC under attack by the different algorithms. P$_{site}$ is short for site percolation, P$_{bond}$ for bond percolation, Betw for betweenness, and Bridg for bridgeness. The best-performing algorithm in each row is emphasized in bold. []{data-label="Table_Decompose"}
Improvement P$_{bond}$ HD HDA EGP CI CoreHD Min-Sum Betw Bridg HPI-Ncut
-------------------- ------------ ------ ------ ------ ------ -------- --------- ------ ------- ----------
Political Blogs -32% -44% -35% 3% -3% -28% -14% 6% -43% **56%**
Petster-hamster -30% -8% -11% -19% -10% -17% -8% 15% -10% **64%**
Powergrid 18% 30% 21% 29% 41% 31% 65% 61% -13% **96%**
Autonomous Systems -7% -2% -7% -4% 0 -2% 0% 7% -9% **66%**
ER -25% 9% -8% 17% 27% -8% 55% 36% 10% **95%**
SF ($\gamma=2.5$) -10% -13% -14% -8% -5% -7% -3% -9% -12% **45%**
SF ($\gamma=3.5$) 27% 43% 44% 15% 47% 44% 50% 23% 13% **77%**
SBM 20% 13% 22% 18% 28% 23% 41% 28% -6% **84%**
Average -5% 3% 2% 6% 16% 5% 23% 21% -9% **73%**
: The improvement in CFE of each algorithm, compared with the baseline, i.e., the site percolation method. The best-performing algorithm in each row is emphasized in bold.[]{data-label="Table_Decompose_Improve"}
The edge-removal problem {#edge-removal_res}
------------------------
In network science, nodes represent the entities in a system and edges represent the relationships or interactions between them. Both nodes and edges are fundamental parts of a network, and deleting a specific fraction of either leads to great changes in the structure and function of the network. The problem of network attack or fragmentation has received a huge amount of attention in the past decade [@Pastor2002; @Chen2008EGP; @Pastor2015; @Zhang2016; @Wang20161]. However, to the best of our knowledge, almost all attack strategies are node-removal based, where the removal of a node is carried out by removing all the edges connected to it. In fact, to partition a network into small clusters, it is unnecessary to remove all the links of a node. We remove a node because we suppose either that it has high influence or that it is a bridge between clusters; if we remove only part of its links, its influence may be greatly reduced, or it may no longer act as a bridge. From another perspective, edges play far different roles in real networks [@Binder2012; @bakshy2012]: some are crucial to the diffusion process, while others are irrelevant. Thus, where edge-removal actions on networks are applicable, an edge-removal attack can be more accurate and efficient.
The link fragmentation or attack problem can be stated as follows: given a budget of $x$ links that can be attacked or removed, which links should we pick? This is mathematically equivalent to asking how to partition a given network with a minimal separating set of edges. The objective function of a link attack takes the following general form [@VonLuxburg2007]: $$cut(A_1,\cdots, A_k) := \frac{1}{2} \sum_{i=1}^{k}{W(A_i,\overline{A_i})}$$ where $A_1,\cdots, A_k$ are $k$ nonempty subsets forming a partition of the original network, $\overline{A_i}$ is the complement of the node set $A_i$, and $W(A_i,\overline{A_i})$ is the number of links between the two disjoint subsets $A_i$ and $\overline{A_i}$.
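As an illustration, the cut objective above can be evaluated directly from an edge list. The following is a minimal sketch; the function name and the toy graph are our own, not from the paper:

```python
def cut_size(edges, partition):
    """(1/2) * sum_i W(A_i, complement(A_i)): each crossing link is counted
    once from each side, so this equals the number of crossing links."""
    label = {v: i for i, part in enumerate(partition) for v in part}
    return sum(1 for u, v in edges if label[u] != label[v])

# Two triangles joined by the single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
print(cut_size(edges, [{0, 1, 2}, {3, 4, 5}]))  # -> 1 (only the bridge crosses)
```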
In this paper, we apply a spectral strategy to the edge attack problem, which falls into the class of well-known spectral clustering and partitioning algorithms [@Bisestion1991; @shi2000Ncut; @SC; @SGT; @NewmanSC]. We use hierarchical partitioning with the Ncut objective function[@shi2000Ncut], combined with a power iteration procedure to approximate the eigenvectors. The complete description of this HPI-Ncut edge-removal strategy is presented in Section \[methods\]. The results show that the HPI-Ncut strategy greatly decreases the cost of the attack compared with the state-of-the-art node-removal strategies. In the following subsection, we compare the HPI-Ncut algorithm with the random uniform attack strategy, edge betweenness, bridgeness, and some classical node-removal strategies (see the definitions of these algorithms in Section \[methods\]).
Effectiveness of the HPI-Ncut algorithm {#Result_gcc}
---------------------------------------
In the general case, each attack-strategy algorithm generates a ranking list of all (or some of) the nodes or links of the network. As the nodes or links are removed one after another, the size of the GCC of the residual network characterizes the effectiveness of each algorithm. The removal process ceases when the size of the GCC falls below a given threshold (here we use 0.01). In this paper, to test the effectiveness of the spectral edge-removal algorithm HPI-Ncut, we plot the size of the GCC versus the fraction of removed links, for both real networks (fig. \[fig\_gcc\_real\_node\] and fig. \[fig\_gcc\_real\_link\]) and synthetic networks (fig. \[fig\_gcc\_model\_node\] and fig. \[fig\_gcc\_model\_link\]), compared with classical node-removal algorithms (fig. \[fig\_gcc\_real\_node\] and fig. \[fig\_gcc\_model\_node\]) and existing link evaluation methods (fig. \[fig\_gcc\_real\_link\] and fig. \[fig\_gcc\_model\_link\]). The results show that the HPI-Ncut algorithm outperforms all the other attack algorithms.
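The evaluation loop just described (remove ranked links one by one, track the relative GCC size, and integrate the resulting curve) can be sketched as follows. The helper names and the tiny path graph are illustrative assumptions, not the paper's code:

```python
from collections import deque

def gcc_size(nodes, edges):
    """Size of the largest connected component, via BFS."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        size, queue = 0, deque([s])
        while queue:
            x = queue.popleft()
            size += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        best = max(best, size)
    return best

def cfe(nodes, edges, ranked_links, stop=0.01):
    """Area under the curve of relative GCC size versus fraction of removed links."""
    n, m = len(nodes), len(edges)
    remaining = list(edges)
    area = 0.0
    for link in ranked_links:
        remaining.remove(link)
        frac = gcc_size(nodes, remaining) / n
        area += frac / m          # one rectangle of width 1/m per removed link
        if frac < stop:           # stop once the GCC is small enough
            break
    return area

# Path 0-1-2-3 attacked middle-first: the GCC halves immediately.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
print(cfe(nodes, edges, [(1, 2), (0, 1), (2, 3)]))  # -> (0.5 + 0.5 + 0.25) / 3
```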
In fig. \[fig\_gcc\_real\_node\] and fig. \[fig\_gcc\_model\_node\], we compare the HPI-Ncut algorithm with several state-of-the-art node-removal-based targeted attack algorithms. Fig. \[fig\_gcc\_real\_node\] (a) shows that all the node-removal-based algorithms beat the site percolation method on the Powergrid network; this is because the average degree of the Powergrid network is very low, only 2.67. This is also confirmed by the results in fig. \[fig\_gcc\_model\_node\] (c) and (d), in which the average degrees of the SF ($\gamma=3.5$) and SBM networks are 2.35 and 2.60, respectively. The trends of the curves in fig. \[fig\_gcc\_real\_node\] and fig. \[fig\_gcc\_model\_node\] also show that the targeted attack algorithms work better on networks with lower average degree. Furthermore, apart from the HPI-Ncut algorithm, the other algorithms perform worse than the baseline (site percolation): site percolation performs better until the fraction of removed links exceeds 0.7 on the SF ($\gamma=3.5$) network and 0.2 on the SF ($\gamma=2.5$) network. Site percolation on the SF ($\gamma=3.5$) network presents an obvious phase-transition phenomenon [@Cohen2010] compared with the result on the SF ($\gamma=2.5$) network. In addition, in fig. \[fig\_gcc\_model\_node\] (a) and (d), the SBM network has an obvious cluster structure compared with the ER network, and the Min-Sum, CI, CoreHD, EGP, and site percolation algorithms perform better on the SBM network. Moreover, the error of the site percolation method on the ER network is larger than on the SBM network. This implies that the cluster structure of a network has a large influence on the performance of the attack strategies.
To conclude the results of fig. \[fig\_gcc\_real\_node\] and fig. \[fig\_gcc\_model\_node\], the state-of-the-art targeted node-removal strategies incur a large cost for optimized targeted attacks. In contrast, the HPI-Ncut algorithm overwhelmingly outperforms all the node-removal-based attack algorithms, whether on sparse or dense networks, and on networks with or without cluster structure.
![The size of the GCC of the networks versus the link-removal proportion, compared with existing link-removal-based methods on real networks. The results of the bond percolation are obtained after 100 independent runs.[]{data-label="fig_gcc_real_link"}](fig_gcc_real_link "fig:"){width="18cm"}\
![The size of the GCC of the networks versus the link-removal proportion, compared with existing link-removal-based methods on artificial networks. The results of the bond percolation are obtained after 100 independent runs.[]{data-label="fig_gcc_model_link"}](fig_gcc_model_link "fig:"){width="18cm"}\
![The spreadability of the networks before and after the removal of 10% of the edges by the HPI-Ncut algorithm. The x-axis shows time units. $P_i$ is the number of infected entities and $P_r$ is the number of recovered entities in the network. In the SIR model, the infection rate $\beta$ is 0.10, the recovery rate is 0.02, and the basic reproduction number is 5. All results are averages over 100 independent runs. It is worth noting that the size of the GCC of the Powergrid network is only 54 after removing 10% of the links with the HPI-Ncut algorithm. []{data-label="fig_SIR"}](fig_SIR2 "fig:"){width="18cm"}\
In fig. \[fig\_gcc\_real\_link\] and fig. \[fig\_gcc\_model\_link\], we compare the HPI-Ncut algorithm with some existing link evaluation algorithms. First of all, the HPI-Ncut algorithm works better and is more stable than all the other algorithms. Secondly, comparing with the results of site and bond percolation in fig. \[fig\_gcc\_real\_node\] and fig. \[fig\_gcc\_model\_node\], we can see that the bond percolation method outperforms the site percolation method only when the average degree of the network is low (see the results for the Powergrid, SF ($\gamma=3.5$), and SBM networks); otherwise, site percolation is the better choice. Thirdly, in fig. \[fig\_gcc\_model\_link\] (b) and (c), the bond percolation method performs better than the edge betweenness and bridgeness algorithms on scale-free networks when the cost is limited, i.e., when the fraction of removed links is smaller than 0.63 in fig. \[fig\_gcc\_model\_link\](b) and smaller than 0.4 in fig. \[fig\_gcc\_model\_link\](c). To conclude, the HPI-Ncut algorithm overwhelmingly outperforms all the node-removal-based attack algorithms and link evaluation algorithms, whether on sparse or dense networks, and on networks with or without cluster structure.
Spreading dynamics after spectral edge immunization/attack {#Result_Spread}
----------------------------------------------------------
To display the effect of the targeted attack by HPI-Ncut more intuitively, we studied the susceptible-infectious-recovered (SIR) [@Hethcote2000] epidemic spreading process on four real networks. We compared both the spreading speed and the spreading scope on these networks before and after targeted immunization by HPI-Ncut. The simulation results in fig. \[fig\_SIR\] show that, by simply removing 10% of the links, the function of the networks is profoundly affected by the HPI-Ncut immunization. The proportions of the GCC of the Political Blogs, Powergrid, Petster-hamster, and Autonomous Systems networks after the attack are 37% (449/1222), 1% (54/4941), 57% (1146/2000), and 37% (2387/6474), respectively. Thus, the spreading speeds are greatly delayed and the spreading scopes are tremendously shrunk on these networks.
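The experiment can be sketched with a minimal discrete-time SIR simulation. The toy two-clique graph, the helper names, and the parameter choices below are our own illustrative assumptions (the paper runs $\beta = 0.10$, recovery rate $0.02$ on the four real networks):

```python
import random

def sir_outbreak(adj, seeds, beta=0.10, gamma=0.02, steps=500, seed=0):
    """Discrete-time SIR; returns the final fraction of ever-infected nodes."""
    rng = random.Random(seed)
    infected, recovered = set(seeds), set()
    for _ in range(steps):
        if not infected:
            break
        new_inf = {v for u in infected for v in adj[u]
                   if v not in infected and v not in recovered
                   and rng.random() < beta}
        new_rec = {u for u in infected if rng.random() < gamma}
        infected = (infected | new_inf) - new_rec
        recovered |= new_rec
    return len(infected | recovered) / len(adj)

def to_adj(nodes, edges):
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

# Two 4-cliques joined by one bridge: removing the bridge confines the outbreak.
clique = lambda vs: [(a, b) for i, a in enumerate(vs) for b in vs[i + 1:]]
nodes = list(range(8))
edges = clique([0, 1, 2, 3]) + clique([4, 5, 6, 7]) + [(3, 4)]
immunized = [e for e in edges if e != (3, 4)]   # "attack" the bridging link
print(sir_outbreak(to_adj(nodes, immunized), {0}))  # can never exceed 0.5
```

With the bridge removed, nodes 4-7 are unreachable from the seed, so the outbreak is structurally capped at half the network regardless of $\beta$.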
Methods
=======
Some existing attack strategies
------------------------------
In this subsection, we briefly introduce the state-of-the-art node-removal attack algorithms and the edge evaluation methods used in this paper. The two edge evaluation methods, i.e., edge betweenness and bridgeness, measure the importance or significance of links in spreading dynamics or in the structural connectivity of networks. We use them as comparison link-removal-based attack algorithms in this paper.
- Percolation method. In percolation theory[@Callaway2000], a node of a network is usually called a ‘site’, while an edge is called a ‘bond’. In the study of network attack, percolation is a random uniform attack method which either removes nodes at random (site percolation) or removes edges at random (bond percolation).
- Equal graph partitioning (EGP) algorithm. The EGP algorithm[@Chen2008EGP], which is based on the nested dissection algorithm [@Lipton1979], can partition a network into two groups with an arbitrary size ratio. In every iteration, the EGP algorithm divides the target node set into three subsets: the first group, the second group, and the separate group. The separate group is made up of all the nodes that connect to both the first group and the second group. The separate group is then minimized by trying to move its nodes into the first or the second group. Finally, after removing all the nodes in the separate group, the original network is decomposed into two groups. In our implementation, we partition the network into two groups of approximately equal size.
- Collective Influence (CI) algorithm. The CI algorithm[@Morone2015Nature] attacks the network by mapping the integrity of a tree-like random network onto optimal percolation theory [@kovacs2015network] to identify the minimal separate set. Specifically, the collective influence of a node is computed from the degrees of the neighbors belonging to the frontier of a ball of radius $l$. CI is an adaptive algorithm which iteratively removes the node with the highest CI value after recomputing the CI values of all the nodes in the residual network. In our implementation, we compute the CI values with $l=3$.
- Min-Sum algorithm. The three-stage Min-Sum algorithm [@Braunstein2016] comprises: (1) breaking all the cycles, which can be detected from the 2-core[@Kitsak2010] of the network, with the Min-Sum message-passing algorithm; (2) breaking all the trees larger than a threshold $C_1$; (3) greedily reinserting short cycles no greater than a threshold $C_2$, which ensures that the size of the GCC does not become too large. In our implementation, we set $C_1$ and $C_2$ to 0.5% and 1% of the size of the networks.
- CoreHD algorithm. Inspired by the Min-Sum algorithm, the CoreHD algorithm[@Zdeborova2016] iteratively deletes the node with the highest degree from the 2-core[@Kitsak2010] of the residual network.
- Edge betweenness centrality[@Freeman1977]. Betweenness is a widely used centrality measure, defined as the sum over all node pairs of the fraction of shortest paths that pass through a node. Edge betweenness, an extension of betweenness to links, evaluates the importance of a link and is defined as the sum over all node pairs of the fraction of shortest paths that pass through that link[@Lu2016].
- Bridgeness[@Cheng2010Bridge]. Bridgeness uses local information about the network topology to evaluate the significance of edges in maintaining network connectivity. The bridgeness of a link is determined by the sizes of the $k$-clique communities to which its two end points belong and the size of the $k$-clique community to which the link itself belongs.
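The two percolation baselines from the list above can be expressed as link-removal orders, so that their cost is measured in the same units as the other strategies. This is a hedged sketch (function names and seeds are ours); note that site percolation, charged in link units, removes every link still attached to each randomly chosen node:

```python
import random

def bond_percolation_order(edges, seed=42):
    """Bond percolation: remove links in uniformly random order."""
    rng = random.Random(seed)
    order = list(edges)
    rng.shuffle(order)
    return order

def site_percolation_order(nodes, edges, seed=42):
    """Site percolation, expressed in link units: a random node order,
    expanded into the incident links not yet removed."""
    rng = random.Random(seed)
    node_order = list(nodes)
    rng.shuffle(node_order)
    removed, order = set(), []
    for v in node_order:
        for e in edges:
            if v in e and e not in removed:
                removed.add(e)
                order.append(e)
    return order

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
print(bond_percolation_order(edges))
print(site_percolation_order(nodes, edges))
```

Both orders are permutations of the edge list, so feeding them to the same GCC-versus-removed-fraction evaluation gives directly comparable curves.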
Hierarchical Power Iterative Normalized cut (HPI-Ncut) edge removal strategy
-----------------------------------------------------------------------------
Here we describe the hierarchical iterative algorithm for the edge-removal strategy. The algorithm hierarchically applies the spectral bisection algorithm, which has the same objective function as the Normalized cut algorithm[@shi2000Ncut]; furthermore, we use the power iteration method to approximate the spectral bisection. We prove exponential convergence and give asymptotic upper bounds on the run-time complexity.\
\
In order to explain our algorithm, we quickly recall the spectral bisection algorithm.\
\
**The spectral bisection algorithm**\
Input: Adjacency matrix $W$ of a network\
Output: A separated set of edges that partition the network into two disconnected clusters $A$, $\bar{A}$.
1. Compute the eigenvector $v_2$, which corresponds to the second smallest eigenvalue of the normalized Laplacian matrix $L_w= D^{-\frac{1}{2}}\left(D-W \right) D^{-\frac{1}{2}}$, or some other vector $v$ for which $\frac{v^TL_w v}{v^Tv}$ is close to minimal. We use the power iteration method to compute this vector, which will be explained later.
2. Put all the nodes with $v_2(i)>0$ into the first cluster $A$ and all the nodes with $v_2(i)\leq 0$ into the second cluster $\bar{A}$. All the edges between these two clusters form the separation set that can partition the network.
The clusters obtained by this method usually have very balanced sizes. If, however, it is very important to get clusters of exactly the same size, one could put the $\frac{n}{2}$ nodes with the largest entries in $v_2$ into one cluster and the remaining nodes into the other.\
\
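The two steps of the spectral bisection algorithm can be sketched with a dense eigensolver standing in for the power iteration, assuming a connected graph given as a NumPy adjacency matrix (the toy two-triangle graph is our own example):

```python
import numpy as np

def spectral_bisection(W):
    """Sign-split on the eigenvector of the second smallest eigenvalue of
    the normalized Laplacian L_w = D^{-1/2} (D - W) D^{-1/2}."""
    n = len(W)
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - D_isqrt @ W @ D_isqrt
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    v2 = vecs[:, 1]
    A = {i for i in range(n) if v2[i] > 0}
    return A, set(range(n)) - A

# Two triangles joined by the single bridge (2, 3): the separation set is that bridge.
W = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]:
    W[u, v] = W[v, u] = 1.0
A, B = spectral_bisection(W)
print(sorted(A), sorted(B))
```

On this graph the sign split recovers the two triangles, so the cut set contains only the bridge edge.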
**Hierarchical Power Iterative Normalized cut (HPI-Ncut) algorithm**\
Input: Adjacency matrix of a network\
Output: Partition of the network into small groups
1. Partition the GCC of the network into two disconnected clusters $A$ and $\bar{A}$ by using the spectral bisection algorithm and removing all the links in the separated set.
2. If the budget for link removal has not been exhausted, and the GCC is not yet small enough, apply step 1 to $A$ and $\bar{A}$, respectively.
The reason we cluster hierarchically is that it allows us to refine the fragmentation gradually. For example, if after partitioning the network into $2^k$ clusters we decide that the clusters should be smaller, we just have to partition each of the existing clusters into $2$ new clusters, obtaining $2^{k+1}$ clusters. The links that were already attacked remain attacked, and we only need to attack some additional ones. If, however, we had used spectral clustering directly, it could happen that the set of links to be attacked in order to partition the network into $2^{k+1}$ clusters would not contain the set of links that needed to be attacked for $2^k$ clusters.\
\
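The hierarchical loop can be sketched generically, with the bisection routine passed in as a parameter. The names and the trivial id-split bisector below are illustrative assumptions standing in for the spectral bisection:

```python
def hierarchical_attack(nodes, edges, bisect, budget):
    """Repeatedly bisect the current components, collecting the cut links,
    until the link-removal budget is spent."""
    removed, queue = [], [set(nodes)]
    while queue and len(removed) < budget:
        comp = queue.pop(0)
        internal = [e for e in edges
                    if e[0] in comp and e[1] in comp and e not in removed]
        if len(comp) < 2 or not internal:
            continue
        A, B = bisect(comp, internal)
        cut = [(u, v) for (u, v) in internal if (u in A) != (v in A)]
        removed.extend(cut[:budget - len(removed)])
        queue.extend([A, B])
    return removed

# Trivial stand-in bisector: split a component by sorted node id.
def id_split(comp, internal):
    order = sorted(comp)
    half = len(order) // 2
    return set(order[:half]), set(order[half:])

print(hierarchical_attack([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], id_split, 3))
# -> [(1, 2), (0, 1), (2, 3)]
```

Because components are refined rather than recomputed from scratch, links attacked for the coarser partition remain attacked in every finer one, exactly as argued above.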
**Power iteration method**\[powerMethod\]\
Input: Adjacency matrix $W$ of a network\
Output: The eigenvector $v_2$ or some other vector $v$ for which $\frac{v^TL_wv}{v^Tv}$ is close to $\lambda_2$.
1. Draw $v$ randomly with uniform distribution on the unit sphere.
2. Set $v\leftarrow v-v_1^Tv\cdot v_1$, where $v_1=\frac{1}{\sqrt{n}}(1,...,1)$.
3. For $i=1$ to $\eta (n)$ [\
$v\leftarrow \frac{\tilde{L} v}{\Vert \tilde{L} v\Vert } $, where $\tilde{L}=2\cdot I-L_w$ and $L_w= D^{-\frac{1}{2}}\left(D-W \right) D^{-\frac{1}{2}}$.\
]{}
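A NumPy sketch of the loop above follows. One caveat we flag as our own: the exact null eigenvector of the normalized Laplacian $L_w$ is proportional to $D^{1/2}\vec{1}$ (entries $\sqrt{d_i}$), which coincides with the constant vector $v_1$ used in step 2 only for regular graphs, so the sketch projects out the exact vector instead:

```python
import numpy as np

def power_iteration_v2(W, iters=500, seed=0):
    """Approximate v_2 of L_w by power iteration on L~ = 2 I - L_w."""
    n = len(W)
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - D_isqrt @ W @ D_isqrt
    Lt = 2.0 * np.eye(n) - L
    # Exact null eigenvector of L_w; constant only for regular graphs.
    v1 = np.sqrt(d) / np.linalg.norm(np.sqrt(d))
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    for _ in range(iters):
        v = v - (v1 @ v) * v1        # keep v perpendicular to v1
        v = Lt @ v
        v = v / np.linalg.norm(v)
    return v, L

# Two triangles plus a bridge; the Rayleigh quotient approximates lambda_2.
W = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]:
    W[u, v] = W[v, u] = 1.0
vec, L = power_iteration_v2(W)
print(vec @ L @ vec)                  # close to the second smallest eigenvalue
```

Re-projecting against $v_1$ inside the loop also guards against numerical drift back toward the dominant eigenvector.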
Objective function of the spectral bisection algorithm {#objective-function-of-the-spectral-bisection-algorithm .unnumbered}
------------------------------------------------------
In appendix A, we show that the spectral bisection algorithm has the same objective function with the relaxed Ncut [@shi2000Ncut] algorithm: $$Ncut(A, \bar{A}):= \sum_{i\in A, j \in \bar{A}} W_{i,j} \left(\frac{1}{\sum_{i\in A} D_{ii}}+\frac{1}{\sum_{j\in \bar{A}} D_{jj}} \right),$$ where $A\subseteq V$ denotes set of nodes in the first partition, $\bar{A}$ the set of nodes in the second partition and $D_{ii}$ is the degree of the node $i$.
The main reason we use this objective function is that it simultaneously minimizes the number of removed links and keeps the total sum of node degrees in the two partitions $A$ and $\bar{A}$ approximately equal.
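For concreteness, the Ncut value of a given bipartition can be computed directly from the adjacency matrix. This is a sketch with our own toy graph:

```python
def ncut(W, A):
    """Ncut(A, A-bar) = cut(A, A-bar) * (1/assoc(A) + 1/assoc(A-bar)),
    where assoc(S) is the total degree of the nodes in S."""
    n = len(W)
    B = [i for i in range(n) if i not in A]
    cut = sum(W[i][j] for i in A for j in B)
    assoc_A = sum(W[i][j] for i in A for j in range(n))
    assoc_B = sum(W[i][j] for i in B for j in range(n))
    return cut * (1.0 / assoc_A + 1.0 / assoc_B)

# Two triangles plus the bridge (2, 3): cut = 1, total degree 7 on each side.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
W = [[0] * 6 for _ in range(6)]
for u, v in edges:
    W[u][v] = W[v][u] = 1
print(ncut(W, {0, 1, 2}))  # -> 2/7
```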
In appendix B, we show the exponential convergence of the power iteration method to the eigenvector associated with the second smallest eigenvalue of $L_w$.
Complexity {#complexity .unnumbered}
----------
In appendix C, we show that the complexity of the spectral bisection algorithm is $O(\eta (n)\cdot n\cdot \bar{d})$ and the complexity of the hierarchical clustering algorithm is $O(\eta (n)\cdot n\cdot \bar{d}\cdot \log(n))$, where $\eta (n)$ is the number of iterations in the power iteration method. The power iteration method converges with exponential speed as $\eta (n) \to \infty$, and the average degree $\bar{d}$ is almost constant for large sparse networks. Hence we may expect asymptotically good results with $\eta (n)=\log (n)^{1+\epsilon}$ for any $\epsilon >0$, giving the hierarchical spectral clustering algorithm a complexity of $O( n \cdot \log^{2+\epsilon}(n))$. In practice, we have used $\epsilon=0.1$, which gives a complexity of $O( n \cdot \log^{2.1}(n))$.
Conclusion
==========
To summarize, we investigated several state-of-the-art targeted node attack algorithms and found that they are very inefficient when the cost of the attack is taken into consideration, where the cost of removing a node is defined as the number of links removed in the attack process. We found the highly counterintuitive result that the performance of the state-of-the-art node-removal-based methods can be even worse than that of the naive site percolation method under a limited cost. This demonstrates that the current state-of-the-art targeted node attack strategies underestimate the heterogeneity of the cost associated with nodes in complex networks.
Furthermore, in cases where link-removal strategies are possible, we compared the performance of the node-centric strategies (HD[@Cohen2010], HDA[@Cohen2010], EGP[@Chen2008EGP], CI[@Morone2015Nature], CoreHD[@Zdeborova2016] and Min-Sum[@Braunstein2016]) and the edge-removal strategies (the edge betweenness [@Freeman1977] and bridgeness[@Cheng2010Bridge] strategies) based on the cost of their attacks, measured in the same units, i.e., the fraction of removed links. Node-removal-based algorithms always delete all the links attached to the removed nodes, which is not economical under a limited cost. To resolve the high-cost problem in network attack, we proposed a hierarchical power iterative algorithm (HPI-Ncut) that partitions networks into small groups via edge removal and has the same objective function as the Ncut [@shi2000Ncut] spectral clustering algorithm. The results show that the HPI-Ncut algorithm outperforms all the node-removal-based attack algorithms and link evaluation algorithms on all the networks. In addition, the total complexity of the HPI-Ncut algorithm is only $O( n \cdot \log^{2+\epsilon}(n))$.
Acknowledgements {#acknowledgements .unnumbered}
================
The work of N.A.-F. has been funded by the EU Horizon 2020 SoBigData project under grant agreement No. 654024. The work of D.T. is funded by the Croatian Science Foundation grant IP-2013-11-9623 “Machine learning algorithms for insightful analysis of complex data structures”. X.-L.R. thanks the support from the China Scholarship Council (CSC).
Appendix A: Objective function {#appendix-a-objective-function .unnumbered}
==============================
Let $G=(V, E)$ be an undirected graph with adjacency matrix $W$ and diagonal degree matrix $D$, whose $i$-th entry $ D_{ii}=\sum_{j=1}^n W_{ij}$, is the degree of the node $i$. For $A\subseteq V$, let $cut(A)$ denote the number of links between $A$ and its complement $\bar{A}$. We define $$Ncut(A, \bar{A}):=cut(A, \bar{A})\left(\frac{1}{assoc(A)}+\frac{1}{assoc(\bar{A})} \right).$$ where $assoc(A)=\sum_{i\in A} D_{ii}$. If we describe the set $A$ by the normalized indicator vector $$x_A(i):=
\begin{cases}
1 & \ if \ i \in \ A, \\
-\frac{\sum_{j\in A} D_{jj}}{\sum_{j\in \bar{A}} D_{jj}} & \ otherwise
\end{cases}$$ one can show[@shi2000Ncut] that$$\label{iden}\min_{A\subseteq V}
Ncut(A, \bar{A})=\min_{\substack{A\subseteq V, \\ }} \frac{x_A^T\left(D-W \right) x_A}{x_A^TDx_A}.$$ From the definition of $Ncut$ one can see that finding a set $A$ which minimizes $Ncut(A, \bar{A})$ corresponds to partitioning the network into two sets $A$ and $\bar{A}$ such that
1. $cut(A, \bar{A})$ is small and hence there are only few links between $A$ and $\bar{A}$
2. $\left(\frac{1}{assoc(A)}+\frac{1}{assoc(\bar{A})} \right)$ is small and so the sets $A$ and $\bar{A}$ contain more or less equally many links.
Finding such a set $A$ is NP-hard [@shi2000Ncut], but by relaxing the constraints in the RHS of the identity (\[iden\]) one can find good approximate solutions $\tilde{A}$:
1. Find $$\label{Lap}
x_{relaxed}:=arg \min_{\substack{x\in\mathbb{R}^n, x\neq 0 \\ x^TD\vec{1}=0}} \frac{x^T\left(D-W \right) x}{x^TDx},$$ where we have imposed the condition $x^TD\vec{1}=0$, because every set $A$ for which $x_A$ is nontrivial, satisfies $x_A^TD\vec{1}=0$.
2. Set $$\chi_{\tilde{A}}(i)=round(x_{relaxed}(i)):=\begin{cases}
1 & \ if \ x_{relaxed}(i)\geq 0, \\
-1 & \ otherwise
\end{cases}$$ and define $\tilde{A}=\left\lbrace i\in Nodes\vert \chi_{\tilde{A}}(i)=1 \right\rbrace$.
The idea behind this method is that $\chi_{\tilde{A}}$ will be the best approximation of $x_{relaxed}$, out of the set of all vectors with entries in $ -1$ and $1 $, and since $x_{relaxed}$ minimizes $
\frac{x^T\left(D-W \right) x}{x^TDx},
$
$$Ncut(\tilde{A})=\frac{x_{\tilde{A}}^T\left(D-W \right) x_{\tilde{A}}}{x_{\tilde{A}}^TDx_{\tilde{A}}}\approx \frac{\chi_{\tilde{A}}^T\left(D-W \right) \chi_{\tilde{A}}}{\chi_{\tilde{A}}^TD\chi_{\tilde{A}}}$$
will be also close to $$\min_{A\subseteq Nodes} Ncut(A)=\min_{A\subseteq Nodes}\frac{x_A^T\left(D-W \right) x_A}{x_A^TDx_A}.$$
One can show that a solution to (\[Lap\]) is given by $x_{relaxed}=D^{\frac{1}{2}} v_2$, where $v_2$ is the eigenvector of the second smallest eigenvalue $\lambda_2$ of the **normalized Laplacian matrix** $$L_w= D^{-\frac{1}{2}}\left(D-W \right) D^{-\frac{1}{2}}.$$ $D$ is a diagonal matrix and if the network is connected we have $D_{ii}>0$. So the entries of the vectors $x_{relaxed}$ and $v_2$ have the same sign and therefore we have $round(x_{relaxed})=round(v_2)$.
Appendix B: Exponential convergence of the power iteration method {#appendix-b-exponential-convergence-of-the-power-iteration-method .unnumbered}
=================================================================
$L_w$ is real and symmetric. Therefore it has real eigenvalues $\lambda_1\leq \lambda_2\leq...\leq \lambda_n$ corresponding to eigenvectors $v_1,...,v_n$, which form an orthonormal basis of $\mathbb{R}^n$. One can easily show that $\lambda_1 = 0$ and $\lambda_n\leq 2$. So in order to compute $v_2$ we consider the matrix $\tilde{L}=2\cdot I-L_w$, which has the same eigenvectors $v_1,...,v_n$ as $L_w$. Now the corresponding eigenvalues are $\tilde{\lambda_1}=2\geq...\geq\tilde{\lambda_n}=2-\lambda_n\geq 0$, and in particular $v_1$ corresponds to the largest eigenvalue and $v_2$ to the second largest eigenvalue.
If $v$ is a random vector uniformly drawn from the unit sphere and we force it to be perpendicular to $v_1$ by setting $v\leftarrow v-v_1^Tv\cdot v_1$, then $v=\psi_2 v_2+...+\psi_n v_n$ and $\psi_2\neq 0$ almost surely. Furthermore $\tilde{L} v=\tilde{\lambda_2}\psi_2 v_2+...+\tilde{\lambda_n}\psi_n v_n $ and if we set $v^{(k)}:=\tilde{L}^k v$, then $$\label{conveig}\begin{split}
\frac{v^{(k)}}{\Vert v^{(k)}\Vert}&=\frac{\tilde{\lambda_2}^k\psi_2 v_2+...+\tilde{\lambda_n}^k\psi_n v_n}{\Vert\tilde{\lambda_2}^k\psi_2 v_2+...+\tilde{\lambda_n}^k\psi_n v_n\Vert}\\
&=\frac{\psi_2 v_2+\left(\frac{\tilde{\lambda_3}}{\tilde{\lambda_2}}\right) ^k\psi_3 v_3+...+\left(\frac{\tilde{\lambda_n}}{\tilde{\lambda_2}}\right) ^k\psi_n v_n}{\Vert\psi_2 v_2+\left(\frac{\tilde{\lambda_3}}{\tilde{\lambda_2}}\right) ^k\psi_3 v_3+...+\left(\frac{\tilde{\lambda_n}}{\tilde{\lambda_2}}\right) ^k\psi_n v_n\Vert}
\end{split}$$ converges with exponential speed to some eigenvector of $L$ with eigenvalue $\lambda_2$, because for every $i$ with $\lambda_i>\lambda_2$ we have $\frac{\tilde{\lambda_i}}{\tilde{\lambda_2}}<1$ and therefore $\left(\frac{\tilde{\lambda_i}}{\tilde{\lambda_2}}\right) ^k\psi_i v_i\rightarrow 0$. Generally one can deduce from (\[conveig\]) that $$\frac{{v^{(k)}}^T L_w v^{(k)}}{{v^{(k)}}^T v^{(k)}}=\frac{\lambda_2\vert \psi_2\vert ^2 +\lambda_3\vert \left(\frac{\tilde{\lambda_3}}{\tilde{\lambda_2}}\right) ^k\psi_3\vert ^2+...+\lambda_n \vert \left(\frac{\tilde{\lambda_n}}{\tilde{\lambda_2}}\right) ^k\psi_n\vert ^2}{|\psi_2|^2 +\vert \left(\frac{\tilde{\lambda_3}}{\tilde{\lambda_2}}\right) ^k\psi_3\vert ^2+...+\vert \left(\frac{\tilde{\lambda_n}}{\tilde{\lambda_2}}\right) ^k\psi_n\vert ^2}$$ and therefore this quantity converges to $\lambda_2$ with exponential speed.
Appendix C: Complexity {#appendix-c-complexity .unnumbered}
======================
The complexity of the spectral bisection algorithm is the same as the complexity of the power iteration method. The complexity of the power iteration method equals the number of iterations $\eta (n)$ times the complexity of multiplying $\tilde{L}$ and $v$. That is $O(\eta (n)\cdot n\cdot \bar{d})$ where $\bar{d}$ is the average degree of the network, or equivalently $O(|E|\cdot \eta (n))$ where $|E|$ is the number of edges.
Assuming that the spectral bisection algorithm always produces clusters of equal size, the complexity of the hierarchical spectral clustering algorithm is then given by the sum of:
- The complexity of applying spectral bisection once on the whole network. $\rightarrow O(\eta (n)\cdot n\cdot \bar{d})$.
- The complexity of applying it on each of the two clusters that we obtained from the first application of spectral bisection and which will have size $\frac{n}{2}$.
- The complexity of applying it on each of the 4 clusters that we obtained from the previous step and which will have size $\frac{n}{4}$.
- ..., and so on, down to the complexity of applying it on each of the $\frac{n}{2}=2^{\log_2(n)-1}$ clusters of size $\frac{n}{2^{\log_2(n)-1}}=2$ obtained in the final step.\
That is in total at most $$\begin{split}
&O(\eta (n)\cdot n\cdot \bar{d})+2\cdot O(\eta (n)\cdot \frac{n}{2}\cdot \bar{d})+...+2^{\log_2(n)-1}\cdot O(\eta (n)\cdot \frac{n}{2^{\log_2(n)-1}}\cdot \bar{d})\\
&=\sum_{i=0}^{\log_2(n)-1} 2^{i}\cdot O(\eta (n)\cdot \frac{n}{2^{i}}\cdot \bar{d})\\ &= O(\eta (n)\cdot n\cdot \bar{d}) \sum_{i=0}^{\log_2(n)-1} 1\\ &=O(\eta (n)\cdot n\cdot \bar{d}\cdot \log_2(n)),
\end{split}$$ where we have made the pessimistic assumption that the number of iterations and the average degrees are in each step as large as they were in the beginning.
The choice of the function $\eta (n)$ is somewhat involved. If the initial random choice of the vector $v$ is very unfortunate, many iterations may be needed to obtain a good approximation of the eigenvector $v_2$. In fact, if $\psi_2=0$, the algorithm would not converge to $v_2$ at all; however, this event has probability $0$.
Another condition that might slow down the computation of $v_2$ is when some of the other eigenvalues $\lambda_i$, $i\geq 3$, are close to $\lambda_2$. In that case $\frac{\tilde{\lambda_i}}{\tilde{\lambda_2}}$ is close to $1$, and one can see from equation (\[conveig\]) that the corresponding $v_i$ might have a large contribution to $v^{(k)}$ for a long time. However, when $\lambda_i$ is close to $\lambda_2$, this also implies that $$\frac{v_i^T L_w v_i}{v_i^T v_i}=\lambda_i$$ is close to $$\min_{\substack{\Vert v\Vert \neq 0, \\ v\perp v_1}} \frac{v^T L_w v}{v^T v}=\lambda_2$$ and therefore also provides a good partition of the network, since these are the quantities that are related to the cut size.
Due to this fast convergence, one can expect asymptotically good partitions when $\eta (n) =\log(n)^{1+\epsilon}$ with $\epsilon >0$, giving the hierarchical spectral clustering algorithm a complexity of $O( n \cdot \bar{d}\cdot \log^{2+\epsilon}(n))$ in general and $O( n \cdot \log^{2+\epsilon}(n))$ for sparse networks.
Appendix D: HPI-Ncut algorithm with different number of partitions {#appendix-d-hpi-ncut-algorithm-with-different-number-of-partitions .unnumbered}
===================================================================
The previous sections give us a clear picture of the performance of the different attack algorithms. Some algorithms, such as the HPI-Ncut algorithm, the Min-Sum algorithm, and the edge betweenness algorithm, work quite well, while others do not. What causes such a difference? Fig. \[fig\_remove\] may give us a clue. In this toy example, the original network is a two-cluster SBM network with 2078 nodes and 3729 links in total. Fig. \[fig\_remove\] visualizes the top 10% of links removed by the different algorithms. Note that the number of red links in fig. \[fig\_remove\](b-f) is the same, namely 373. However, compared with the edge betweenness and HPI-Ncut algorithms, far fewer links between the two clusters are removed by the EGP and CI algorithms, and more of the removed links lie inside the left or the right cluster. Furthermore, compared with the edge betweenness algorithm, the links removed by the HPI-Ncut algorithm are mainly concentrated on the bridge between the two clusters. This helps to partition the network into two disconnected clusters.
![Schematic diagram of the removed links in an SBM network with two clusters. (a) is the original network with all links. (b)-(f) show the top 10% of links (i.e., 373 links) removed by the different algorithms.[]{data-label="fig_remove"}](fig_remove_new.pdf "fig:"){width="16cm"}\
![The size of the GCC of the networks versus the link-removal proportion, for different target numbers of disconnected clusters in the HPI-Ncut algorithm.[]{data-label="fig_Ncut_k"}](fig_Ncut_k "fig:"){width="16cm"}\
In the previous sections, the default target number of disconnected clusters in the HPI-Ncut algorithm is set to 2. Fig. \[fig\_Ncut\_k\] shows the size of the GCC after a targeted attack by HPI-Ncut with different target numbers of disconnected clusters, on the SBM network with two clusters and with ten clusters, respectively. Fig. \[fig\_Ncut\_k\] indicates that when the original network contains fewer clusters, the target number of clusters in HPI-Ncut greatly affects the size of the GCC in the initial stage of the attack, while this influence declines sharply in the later part of the attack process. The target number has a smaller impact on the attack performance of HPI-Ncut when the original network contains more clusters. Furthermore, when the target number of disconnected clusters is set to 2, we always obtain the optimal outcome on both networks. To conclude, we recommend setting the default target number of disconnected clusters to 2 in the HPI-Ncut algorithm.
[^1]: <http://konect.uni-koblenz.de/networks/petster-hamster>
[^2]: <http://konect.uni-koblenz.de/networks/opsahl-powergrid>
[^3]: <https://snap.stanford.edu/data/as.html>
---
abstract: 'To comply with the equivalence principle, fields in curved spacetime can be quantized only in the neighborhood of each point, where one can construct a freely falling [*Minkowski*]{} frame with [*zero*]{} curvature. In each such frame, the geometric forces of gravity can be replaced by a self-interacting spin-2 field, as proposed by Feynman in 1962. At [*any fixed*]{} distance $R$ from a black hole, the vacuum in each freely falling volume element acts like a thermal bath of all particles with Unruh temperature $T_U=\hbar GM/2\pi c k_B R^2$. At the horizon $R=2GM/c^2$, the falling vacua show the Hawking temperature $T_H=\hbar c^3/8\pi GMk_B$.'
address: |
Institut für Theoretische Physik, Freie Universität Berlin, Arnimallee 14, D14195 Berlin, Germany\
and\
ICRANeT, Piazzale della Republica 1, 10 -65122, Pescara, Italy
author:
- 'H. Kleinert'
title: Equivalence Principle and Field Quantization in Curved Spacetime
---
1.) When including Dirac fermions into the theory of gravity, it is important to remember that the Dirac field is initially defined only by its transformation behavior under the Lorentz group. The invariance under general coordinate transformations can be incorporated only with the help of a vierbein field $e^ \alpha {}_\mu(x)$ which couples Einstein indices with Lorentz indices $ \alpha $. These serve to define anholonomic coordinate differentials $dx^ \alpha $ in curved spacetime $x^\mu$: $$dx^ \alpha =e^ \alpha {}_\mu(x)dx^\mu,
\label{@}$$ which at any given point have a Minkowski metric: $$ds^2 = \eta _{ \alpha \beta }dx^ \alpha dx^ \beta ,~~~~
\eta _{ \alpha \beta }=
\left(
\begin{array}{cccc}
1&0&0&0\\
0&-1&0&0\\
0&0&-1&0\\
0&0&0&-1
\end{array}
\right)
=\eta ^{ \alpha \beta }.
\label{@MIN}$$ With the help of the vierbein field one can write the action simply as [@MVF] $${\cal A}=\int d^4x \sqrt{-g} \bar\psi(x)[
\gamma ^ \alpha e_{ \alpha }{}^\mu(x)(i\partial _\mu-
\Gamma_{\mu}^{ \alpha \beta }
{\raisebox{0.095ex}{\scriptsize${\frac{1}{2}}$}}\Sigma_{ \alpha \beta })
-m]\psi(x), ~~
~g_{\mu \nu }(x)\equiv
\eta _{ \alpha \beta }e^ \alpha {}_\mu (x)
e^ \beta {}_\nu(x) ,~~~~
\label{@LL}$$ where $\Sigma_{ \alpha \beta }$ is the spin matrix, which is formed from the commutator of two Dirac matrices as $i[\gamma_ \alpha , \gamma _ \beta ]/4$, and $\Gamma_{\mu}^{ \alpha \beta }
\equiv
e^ \alpha {}_ \nu
e^ \beta {}_ \lambda
\Gamma_{\mu}{}^{ \nu \lambda }$ is the spin connection. It is constructed from combinations of the so-called objects of anholonomity $
\Omega_{\mu \nu }{}^ \lambda
={\raisebox{0.095ex}{\scriptsize${\frac{1}{2}}$}}[
e_\alpha {}^ \lambda \partial _\mu e^ \alpha {}_ \nu
-(\mu\leftrightarrow \nu )] $, by taking the sum $
\Omega^{\mu \nu \lambda}-
\Omega^{\nu \lambda\mu}+
\Omega^{ \lambda\mu \nu }$ and lowering two indices with the help of the metric $g_{\mu \nu }(x)$.
The theory of quantum fields in curvilinear spacetime has been set up on the basis of this Lagrangian, or a simpler version for bosons which we do not write down. The classical field equation is solved on the background metric $g_{\mu \nu }(x)$ in the entire spacetime. The field is expanded into the solutions, and the coefficients are quantized by canonical commutation rules, after which they serve as creation and annihilation operators on some global vacuum of the quantum system.
The purpose of this note is to make this procedure compatible with the equivalence principle. \
2.) If one wants to quantize the theory in accordance with the equivalence principle, one must introduce creation and annihilation operators of proper elementary particles. These, however, are defined as irreducible representations of the Poincaré group with a given mass and spin. The symmetry transformations of the Poincaré group can be performed only in a Minkowski spacetime. According to Einstein’s theory, and as confirmed by satellite experiments, we can remove gravitational forces locally at one point. The neighborhood will still be subject to gravitational fields. For the definition of elementary particles we need only a small neighborhood. In it, the geometric forces can be replaced by the forces coming from the spin-2 gauge field theory of gravitation, which was developed by R. Feynman in his 1962 lectures at Caltech [@Feynmann1]. This can be rederived by expanding the metric in powers of its deviations from the flat Minkowski metric. We define a Minkowski frame $x^a$ around the point of zero gravity, and extend it to an entire finite box without spacetime curvature. Inside this box, particle experiments can be performed and the transformation properties under the Poincaré group can be identified.
Inside the box, the fields are governed by the flat-spacetime action $${\cal A}=\int d^4x \sqrt{-g} \bar\psi(x)\{ \gamma ^ a
e_{ a }{}^b
(i\partial _ b-
\Gamma_{a}^{ bc }
{\raisebox{0.095ex}{\scriptsize${\frac{1}{2}}$}}\Sigma_{ bc })
-m\}\psi(x).
\label{@LL2}$$ In this expression, $ e_{ a }{}^b= \delta_a{}^b+{\raisebox{0.095ex}{\scriptsize${\frac{1}{2}}$}}h _{ a }{}^ b(x)$. The metric and the spin connection are defined as above, exchanging the indices $ \alpha , \beta ,\dots$ by $a,b,\dots~$. All quantities must be expanded in powers of $h_a{}^b$.
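As a numerical sanity check of this expansion, one can verify that $g_{\mu\nu}=\eta_{ab}e^a{}_\mu e^b{}_\nu$ with $e^a{}_\mu=\delta^a{}_\mu+\frac{1}{2}h^a{}_\mu$ reproduces $\eta_{\mu\nu}$ plus the symmetrized, index-lowered $h$ to first order, with an $O(h^2)$ remainder. The sketch below assumes the storage convention $e^a{}_\mu\to$ `e[a][mu]` and a toy magnitude for $h$; both are illustrative choices, not content of the text:

```python
import random

# diagonal Minkowski metric eta_{ab} as in (@MIN)
ETA = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]

def metric_from_vierbein(e):
    """g_{mu nu} = eta_{ab} e^a_mu e^b_nu (e stored as e[a][mu])."""
    return [[sum(ETA[a][b] * e[a][m] * e[b][n]
                 for a in range(4) for b in range(4))
             for n in range(4)] for m in range(4)]

random.seed(3)
eps = 1e-4
h = [[eps * random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
e = [[(1.0 if a == m else 0.0) + 0.5 * h[a][m] for m in range(4)]
     for a in range(4)]
g = metric_from_vierbein(e)

# to first order in h (eta diagonal, no sum over repeated m, n):
#   g_{mn} ~ eta_{mn} + (eta_{mm} h^m_n + eta_{nn} h^n_m) / 2
def linearized(m, n):
    return ETA[m][n] + 0.5 * (ETA[m][m] * h[m][n] + ETA[n][n] * h[n][m])
```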
Thus we have arrived at a standard local field theory in the freely falling Minkowski laboratory around the point of zero gravity. This action is perfectly Lorentz invariant, and the Dirac field can now be quantized without problems, producing an irreducible representation of the Poincaré group with states of definite momenta and spin orientation $|{\bf p},s_3 [m,s]\rangle$.
The Lagrangian governing the dynamics of the field $h_ a {}^ b (x)$ is well known from Feynman’s lecture [@Feynmann1]. If the laboratory is sufficiently small, we may work with the Newton approximation: $${\cal A}^h=-\frac{1}{8 \kappa }\int d^4x
\,h_{ab}
\epsilon ^{cade}
\epsilon _{c}{}^{bfg} \partial _d \partial _fh_{eg}+\dots~
, ~~~ \kappa = {8\pi G/c^3}, ~~G={\rm Newton~ constant},
\label{@}$$ where $ \epsilon ^{cade}$ is the antisymmetric unit tensor. If the laboratory is larger, for instance if it contains the orbit of the planet Mercury, we must also include the first post-Newtonian corrections.
Thus, although the Feynman spin-2 theory is certainly not a valid replacement of general relativity, it is so in a neighborhood of any freely falling point.
The vacuum of the Dirac field is, of course, not universal. Each point $x^\mu$ has its own vacuum state restricted to the associated freely falling Minkowski frame. \
3.) There is an immediate consequence of this quantum theory. If we consider a Dirac field in a black hole, and go to the neighborhood of any point, the quantization has to be performed in the freely falling Minkowski frame with smooth forces. These are incapable of creating pairs. An observer at a fixed distance $R$ from the center, however, sees the vacua of these Minkowski frames pass by with acceleration $a=GM/R^2$, where $G$ is Newton’s constant. At a given $R$, the frequency factor $e^{i \omega t}$ associated with the zero-point oscillations of each scalar particle wave will be Doppler shifted to $e^{i \omega e^{at/c}c/a}$, and this wave has frequencies distributed with a probability that behaves like $1/(e^{ 2\pi \Omega c/a}-1)$. Indeed, if we Fourier analyze this wave [@SIM]: $$\begin{aligned}
\int_{-\infty}^\infty dt \,e^{i \Omega t}
e^{i \omega e^{at/c}c/a}=
e^{-\pi \Omega c/2a}
\Gamma(i \Omega c/a)e^{-i \Omega c/a \log(\omega c/a)}(c/a),
\label{@}\end{aligned}$$ we see that the probability to find the frequency $ \Omega $ is $ |e^{-\pi \Omega c/2a}
\Gamma(i \Omega c/a)c/a|^2$, which is equal to $2\pi c/ (\Omega a)$ times $1/(e^{2\pi \Omega c/a}-1)$. The latter is a thermal Bose-Einstein distribution with an Unruh temperature $T_U=
\hbar a/2\pi c k_B$ [@UNR], where $k_B$ is the Boltzmann constant. The particles in this heat bath can be detected by suitable particle reactions as described in Ref. [@DETE].
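This equality is easy to check in a few lines. The sketch uses the standard identity $|\Gamma(ix)|^2=\pi/(x\sinh\pi x)$ as an input (not derived here), and takes the prefactor as $e^{-\pi\Omega c/2a}$, with the factor $\Omega$ included in the exponent so that the identity closes:

```python
import math

def unruh_weight(Omega, a, c=1.0):
    """|e^{-pi Omega c/(2a)} Gamma(i Omega c/a) (c/a)|^2, evaluated via
    the standard identity |Gamma(ix)|^2 = pi / (x sinh(pi x))."""
    x = Omega * c / a
    gamma_sq = math.pi / (x * math.sinh(math.pi * x))
    return math.exp(-math.pi * x) * gamma_sq * (c / a) ** 2

def planck_weight(Omega, a, c=1.0):
    """(2 pi c / (Omega a)) times the Bose-Einstein factor
    1 / (e^{2 pi Omega c/a} - 1)."""
    x = Omega * c / a
    return (2 * math.pi * c / (Omega * a)) / math.expm1(2 * math.pi * x)
```

The agreement is exact, since $e^{-\pi x}/\sinh(\pi x)=2/(e^{2\pi x}-1)$ with $x=\Omega c/a$.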
The Hawking temperature $T_H$ is equal to the Unruh temperature of the freely falling Minkowski vacua at the surface of the black hole, which lies at the horizon $R=
r_S\equiv 2GM/c^2$. There the Unruh temperature is equal to $
T_U|_{a=GM/R^2,R=2GM/c^2}=\hbar c^3/8\pi GMk_B= T_H$.
Note that there exists a thermal bath of nonzero Unruh temperature $T_U(R)=T_H r_S^2/R^2$ at [*any*]{} distance $R$ from the center — even on the surface of the earth, where the temperature is too small to be measurable. In the light of this it is surprising that the derivation of the thermal bath from semiclassical pair creation is based on a coordinate singularity at the horizon [@PW].
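To attach numbers to these statements, a short script with standard CODATA-style constants (the numerical values are inputs of this example, not content of the text) evaluates both temperatures:

```python
import math

hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m / s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # J / K
M_sun = 1.989e30          # kg (approximate solar mass)

def hawking_T(M):
    """T_H = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c ** 3 / (8 * math.pi * G * M * k_B)

def unruh_T(a):
    """T_U = hbar a / (2 pi c k_B)."""
    return hbar * a / (2 * math.pi * c * k_B)

r_S = 2 * G * M_sun / c ** 2      # Schwarzschild radius
T_H = hawking_T(M_sun)            # ~6.2e-8 K for a solar-mass black hole
T_earth = unruh_T(9.81)           # ~4e-20 K for Earth's surface gravity
```

This gives $T_H\approx 6.2\times10^{-8}\,$K for a solar mass, $T_U\approx 4\times10^{-20}\,$K at the Earth's surface, and confirms numerically that $T_U$ evaluated at $a=GM/R^2$ with $R=r_S$ coincides with $T_H$.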
For decreasing $R$ inside the horizon, the temperature rises to infinity, but this radiation cannot reach any outside observer.
\
Acknowledgment:\
The author thanks V. Belinski for pointing out the many papers on the semiclassical explanation of pair creation in a black hole.
[99]{}
Our notation follows\
[H. Kleinert]{}, [*Multivalued Fields in Condensed Matter, Electromagnetism, and Gravitation*]{}, World Scientific, Singapore 2009, pp. 1–497 ([http://www.physik.fu-berlin.de/\~kleinert/b11]{}).\
The only exception is that the vierbein field is here called $e^ \alpha {}_ \mu$ rather than $h^ \alpha {}_\mu$ to have the notation $h^a{}_b$ free for the small deviations of $e^a {}_b$ from the flat limit $ \delta^a {}_b $.
R.P. Feynman, F.B. Morinigo, D. Pines, [*Feynman Lectures on Gravitation*]{} (held in 1962 at Caltech), Frontiers in Physics, New York, 1962.
P.M. Alsing and P.W. Milonni, Am. J. Phys. [**72**]{}, 1524 (2004) (arXiv:quant-ph/0401170).
W.G. Unruh, Phys. Rev. D [**14**]{}, 870 (1976).
A. Higuchi, G.E.A. Matsas, and D. Sudarsky, Phys. Rev. D [**45**]{}, R3308 (1992); [*ibid.*]{} [**46**]{}, 3450 (1992); D.A.T. Vanzella and G.E.A. Matsas, Phys. Rev. D [**63**]{}, 014010 (2001); J. Mod. Phys. D [**11**]{}, 1573 (2002).
M.K. Parikh, F. Wilczek, Phys. Rev. Lett. [**85**]{}, 5042 (2000).
---
abstract: 'We show that the general Lévy process can be embedded in a suitable Fock space, classified by cocycles of the real line regarded as a group. The formula of de Finetti corresponds to coboundaries. Kolmogorov’s processes correspond to cocycles whose derivative is a cocycle of the Lie algebra of ${\bf R}$. The Lévy formula gives the general cocycle.'
author:
- |
R. F. Streater,\
Department of Mathematics,\
King’s College London,\
Strand, London,\
WC2R 2LS
date: '10/1/2002'
title: Fock Space Decomposition of Lévy Processes
---
Cyclic representations of groups
================================
Let $G$ be a group and $g\mapsto U_g$ a multiplier cyclic representation of $G$ on a Hilbert space ${\cal H}$, with multiplier $\sigma:G\times G\rightarrow {\bf C}$ and cyclic vector $\Psi$. This means that
- $U_gU_h=\sigma(g,h)U_{gh}$ for all $g,h\in G$.
- $U_e=I$ where $e$ is the identity of the group and $I$ is the identity operator on ${\cal H}$.
- Span$\,\left\{U_g\Psi:g\in G\right\}$ is dense in ${\cal
H}$.
If $\sigma=1$ we say that $U$ is a true representation.
Recall that a multiplier of a group $G$ is a measurable two-cocycle in $Z^2(G,U(1))$; so $\sigma$ is a map $G\times
G\rightarrow U(1)$ such that $\sigma(e,g)=\sigma(g,e)=1$ and $$\sigma(g,h)\sigma(g,hk)^{-1}\sigma(gh,k)\sigma(h,k)^{-1}=1.
\label{sigma}$$ (\[sigma\]) expresses the associativity of operator multiplication of the $U(g)$. $\sigma$ is a coboundary if there is a map $b:G\rightarrow U(1)$ with $b(e)=1$ and $$\sigma(g,h)=b(gh)/(b(g)b(h)).
\label{2boundary}$$ We also need the concept of a one cocycle $\psi$ in a Hilbert space ${\cal K}$ carrying a unitary representation $V$. $\psi$ is a map $G\rightarrow {\cal K}$ such that $$V(g)\psi(h)=\psi(gh)-\psi(g) \hspace{.5in}\mbox{for }g,h\in G.$$ $\psi$ is a coboundary if there is a vector $\psi_0\in{\cal K}$ such that $$\psi(g)=(V(g)-I)\psi_0.
\label{oneboundary}$$ Coboundaries are always cocycles. We say that, in (\[2boundary\]) and (\[oneboundary\]), $\sigma$ is the coboundary of $b$ and $\psi$ is the coboundary of $\psi_0$.
We say that two cyclic $\sigma$-representations $\{{\cal
H},U,\Psi\}$ and $\{{\cal K},V,\Phi\}$ are [*cyclically equivalent*]{} if there exists a unitary operator $W:{\cal
H}\rightarrow{\cal K}$ such that $V_g=WU_gW^{-1}$ for all $g\in
G$, and $W\Psi=\Phi$. Any cyclic multiplier representation $\{{\cal
H},U,\Psi\}$ defines a function $F$ on the group by $$F(g):=\langle\Psi,U_g\Psi\rangle,$$ which satisfies $\sigma$-positivity: $$\begin{aligned}
F(e)&=&1\\
\sum_{ij}\overline{\lambda}_i\lambda_j\sigma(g_i^{-1},g_j)F(g_i^{-1}g_j)
&\geq& 0.
\label{twisted}\end{aligned}$$ $F$ is called the characteristic function of the representation, because
- Two cyclic multiplier representations of $G$ are cyclically equivalent if and only if they have the same characteristic function;
- Given a function on $G$ satisfying $\sigma$-positivity, then there exists a cyclic $\sigma$-representation of which it is the characteristic function.
If $G=\{s\in{\bf R}\}$, $\sigma=1$ and $U_s$ is continuous, then $F$ obeys the hypotheses of Bochner’s theorem and defines a probability measure $\mu$ on ${\bf R}$. More generally, we can apply Bochner’s theorem (if $\sigma=1$) to any one-parameter subgroup $s\mapsto g(s)\in G_0\subseteq G$. Then $U_{g(s)},\;s\in{\bf R}$ is a one-parameter unitary group; its infinitesimal generator is a self-adjoint operator $X$ on ${\cal
H}$. The relation to $\mu$ is given as follows: let $X=\int\lambda
dE(\lambda)$ be the spectral resolution of $X$. Then $$\mu(\lambda_1,\lambda_2]=\langle\Psi,\left(E(\lambda_2)-E(\lambda_1)\right)
\Psi\rangle.$$ Conversely, given any random variable $X$ on a probability space $(\Omega,\mu)$, we can define the cyclic unitary representation of the group ${\bf R}$ by the multiplication operator $$U(s)=\exp\{isX\}$$ and use the cyclic vector $\Psi(\omega)=1$ on the Hilbert space $L^2(\Omega, d\mu)$. In this way, probability theory is reduced to the study of cyclic representations of abelian groups, and quantum probability to the study of cyclic $\sigma$-representations of non-abelian groups.
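For $G={\bf R}$ with $\sigma=1$, the positivity condition (\[twisted\]) can be checked directly for any characteristic function, since the quadratic form equals $E\,|\sum_j\lambda_je^{is_jX}|^2$ and is therefore real and non-negative. A minimal numerical check, with a toy discrete distribution as an assumption of the example:

```python
import cmath
import random

# a toy discrete random variable X: values with probabilities (measure mu)
vals  = [-1.0, 0.5, 2.0]
probs = [0.2, 0.5, 0.3]

def F(s):
    """Characteristic function F(s) = <Psi, U_s Psi> = E[e^{isX}]."""
    return sum(p * cmath.exp(1j * s * x) for p, x in zip(probs, vals))

random.seed(1)
s   = [random.uniform(-3, 3) for _ in range(6)]
lam = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(6)]

# positivity condition: sum_ij conj(lam_i) lam_j F(s_j - s_i) >= 0;
# for G = R and sigma = 1, g_i^{-1} g_j is just s_j - s_i
quad = sum(lam[i].conjugate() * lam[j] * F(s[j] - s[i])
           for i in range(6) for j in range(6))

# the same form written as E|sum_j lam_j e^{i s_j X}|^2
expect = sum(p * abs(sum(l * cmath.exp(1j * sj * x)
                         for l, sj in zip(lam, s))) ** 2
             for p, x in zip(probs, vals))
```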
Processes as Tensor Products
============================
Given a cyclic $\sigma$-representation $\{{\cal H},U,\Psi\}$ of a group $G$, we can get a multiplier representation of the product group $G^n:=G\times G\times \ldots \times G$ ($n$ factors) on ${\cal H}\otimes{\cal H}\ldots\otimes{\cal H}$, by acting on the vector $\Psi\otimes\Psi\ldots\otimes\Psi$ by the unitary operators $U(g_1,\ldots,g_n):=U(g_1)\otimes\ldots\otimes U(g_n)$, as each $g_j$ runs over the group $G$. The resulting cyclic $\sigma^{\otimes
n}$-representation is denoted $$\left\{{\cal H}^{\otimes n},U^{\otimes n},\Psi^{\otimes n}\right\}.$$ The twisted positive function on $G^n$ defined by this cyclic representation is easily computed to be $$F^{\otimes n}(g_1,\ldots,g_n)=F(g_1)F(g_2)\ldots F(g_n).$$ If $G$ has a one-parameter subgroup $G_0$, then the infinitesimal generators $X_j$ of this subgroup in the $j^{\rm th}$ place define random variables $(j=1,\ldots, n)$ that are all independent in the measure $\mu^{\otimes n}$ on ${\bf R}^n$ defined by $F^{\otimes n}$, and are all identically distributed. They can thus be taken as the increments of a process in discrete time $t=1,\ldots,n$. To get a process with time going to infinity, we can embed each tensor product ${\cal H}^{\otimes n}$ in the “incomplete infinite tensor product” of von Neumann, denoted $$\bigotimes_{j=1}^{j=\infty}\rule{.5cm}{0cm}^\Psi{\cal H}_j\hspace{.5in}\mbox{where }
{\cal H}_j={\cal H}\mbox{ for all }j.$$
It is harder to construct processes in continuous time. We made [@RFS] the following definition:
[*A cyclic $G$-representation $\{{\cal H},U,\Psi\}$ is said to be [*infinitely divisible*]{} if for each positive integer $n$ there exists another cyclic $G$-representation $\{{\cal
K},V,\Phi\}$ such that $\{{\cal H},U,\Psi\}$ is cyclically equivalent to $\{{\cal K}^{\otimes n},V^{\otimes n}, \Phi^{\otimes
n}\}$.*]{}
The picturesque notation $\left\{{\cal H}^{\otimes \frac{1}{n}},
U^{\otimes\frac{1}{n}},\Psi^{\otimes\frac{1}{n}}\right\}$ can be used for $\{{\cal K},V,\Phi\}$.
If $G={\bf R}$ then $\{{\cal H},U,\Psi\}$ is infinitely divisible if and only if the corresponding measure $\mu$ given by Bochner’s theorem is infinitely divisible [@RFS]. It is clear that $\{{\cal H},U,\Psi\}$ is infinitely divisible if and only if there exists a branch of $F(g)^{\frac{1}{n}}$ which is positive semi-definite on $G$.
This criterion was extended in [@PS] to $\sigma$-representations. In that case, for each $n$, there should exist an $n^{\rm th}$ root $\sigma(g,h)^{\frac{1}{n}}$ which is also a multiplier. One can then consider cyclic representations such that for each $n$, $F(g)^{\frac{1}{n}}$ has a branch which is $\sigma^{\frac{1}{n}}$-positive semi-definite.
If $\{{\cal H},U,\Psi\}$ is an infinitely divisible $G$-representation, then we may construct a continuous tensor product of the Hilbert spaces ${\cal H}_t$, where $t\in{\bf
R}$ and all the Hilbert spaces are the same. This gives us, in the non-abelian case, quantum stochastic processes with independent increments. See references. The possible constructions are classified in terms of cocycles of the group $G$. Here we shall limit discussion to the analysis of the Lévy formula in these terms.
The cocycle
===========
Let $F: G\rightarrow {\bf C}$ and $F(e)=1$. It is a classical result for $G={\bf R}$ that a function $F^{\frac{1}{n}}$ has a branch that is positive semidefinite for all $n>0$ if and only if $\log F$ has a branch $f$ such that $f(0)=0$ and $f$ is [*conditionally*]{} positive semidefinite. This is equivalent to $f(x-y)-f(x)-f(-y)$ being positive semidefinite. This result is easily extended to groups [@RFS] and $\sigma$-representations [@PS; @PS2]. Let us consider the case where $\sigma=1$. It follows that an infinitely divisible true cyclic representation $\{{\cal H},U,\Psi\}$ of $G$ defines a conditionally positive semidefinite function $f(g)={\rm log}\langle\Psi,
U(g)\Psi\rangle$, so that $$\sum_{j,k}\overline{\alpha}_j\alpha_k\left(\rule{0cm}{1cm}f\left(
g_j^{-1}g_k\right)-f\left(g_j^{-1}\right)-f(g_k)\right)\geq 0.
\label{qform}$$ We can use this positive semidefinite form to make Span$\,G$ into a pre-scalar product space, by defining $$\langle\psi(g),\psi(h)\rangle:=f(g^{-1}h)-f(g^{-1})-f(h),
\hspace{.5in}g,h\in G.
\label{scalar}$$ Let ${\cal K}$ be the Hilbert space obtained by separating and completing Span$\,G$ with respect to this form. There is a natural injection $\psi:G\rightarrow{\cal K}$, namely, $g\mapsto [g]$, the equivalence class of $g$ given by the relation $g\sim h$ if the seminorm defined by (\[qform\]) vanishes on $g-h$. The left action of the group $G$ on these vectors is not quite unitary; in fact the following is a unitary representation [@A]: $$V(h)\psi(g):=\psi(hg)-\psi(h).$$ One just has to check from (\[scalar\]) that the group law $V(g)V(h)=V(gh)$ holds, and that $$\langle V(h)\psi(g_1),V(h)\psi(g_2)\rangle=\langle\psi(g_1),\psi(g_2)\rangle.$$ Thus we see that $\psi(g)$ is a one-cocycle relative to the $G$-representation $V$ [@A].
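A minimal worked example for the additive group ${\bf R}$: taking the conditionally positive semidefinite function $f(x)=-(a/2)x^2+ibx$, the form (\[scalar\]) collapses to $\langle\psi(g),\psi(h)\rangle=a\,gh$, a rank-one positive semidefinite kernel realized by the explicit cocycle $\psi(g)=\sqrt{a}\,g$ in ${\cal K}={\bf R}$. A numerical check, with toy values of $a,b$ assumed:

```python
import random

# toy Gaussian branch of log F:  f(x) = -(a/2) x^2 + i b x
a, b = 1.7, 0.9

def f(x):
    return -(a / 2) * x * x + 1j * b * x

def K(g, h):
    """<psi(g), psi(h)> = f(g^{-1}h) - f(g^{-1}) - f(h) on the additive
    group R, where g^{-1} = -g."""
    return f(h - g) - f(-g) - f(h)

# the linear ib-term cancels and the kernel collapses to a*g*h, so the
# quadratic form below equals a*|sum_j c_j g_j|^2 >= 0
pts = [-2.0, -0.5, 0.3, 1.4]
random.seed(2)
coef = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in pts]
quad = sum(coef[i].conjugate() * coef[j] * K(pts[i], pts[j])
           for i in range(len(pts)) for j in range(len(pts)))
```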
The embedding theorem
=====================
Given a Hilbert space ${\cal K}$, the [*Fock space*]{} defined by ${\cal K}$ is the direct sum of all symmetric tensor products of ${\cal K}$, $$EXP\,{\cal K}:={\bf C}\bigoplus{\cal K}\bigoplus({\cal
K}\otimes{\cal K})_s \bigoplus\ldots .$$ The element $1\in{\bf C}$ is called the Fock vacuum. The following [*coherent states*]{} form a total set in $EXP\,{\cal K}$: $$EXP\,\psi:=1+\psi+(1/2!)\psi\otimes\psi+\ldots,
\hspace{.5in}\psi\in{\cal K}.$$ The notation is natural, in view of the easy identity $$\langle
EXP\,\psi(g),EXP\,\psi(h)\rangle=\exp\{\langle\psi(g),\psi(h)\rangle\}.$$ Then the embedding theorem [@RFS] says that if $\{{\cal
H},U,\Psi\}$ is an infinitely divisible cyclic representation, then it is cyclically equivalent to the cyclic representation $W$ on $EXP\,{\cal K}$, with the Fock vacuum as the cyclic vector and the unitary representation $W(h)$ defined on the total set of coherent states by $$W(h)EXP\,\psi(g)=[F(hg)/F(g)]\,EXP\,\psi(hg).$$ The proof is simply a verification. This result has been called [@RFS; @PS] the Araki-Woods embedding theorem; more properly this name belongs to the embedding [@RFS] of the [*process*]{} that one constructs from $\{{\cal H},U,\Psi\}$, which is similar to a deep result in [@AW]. For the group ${\bf R}$ it amounts to the Wiener chaos expansion.
The Lévy Formula
================
Although the above theory was developed for quantum mechanics, it includes the theory of Lévy processes, the class of processes with independent increments. This is just the case where the group in question is ${\bf R}$, or ${\bf R}^n$. The latter group has projective representations for $n>1$, and using these leads to the free quantised field [@RFS0]. The true representations lead to generalised random fields.
Every projective representation of ${\bf R}$ is a true representation, which is multiplicity free if it is cyclic. By reduction theory, it is then determined by a measure on the dual group, here ${\bf R}$. Araki showed that a one-cocycle can be algebraic or topological. For the group ${\bf R}$, the algebraic cocycles are all of the form $f(x-y)-f(x) -f(-y)=axy$. This is satisfied by the Gaussian term $\log F(x)=-\frac{a}{2}x^2+ibx$, and this is the only possibility. The Poisson($\lambda$) process is an example of a coboundary, when $\log F(t) =c\lambda(e^{ipt}-1)$ for some $p$, the increment of the jumps. The weighted mixture of these coboundaries gives de Finetti’s formula [@D]: $$\log F(t)=\lambda\left\{ibt-\frac{a^2t^2}{2}+c\int\left(e^{ipt}-1\right)
dP(p)\right\}.$$ That this is not the most general infinitely divisible measure was recognised by Kolmogorov [@K]. In our terms, this is the statement that not all topological cocycles are coboundaries (the cohomology is non-trivial). Kolmogorov considered random variables with finite variance relative to the measure $dP$. This is equivalent in our terms to $dP=|\widehat{\psi}(p)|^2dp$ and the cocycle $\psi$ being of the form $\psi(x)=(V(x)-I)\psi_0$, where $i\partial_x\psi_0$ is square integrable over the group ${\bf R}$, but $\psi_0$ might not be. Thus, $\psi$ is a cocycle for the Lie algebra of the group, a case treated in [@RFS3]. This gives us Kolmogorov’s formula $$\begin{aligned}
\log F(t)&=&\lambda\left\{ibt-a^2t^2/2+\right.\nonumber\\
&+&\left.\int\left(e^{ipt} -1-itp\right)|\widehat{\psi}(p)|^2dp\right\}.\end{aligned}$$ The term $\int(-itp)|\widehat{\psi}(p)|^2dp$ is not required to converge on its own near $p=0$, since the function $M=e^{ipt}-1-ipt$ behaves as $p^2$ near the origin. But to retain a meaning, Kolmogorov’s formula does need $p|\widehat{\psi}(p)|^2$ to be integrable at infinity. This is not needed for the general cocycle, so the formula is not the most general.
Lévy gave the answer [@L] by replacing $M$ by $$e^{ipt}-1-ipt/(1+p^2),$$ so that $\widehat{\psi}$ has no constraint at infinity other than being $L^2$. The general form of an infinitely divisible random variable, the Lévy formula, in effect constructs the most general cocycle of the group ${\bf R}$ by requiring only that $p\widehat{\psi}(p)$ should be locally square-integrable at $p=0$, and $\widehat{\psi}(p)$ should be square-integrable at all other points.
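The coboundary piece of these formulas is easy to test numerically: $\exp\{\lambda(e^{ipt}-1)\}$ is exactly the characteristic function of $pN$ with $N\sim{\rm Poisson}(\lambda)$, as a truncated series confirms (the parameter values below are assumptions of the sketch):

```python
import cmath
import math

lam, p = 2.3, 0.7   # jump rate and jump size (toy values)

def F_coboundary(t):
    """Coboundary piece of log F: F(t) = exp(lam * (e^{ipt} - 1))."""
    return cmath.exp(lam * (cmath.exp(1j * p * t) - 1))

def F_poisson(t, kmax=80):
    """E[e^{i t p N}] for N ~ Poisson(lam), by a truncated series."""
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               * cmath.exp(1j * t * p * k) for k in range(kmax))
```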
[99]{}
Araki, H., Factorizable representations of current algebra, non-commutative extension of the Lévy-Khinchin formula, and cohomology of a solvable group with values in a Hilbert space, Publ. Res. Inst. Math. Sci. (RIMS), Kyoto, 1970/71.
Araki, H., and Woods, E. J., Complete Boolean Lattices of Type I von Neumann Algebras, Publ. Res. Inst. Math. Sci. (Kyoto), [**2**]{}, 157-, 1966.
de Finetti, B., Sulla funzione a incremento aleatorio, Atti Acad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. [**10**]{}, 163-168, 325-329, 548-553, 1929.
Erven, J., and Falkowski, B.-J., Low order cohomology and applications, Lecture Notes in Maths., [**877**]{}, Springer-Verlag, 1981.
Falkowski, B.-J., Factorizable and infinitely divisible PUA representations of locally compact groups, J. of Mathematical Phys., [**15**]{}, 1060-1066, 1974.
Guichardet, A., Symmetric Hilbert spaces and related topics, Lecture Notes in Maths., [**261**]{}, Springer-Verlag, 1972.
Kolmogorov, A. N., Sulla forma generale di un processo stocastico omogeneo, Atti Acad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. [**15**]{}, 805-808, 866-869, 1932.
Lévy, P., Sur les intégrales dont les éléments sont des variables aléatoires indépendentes, Annali R. Scuola Norm. Sup. Pisa, [**3**]{}, 337-366, 1934; [**4**]{}, 217-218, 1935.
Parthasarathy, K. R., and Schmidt, K., Positive definite kernels, continuous tensor products, and central limit theorems..., Lecture Notes in Maths., [**272**]{}, Springer-Verlag, 1972.
Parthasarathy, K. R., and Schmidt, K., Factorizable representations of current groups and the Araki-Woods embedding theorem, Acta Math., [**128**]{}, 53-71, 1972.
Streater, R. F., Current Commutation Relations and Continuous Tensor Products, Nuovo Cimento, [**53A**]{}, 487-495, 1968.
Streater, R. F., Current commutation relations, continuous tensor products, and infinitely divisible group representations, pp. 247-263 in [*Local Quantum Theory*]{}, ed. R. Jost, Academic Press, 1969.
Streater, R. F., A continuum analogue of the lattice gas, Commun. Math. Phys., [**12**]{}, 226-232, 1969.
Streater, R. F., Infinitely divisible representations of Lie algebras, Zeits. fur Wahr. und verw. Gebiete, [**19**]{}, 67-80, 1971.
Vershik, A. M., Gelfand, I. M., and Graev, M. J., Irreducible representations of the group $G^X$ and cohomologies, Funct. Anal. and Appl. (translated from Russian), [**8**]{}, 67-69, 1974.
---
abstract: 'An ever increasing amount of our digital communication, media consumption, and content creation revolves around videos. We share, watch, and archive many aspects of our lives through them, all of which are powered by strong video compression. Traditional video compression is laboriously hand designed and hand optimized. This paper presents an alternative in an end-to-end deep learning codec. Our codec builds on one simple idea: video compression is repeated image interpolation. It thus benefits from recent advances in deep image interpolation and generation. Our deep video codec outperforms today’s prevailing codecs, such as H.261 and MPEG-4 Part 2, and performs on par with H.264.'
author:
- 'Chao-Yuan Wu, Nayan Singhal, Philipp Krähenbühl'
bibliography:
- '../bib.bib'
title: Video Compression through Image Interpolation
---
[Teaser figure: visual comparison of a decoded frame under MPEG-4 Part 2 (MS-SSIM = 0.946), H.264 (MS-SSIM = 0.980), and our codec (MS-SSIM = [**0.984**]{}).]

\[fig:teaser\]
Introduction
============
Video commands the lion’s share of internet data, and today makes up three-fourths of all internet traffic [@cisco]. We capture moments, share memories, and entertain one another through moving pictures, all powered by ever more powerful digital cameras and video compression. Strong compression significantly reduces internet traffic, saves storage space, and increases throughput. It drives applications like cloud gaming, real-time high-quality video streaming [@richardson2002video], or 3D and 360-degree videos. Video compression even helps better understand and parse videos using deep neural networks [@coviar]. Despite these obvious benefits, video compression algorithms are still largely hand designed. The most competitive video codecs today rely on a sophisticated interplay between block motion estimation, residual color patterns, and their encoding using the discrete cosine transform and entropy coding [@schwarz2007overview]. While each part is carefully designed to compress the video as much as possible, the overall system is not jointly optimized, and has largely been untouched by end-to-end deep learning.
This paper presents, to the best of our knowledge, the first end-to-end trained deep video codec. The main insight of our codec is a different view on video compression: We frame video compression as repeated image interpolation, and draw on recent advances in deep image generation and interpolation. We first encode a series of anchor frames (key frames), using standard deep image compression. Our codec then reconstructs all remaining frames by interpolating between neighboring anchor frames. However, this image interpolation is not unique. We additionally provide a small and compressible code to the interpolation network to disambiguate different interpolations, and encode the original video frame as faithfully as possible. The main technical challenge is the design of a compressible image interpolation network.
We present a series of increasingly powerful and compressible encoder-decoder architectures for image interpolation. We start with a vanilla U-net interpolation architecture [@ronneberger2015u] for reconstructing frames other than the key frames. This architecture makes good use of repeating static patterns through time, but it struggles to properly disambiguate the trajectories of moving patterns. We then directly incorporate an offline motion estimate, from either block-motion estimation or optical flow, into the network. The new architecture interpolates spatial U-net features using the pre-computed motion estimate, and improves compression rates by an order of magnitude over deep image compression. This model captures most, but not all, of the information we need to reconstruct a frame. We additionally train an encoder that extracts the content not present in either of the source images, and represents it compactly. Finally, we reduce any remaining spatial redundancy, and compress the resulting codes using a 3D PixelCNN [@oord2016pixel] with adaptive arithmetic coding [@witten1987arithmetic].
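Interpolating spatial features by a pre-computed motion estimate boils down to a backward warp with bilinear sampling. As a framework-free illustration of that underlying operation (the zero-padding rule at the map border is an assumption of this sketch, not a detail from the paper):

```python
import math

def warp(feat, flow):
    """Backward-warp a 2-D feature map by a dense flow field with
    bilinear sampling: out[y][x] samples feat at (y + fy, x + fx)."""
    H, W = len(feat), len(feat[0])

    def px(yy, xx):                       # zero padding outside the map
        return feat[yy][xx] if 0 <= yy < H and 0 <= xx < W else 0.0

    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            fx, fy = flow[y][x]
            sx, sy = x + fx, y + fy
            x0, y0 = math.floor(sx), math.floor(sy)
            ax, ay = sx - x0, sy - y0
            out[y][x] = ((1 - ax) * (1 - ay) * px(y0, x0)
                         + ax * (1 - ay) * px(y0, x0 + 1)
                         + (1 - ax) * ay * px(y0 + 1, x0)
                         + ax * ay * px(y0 + 1, x0 + 1))
    return out
```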
To further reduce bitrate, our video codec applies image interpolation in a hierarchical manner. Each consecutive level in the hierarchy interpolates between ever closer reference frames, and is hence more compressible. Each level in the hierarchy uses all previously decompressed images.
We compare our video compression algorithm to state-of-the-art video codecs (HEVC, H.264, MPEG-4 Part 2, H.261), and to various image interpolation baselines. We evaluate all algorithms on two standard datasets of uncompressed video: the Video Trace Library (VTL) [@vtl] and the Ultra Video Group (UVG) [@uvg]. We additionally collect a subset of the Kinetics dataset [@carreira2017quo] for both training and testing. The Kinetics subset contains high-resolution videos, which we down-sample to remove compression artifacts introduced by prior codecs on YouTube. The final dataset contains 2.8M frames. Our deep video codec outperforms all deep learning baselines, MPEG-4 Part 2, and H.261 in both compression rate and visual quality, measured by MS-SSIM [@wang2003multiscale] and PSNR. We are on par with the state-of-the-art H.264 codec. [Figure \[fig:qual\]]{} shows a visual comparison. All the data is publicly available, and we will publish our code upon acceptance.
Related Work
============
Video compression algorithms must specify an encoder for compressing the video, and a decoder for reconstructing the original video. The encoder and the decoder together constitute a codec. A codec has one primary goal: Encode a series of images in the fewest number of bits possible. Most compression algorithms find a delicate trade-off between compression rate and reconstruction error. The simplest codecs, such as motion JPEG or GIF, encode each frame independently, and heavily rely on image compression.
#### Image compression.
For images, deep networks yield state-of-the-art compression ratios with impressive reconstruction quality [@waveone2017; @johnston2017improved; @toderici2017full; @balle2016end; @Theis2017a]. Most of them train an autoencoder with a small binary bottleneck layer to directly minimize distortion [@waveone2017; @toderici2017full; @johnston2017improved]. A popular variant progressively encodes the image using a recurrent neural network [@toderici2017full; @baig2017learning; @johnston2017improved]. This allows for variable compression rates with a single model. We extend this idea to variable rate video compression.
Deep image compression algorithms use fully convolutional networks to handle arbitrary image sizes. However, the bottleneck in fully convolutional networks still contains spatially redundant activations. Entropy coding further compresses this redundant information [@mentzer2018conditional; @waveone2017; @toderici2017full; @balle2016end; @Theis2017a]. We follow Mentzer [@mentzer2018conditional] and use adaptive arithmetic coding on probability estimates of a Pixel-CNN [@oord2016pixel].
Learning the binary representation is inherently non-differentiable, which complicates gradient based learning. Toderici [@toderici2017full] use stochastic binarization and backpropagate the derivative of the expectation. Agustsson [@agustsson2017soft] use soft assignment to approximate quantization. Balle [@balle2016end] replace the quantization by adding uniform noise. All of these methods work similarly and allow for gradients to flow through the discretization. In this paper, we use stochastic binarization [@toderici2017full].
Combining this bag of techniques, deep image compression algorithms offer a better compression rate than hand-designed algorithms, such as JPEG or WebP [@webp], at the same level of image quality [@waveone2017]. Deep image compression algorithms heavily exploit the spatial structure of an image. However, they miss out on a crucial signal in videos: time. Videos are temporally highly redundant. No deep image compression can compete with state-of-the-art (shallow) video compression, which exploits this redundancy.
#### Video compression.
Hand-designed video compression algorithms, such as H.263, H.264, or HEVC (H.265) [@le1991mpeg], build on two simple ideas: They decompose each frame into blocks of pixels, known as macroblocks, and they divide frames into image (I) frames and referencing (P or B) frames. I-frames directly compress video frames using image compression. Most of the savings in video codecs come from referencing frames. P-frames borrow color values from preceding frames. They store a motion estimate and a highly compressible difference image for each macroblock. B-frames additionally allow bidirectional referencing, as long as there are no circular references. Both H.264 and HEVC encode a video in a hierarchical way. I-frames form the top of the hierarchy. In each consecutive level, P- or B-frames reference decoded frames at higher levels. The main disadvantages of traditional video compression are the intensive engineering effort required and the difficulty of joint optimization. In this work, we build a hierarchical video codec using deep neural networks. We train it end-to-end without any hand-engineered heuristics or filters. Our key insight is that referencing (P or B) frames are a special case of image interpolation.
Learning-based video compression is largely unexplored, in part due to difficulties in modeling temporal redundancy. Tsai [@tsai2017learning] propose a deep post-processing filter that encodes the errors of H.264 in domain-specific videos. However, it is unclear if and how the filter generalizes to an open domain. To the best of our knowledge, this paper proposes the first general deep network for video compression.
#### Image interpolation and extrapolation.
Image interpolation seeks to hallucinate an unseen frame between two reference frames. Most image interpolation networks build on an encoder-decoder network architecture to move pixels through time [@jia2016dynamic; @niklaus2017sepvideo; @lyty-vfsdv-17; @jiang2017super]. Jia [@jia2016dynamic] and Niklaus [@niklaus2017sepvideo] estimate a spatially-varying convolution kernel. Liu [@lyty-vfsdv-17] produce a flow field. All three methods then combine two predictions, forward and backward in time, to form the final output.
Image extrapolation is more ambitious and predicts a future video from a few frames [@mcl-dmvpb-16], or from a still image [@vpt-gvsd-16; @xue2016visual]. Both image interpolation and extrapolation work well for small timesteps, e.g. for creating slow-motion video [@jiang2017super] or predicting a fraction of a second into the future. However, current methods struggle with larger timesteps, where the interpolation or extrapolation is no longer unique, and additional side information is required. In this work, we extend image interpolation and incorporate a few compressible bits of side information to reconstruct the original video.
Preliminary
===========
Let $I^{(t)} \in \RR^{W\times H\times 3}$ denote a series of frames for $t \in \{0,1,\ldots\}$. Our goal is to compress each frame $I^{(t)}$ into a binary code $b^{(t)} \in \cbr{0, 1}^{N_t}$. An encoder $E: \{I^{(0)},I^{(1)},\ldots\} \to \{b^{(0)},b^{(1)},\ldots\}$ and decoder $D:\{b^{(0)},b^{(1)},\ldots\} \to \{\hat I^{(0)},\hat I^{(1)},\ldots\}$ compress and decompress the video respectively. $E$ and $D$ have two competing aims: Minimize the total bitrate ${\sum_t N_t}$, and reconstruct the original video as faithfully as possible. We measure the reconstruction error with an L1 loss $\ell(\hat{I}, I) = \|\hat{I} - I\|_1$.
#### Image compression.
The simplest encoders and decoders process each image independently: $E_I: I^{(t)} \to b^{(t)}$, $D_I:b^{(t)} \to \hat I^{(t)}$. Here, we build on the model of Toderici [@toderici2017full], which encodes and reconstructs an image progressively over $K$ iterations. At each iteration, the model encodes a residual $r_k$ between the previously coded image and the original frame: $$\begin{aligned}
r_{0} &:= I\notag\\
b_{k} &:= E_I\rbr{r_{k-1}, g_{k-1}}, & & & r_{k} &:= r_{k-1} - D_I\rbr{b_k, h_{k-1}}, &&& \text{for } k = 1, 2, \dots\notag\end{aligned}$$ where $g_k$ and $h_k$ are latent Conv-LSTM states that are updated at each iteration. All iterations share the same network architecture and parameters forming a recurrent structure. The training objective minimizes the distortion at all the steps ${\sum_{k=1}^K \|r_{k}\|_1}$. The reconstruction $\hat I_K = \sum_{k=1}^K D_I(b_k)$ allows for a variable bitrate encoding depending on the choice of $K$.
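The progressive residual loop above can be sketched as follows. The toy `encode`/`decode` callables are hypothetical stand-ins for the Conv-LSTM networks $E_I$ and $D_I$; the LSTM states $g_k, h_k$ and the binarization are omitted for brevity.

```python
import numpy as np

def progressive_compress(I, encode, decode, K):
    """Sketch of Toderici-style progressive residual coding: at each step k,
    the encoder codes the remaining residual r_{k-1}, the decoded output is
    subtracted to form r_k, and the reconstruction is the running sum of all
    decoded parts. `encode`/`decode` are hypothetical stand-ins for E_I/D_I."""
    r = I.copy()                          # r_0 := I
    codes, recon = [], np.zeros_like(I)
    for _ in range(K):
        b = encode(r)                     # b_k := E_I(r_{k-1})
        d = decode(b)                     # decoded share of the residual
        r = r - d                         # r_k := r_{k-1} - D_I(b_k)
        recon = recon + d                 # hat I_K = sum_k D_I(b_k)
        codes.append(b)
    return codes, recon
```

Running more iterations spends more bits but drives the residual, and hence the distortion, toward zero, which is exactly what enables the variable bitrate.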
Both the encoder and the decoder consist of $4$ Conv-LSTMs with stride $2$. The bottleneck consists of a binary feature map with $L$ channels and 16 times smaller spatial resolution in both width and height. Toderici use a stochastic binarization to allow a gradient signal through the bottleneck. Mathematically, this reduces to *REINFORCE* [@w-ssgac-92] on sigmoidal activations. At inference time, the most likely state is selected.
This architecture yields state-of-the-art image compression performance. However, it fails to exploit any temporal redundancy.
#### Video compression.
Modern video codecs process I-frames using an image encoder $E_I$ and decoder $D_I$. P-frames store a block motion estimate ${\mathcal{T}}\in \RR^{W\times H\times 2}$, similar to an optical flow field, and a residual image ${\mathcal{R}}$, capturing the appearance changes not explained by motion. Both motion estimate and residual are jointly compressed using entropy coding. The original color frame is then recovered by $$\begin{aligned}
I_i^{(t)} = I^{(t-1)}_{i - {\mathcal{T}}^{(t)}_i} + {\mathcal{R}}^{(t)}_i, \label{eq:motion_compensation}\end{aligned}$$ for every pixel $i$ in the image. The compression is uniquely defined by a block structure and motion estimate ${\mathcal{T}}$. The residual is simply the difference between the motion interpolated image and the original.
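The per-pixel recovery in Eq. \[eq:motion\_compensation\] can be sketched directly in NumPy. The integer per-pixel offsets and border clipping are simplifying assumptions of this illustration, not a codec's exact block structure or border handling.

```python
import numpy as np

def reconstruct_p_frame(prev, motion, residual):
    """Recover a P-frame: every pixel i copies the color at i - T_i in the
    previous decoded frame and adds the stored residual. `motion` holds
    per-pixel integer (dy, dx) offsets; out-of-frame sources are clipped."""
    H, W = prev.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(ys - motion[..., 0], 0, H - 1)
    src_x = np.clip(xs - motion[..., 1], 0, W - 1)
    return prev[src_y, src_x] + residual
```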
In this paper, we propose a more general view on video compression through image interpolation. We augment an image interpolation network with motion information and add a compressible bottleneck layer.
[0.38]{} ![image](keynote/teaser5_2.pdf){height="4cm"}
[\[fig:frame\_model\]]{}
[0.61]{} ![image](keynote/teaser5.pdf){height="3.8cm"}
[\[fig:interpol\_model\]]{}
[\[fig:model\]]{}
Video Compression through Interpolation {#sec:model}
=======================================
Our codec first encodes I-frames using the compression algorithm of Toderici [@toderici2017full]; see [Figure \[fig:frame\_model\]]{}. We choose every $n$-th frame as an I-frame. The remaining $n-1$ frames are interpolated. We call those frames R-frames, as they reference other frames. We choose $n=12$ in practice, but also experimented with larger groups of pictures. We will first discuss our basic interpolation network, and then show a hierarchical interpolation setup that further reduces the bitrate.
Interpolation network
---------------------
In the simplest version of our codec, all R-frames use a blind interpolation network to interpolate between two key-frames $I_1$ and $I_2$. Specifically, we train a context network $C: I \to \{f^{(1)},f^{(2)},\ldots\}$ to extract a series of feature maps $f^{(l)}$ of various spatial resolutions. For notational simplicity let $f := \{f^{(1)},f^{(2)},\ldots\}$ be the collection of all context features. In our implementation, we use the upconvolutional feature maps of a U-net architecture with increasing spatial resolution $\frac{W}{8}\times\frac{H}{8}$, $\frac{W}{4}\times\frac{H}{4}$, $\frac{W}{2}\times\frac{H}{2}$, $W\times H$, in addition to the original image.
We extract context features $f_1$ and $f_2$ for key-frames $I_1$ and $I_2$ respectively, and train a network $D$ to interpolate the frame $\hat{I} := D\rbr{f_1, f_2}$. $C$ and $D$ are trained jointly. This simple model favors a high compression rate over image quality, as none of the R-frames capture any information not present in the I-frames.
Without any further information, it is impossible for the network to faithfully reconstruct a frame. What can we provide to the network to make interpolation easier?
#### Motion compensated interpolation.
A great candidate is ground truth motion information. It defines where pixels move through time and greatly disambiguates interpolation. We tried both optical flow [@farneback2003two] and block motion estimation [@richardson2002video]. Block motion estimates are easier to compress, but optical flow retains finer details.
We use the motion information to warp each context feature map $$\begin{aligned}
\tilde f^{(l)}_i = f^{(l)}_{i - {\mathcal{T}}_i},\end{aligned}$$ at every spatial location $i$. We scale the motion estimation with the resolution of the feature map, and use bilinear interpolation for fractional locations. The decoder now uses the warped context features $\tilde f$ instead, which allows it to focus solely on image creation, and ignore motion estimation.
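A minimal NumPy sketch of this feature warp, $\tilde f^{(l)}_i = f^{(l)}_{i - {\mathcal{T}}_i}$, under the simplifying assumptions that the motion field comes at full image resolution, that it is downsampled to the feature grid by strided sampling, and that features are channel-last arrays (the network's actual resampling may differ):

```python
import numpy as np

def warp_features(f, motion):
    """Warp a context feature map f of shape (h, w, c) by a motion field of
    shape (H, W, 2): the flow is rescaled to the feature resolution, and
    bilinear interpolation handles fractional source positions."""
    h, w = f.shape[:2]
    H, W = motion.shape[:2]
    step_y, step_x = H // h, W // w
    t = motion[::step_y, ::step_x].astype(np.float64)[:h, :w]
    t[..., 0] *= h / H                     # scale magnitudes to feature grid
    t[..., 1] *= w / W
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(ys - t[..., 0], 0, h - 1)  # fractional source coordinates
    sx = np.clip(xs - t[..., 1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (sy - y0)[..., None], (sx - x0)[..., None]
    top = f[y0, x0] * (1 - wx) + f[y0, x1] * wx
    bot = f[y1, x0] * (1 - wx) + f[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```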
Motion compensation greatly improves the interpolation network, as we will show in [Section \[sec:ablation\]]{}. However, it still only produces content seen in either reference image. Variations beyond motion, such as changes in lighting, deformation, or occlusion, are not captured by this model.
Our goal is to encode the remaining information in a highly compact form.
#### Residual motion compensated interpolation.
Our final interpolation model combines motion compensated interpolation with compressed residual information, capturing both the motion and appearance differences in the interpolated frames. [Figure \[fig:model\]]{} shows an overview of the model.
We jointly train an encoder $E_R$, context model $C$ and interpolation network $D_R$. The encoder sees the same information as the interpolation network, which allows it to compress just the missing information, and avoid a redundant encoding. Formally, we follow the progressive compression framework of Toderici [@toderici2017full], and train a variable bitrate encoder and decoder conditioned on the warped context $\tilde f$: $$\begin{aligned}
r_0 &:= I\\
b_k &:= E_R(r_{k-1}, \tilde f_1, \tilde f_2, g_{k-1}), & r_k &:= r_{k-1} - D_R(b_k, \tilde f_1, \tilde f_2, h_{k-1}), & \text{for }k=1,2,\ldots
\end{aligned}$$
This framework allows the encoder to learn a variable rate compression at high reconstruction quality. The interpolation network generally requires fewer bits to encode temporally close images and more bits for images that are farther apart. In one extreme, when key frames do not provide any meaningful signal to the interpolated frame, our residual motion compensated interpolation reduces to image compression. In the other extreme, when the image content does not change, our algorithm reduces to a vanilla interpolation, and requires close to zero bits.
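A sketch of this conditional progressive coding, with hypothetical stand-in callables for $E_R$ and $D_R$ (Conv-LSTM states omitted); because both sides also see the warped contexts $\tilde f_1, \tilde f_2$, the code only needs to carry what the references miss:

```python
import numpy as np

def conditional_progressive_compress(I, f1w, f2w, encode, decode, K):
    """Residual motion-compensated interpolation sketch: progressive residual
    coding where encoder and decoder are both conditioned on the warped
    context features of the two reference frames."""
    r = I.copy()                                   # r_0 := I
    codes, recon = [], np.zeros_like(I)
    for _ in range(K):
        b = encode(r, f1w, f2w)                    # b_k := E_R(r_{k-1}, f~1, f~2)
        d = decode(b, f1w, f2w)                    # D_R(b_k, f~1, f~2)
        r = r - d                                  # r_k := r_{k-1} - d
        recon = recon + d
        codes.append(b)
    return codes, recon
```

When the contexts already explain the frame, the residual starts near zero and few iterations (few bits) suffice; when they explain nothing, the loop degenerates to plain image compression, matching the two extremes described above.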
In the next section, we use this to our advantage, and design a hierarchical interpolation scheme, maximizing the number of temporally close interpolations.
![ We apply the interpolation in a hierarchical manner. Each level in the hierarchy uses previously decompressed images.[]{data-label="fig:hierarchy"}](keynote/decode_hier.pdf){width="\linewidth"}
Hierarchical interpolation
--------------------------
The basic idea of hierarchical interpolation is simple: We interpolate some frames first, and use them as key-frames for the next level of interpolations. See [Figure \[fig:hierarchy\]]{} for an example. Each interpolation model $\Mcal_{a,b}$ references $a$ frames into the past and $b$ frames into the future. There are a few things we need to trade off. First, every level in our hierarchical interpolation compounds error. The shallower the hierarchy, the fewer errors compound. In practice, error propagation with more than three levels in the hierarchy significantly reduces the performance of our codec. Second, we need to train a different interpolation network $\Mcal_{a,b}$ for each temporal offset $(a, b)$, as different interpolations behave differently. To maximally use each trained model, we repeat the same temporal offsets as often as possible. Third, we need to minimize the sum of temporal offsets used in interpolation. The compression rate directly relates to the temporal offset, hence minimizing the temporal offsets reduces the bitrate.
Considering just the bitrate and the number of interpolation networks, the optimal hierarchy is a binary tree cutting the interpolation range in two at each level. However, this cannot interpolate more than $n=2^3=8$ consecutive frames, without significant error propagation. We extend this binary structure to $n=12$ frames, by interpolating at a spacing of three frames in the last level of the hierarchy. For a sequence of four frames $I_1,\ldots,I_4$, we train an interpolation model $\Mcal_{1,2}$ that predicts frame $I_2$, given $I_1$ and $I_4$. We use the exact same model $\Mcal_{1,2}$ to predict $I_3$, but flip the conditioned images $I_4$ and $I_1$. This yields an equivalent model $\Mcal_{2,1}$ predicting the third instead of the second image in the series. Combining this with an interpolation model $\Mcal_{3,3}$ and $\Mcal_{6,6}$ in a hierarchy, we extend the interpolation range from $n=8$ frames to $n=12$ frames while keeping the same number of models and levels. We tried applying the same trick to all levels in the hierarchy, extending the interpolation to $n=27$ frames, but performance dropped, as we had more distant interpolations. To apply this to a full video of $N$ frames, we divide them into $\ceil{N/n}$ groups of pictures (GOPs). Two consecutive groups share the same boundary I-frame. We apply our hierarchical interpolation to each group independently.
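The resulting schedule for one group of $n=12$ pictures can be sketched as follows; the model names and tuple layout are illustrative, not the implementation's actual data structures.

```python
def gop_schedule(n=12):
    """Hierarchical interpolation schedule for one group of n=12 pictures:
    returns (target, left_ref, right_ref, model) tuples in decode order.
    Frames 0 and n are I-frames; the last level reuses M_{1,2} with flipped
    references to play the role of M_{2,1}."""
    sched = [(n // 2, 0, n, "M_6,6")]            # level 1
    for s in (0, n // 2):                        # level 2: spacing 3
        sched.append((s + 3, s, s + 6, "M_3,3"))
    for s in range(0, n, 3):                     # level 3: M_{1,2} reused
        sched.append((s + 1, s, s + 3, "M_1,2"))
        sched.append((s + 2, s + 3, s, "M_1,2 (flipped)"))
    return sched
```

Every R-frame is covered exactly once, and every reference is decoded before it is used, which is what makes the three-level, $n=12$ structure valid.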
#### Bitrate optimization.
Each interpolation model at level $l$ of the hierarchy can choose to spend $K_l$ bits to encode an image. Our goal is to minimize the overall bitrate, while maintaining a low distortion for all encoded frames. The challenge here is that each selection of $K_l$ affects all lower levels, as errors propagate. Selecting a globally optimal set $\cbr{K_l}$ thus requires iterating through all possible combinations, which is infeasible in practice.
We instead propose a heuristic bitrate selection based on beam search. For each level we choose from $m$ different bitrates. We start by enumerating all $m$ possibilities for the I-frame model. Next, we expand the first interpolation model with all $m$ possible bitrates, leading to $m^2$ combinations. Not all of these combinations lead to a good MS-SSIM per bitrate, and we discard combinations not on the envelope of the MS-SSIM vs. bitrate curve. In practice, only $O(m)$ combinations remain. We repeat this procedure for all levels of the hierarchy. This reduces the search space from $m^L$ to $O(Lm^2)$ for an $L$-level hierarchy. In practice, this yields sufficiently good bitrates.
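A sketch of this beam search with envelope pruning; the per-level `quality(K_l, parent_quality)` callables are hypothetical stand-ins for actually encoding frames and measuring MS-SSIM, and model how a parent's error propagates downward.

```python
def pareto_prune(cands):
    """Keep only (bitrate, quality, choices) triples on the envelope: after
    sorting by bitrate, a candidate survives only if it improves quality
    over every cheaper survivor."""
    kept = []
    for bpp, q, ks in sorted(cands):
        if not kept or q > kept[-1][1]:
            kept.append((bpp, q, ks))
    return kept

def beam_search_bitrates(levels, rates):
    """Heuristic bitrate selection sketch: expand each hierarchy level with
    all m candidate bitrates, then discard combinations off the
    quality-vs-bitrate envelope, keeping the beam to O(m) entries."""
    beam = [(0.0, 1.0, [])]            # (total bpp, quality, chosen K_l list)
    for level in levels:
        expanded = [(bpp + k, level(k, q), ks + [k])
                    for bpp, q, ks in beam for k in rates]
        beam = pareto_prune(expanded)
    return beam
```

The final beam is the estimated rate-distortion frontier; picking an operating point on it selects one $K_l$ per level.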
Implementation
--------------
#### Architecture.
Our encoder and decoder (interpolation network) architecture follows the image compression model of Toderici [@toderici2017full]. While Toderici use $L=32$ latent bits to compress an image, we found that for interpolation, $L=8$ bits suffice for distance $3$ and $L=16$ for distances $6$ and $12$. This yields a bitrate of $0.03125$ bits per pixel (BPP) and $0.0625$ BPP per iteration, respectively.
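These bitrates follow directly from the bottleneck shape: each iteration emits $L$ binary channels at $1/16$ of the spatial resolution in each dimension, i.e. $L/16^2$ bits per input pixel.

```python
def bpp_per_iteration(L, downscale=16):
    """Bits per pixel contributed by one coding iteration: L binary channels
    over a (W/16) x (H/16) grid amount to L / 16**2 bits per input pixel."""
    return L / float(downscale ** 2)
```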
We use the original U-net [@ronneberger2015u] as the context model. To speed-up training and save memory, we reduce the number of channels of all filters by half. We did not observe any significant performance degradation.
To make it compatible with our architecture, we remove the final output layer and take the feature maps at resolutions $2\times$, $4\times$, and $8\times$ smaller than the original input image.
#### Conditional encoder and decoder.
To add the information of the context frames into the encoder and decoder, we fuse the U-net features with the individual Conv-LSTM layers. Specifically, we perform the fusion before each Conv-LSTM layer by concatenating the corresponding U-net features of the same spatial resolution. To increase the computational efficiency, we selectively turn some of the conditioning off in both encoder and decoder. This was tuned for each interpolation network; see supplementary material for details.
To help the model compare context frames and the target frame side-by-side, we additionally stack the two context frames with the target frame, resulting in a $9$-channel image, and use that instead as the encoder input.
#### Entropy coding.
Since the model is fully-convolutional, it uses the same number of bits for all locations of an image. This disregards the fact that information is not distributed uniformly in an image. Following Mentzer [@mentzer2018conditional], we train a 3D Pixel-CNN on the $\cbr{0,1}^{\nicefrac{W}{16} \times \nicefrac{H}{16} \times L}$ binary representations to obtain the probability of each bit sequentially. We then use this probability with adaptive arithmetic coding to encode the feature map. See supplementary material for more details.
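The payoff of entropy coding can be sketched by the ideal arithmetic-coding cost, $-\sum_i \log_2 p(b_i)$, under the sequential PixelCNN-style probability estimates; this toy ignores the coder's small per-message overhead.

```python
import numpy as np

def expected_code_length(bits, probs):
    """Ideal adaptive-arithmetic-coding cost in bits for a binary feature
    map: -sum log2 p(b_i), where probs[i] is the model's probability that
    bit i equals 1. Uniform p = 0.5 degenerates to one coded bit per bit;
    sharper estimates on redundant regions give much shorter codes."""
    p = np.where(bits == 1, probs, 1.0 - probs)
    return float(-np.log2(p).sum())
```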
#### Motion compression.
We store forward and backward block motion estimates as a lossless 4-channel WebP [@webp] image. For optical flow we train a separate lossy deep compression model, as lossless WebP was unable to compress the flow field.
Experiments
===========
[\[sec:exp\]]{} In this section, we perform a detailed analysis of our series of interpolation models ([Section \[sec:ablation\]]{}), and present both quantitative ([Section \[sec:quant\]]{}) and qualitative ([Section \[sec:qual\]]{}) evaluations of our approach.
#### Datasets and Protocol.
We train our models using videos from the Kinetics dataset [@carreira2017quo]. We only use videos with a width and height greater than $720$px. To remove artifacts induced by previous compression, we downsample those high-resolution videos to $352\times288$px. We allow the aspect ratio to change. The resulting dataset contains 2.8M frames in 75K videos. We train on 65K videos, use 5K for validation, and 5K for testing. For faster testing on Kinetics, we only use a single group of $n=12$ pictures per video.
We additionally test our method on two raw video datasets, Video Trace Library (VTL) [@vtl] and Ultra Video Group (UVG) [@uvg]. The VTL dataset contains $\sim40$K frames of resolution $352\times288$ in 20 videos. The UVG dataset contains $3,900$ frames of resolution $1920\times1080$ in 7 videos.
We evaluate our method based on the compression rate in bits per pixel (BPP), and the quality of compression in multi-scale structural similarity (MS-SSIM) [@wang2003multiscale] and peak signal-to-noise ratio (PSNR). We report the average performance of all videos, as opposed to the average of all frames, as our final performance. We use a GOP size of $n=12$ frames, for all algorithms unless otherwise stated.
#### Training Details.
All of our models are trained from scratch for 200K iterations using ADAM [@kingma2014adam], with gradient norms clipped at $0.5$. We use a batch size of 32 and a learning rate of 0.0005, which is divided by 2 when the validation MS-SSIM plateaus. We augment the data through horizontal flipping. For image models we train on $96\times96$ random crops, and for the interpolation models we train on $64\times64$ random crops. We train all models with $10$ reconstruction iterations.
(Plots: MS-SSIM vs. BPP on the VTL dataset, comparing the compression model variants, the motion sources for the $\Mcal_{6,6}$ model, and the individual interpolation models with and without entropy coding.)

[\[fig:ablation\]]{}

[\[fig:entropy\]]{}
Ablation study {#sec:ablation}
--------------
We first evaluate the series of image interpolation models in [Section \[sec:model\]]{} on the VTL dataset. [Figure \[fig:ablation\]]{} shows the results.
We can see that the image compression model requires by far the highest BPP to achieve high visual quality, and performs poorly in the low-bitrate region. This is not surprising, as it does not exploit any temporal redundancy and needs to encode everything from scratch. Vanilla interpolation does not work much better. We present results for interpolation from 1 to 4 frames, using the best image compression model. While it exploits temporal redundancy, it fails to accurately reconstruct the image. Motion-compensated interpolation works significantly better: the additional motion information disambiguates the interpolation and improves the accuracy. The presented BPP includes the size of the motion vectors. Our final model efficiently encodes residual information and makes good use of hierarchical referencing. It achieves the best performance when combined with entropy coding. Note the large performance gap between our method and the image compression model in the low-bitrate regime: our model effectively uses context information and achieves good performance even with very few bits per pixel.
As a sanity check, we further implemented a simple deep codec that uses image compression to encode the residual $\Rcal$ of traditional codecs. This simple baseline stores the video as encoded residuals and compressed motion vectors, in addition to key frames compressed by a separate deep image compression model. The residual model struggles to learn patterns from noisy residual images, and works worse than the image-only compression model. This suggests that trivially extending deep image compression to videos is not sufficient. Our end-to-end interpolation network performs considerably better.
#### Motion.
Next, we analyze different motion estimation models, and compare optical flow to block motion vectors. For optical flow, we use the OpenCV implementation of Farnebäck's algorithm [@farneback2003two]. For motion compensation, we use the same algorithm as H.264. [Figure \[fig:ablation\]]{} shows the results of the $\Mcal_{6,6}$ model with both motion sources. Using motion information clearly helps improve the performance of the model, despite the overhead of motion compression. Block motion estimation (MV) works significantly better than the optical-flow-based model (flow). Almost all of this performance gain comes from better compressible motion information. The block motion estimates are smaller, easier to compress, and fit in a lossless compression scheme.
To understand whether the worse performance of optical flow is due to errors in flow compression or to properties of the flow itself, we further measure the *hypothetical* performance upper bound of an optical flow based model *assuming* a lossless flow compression at no additional cost (flow$^\star$). This upper bound performs better than motion vectors, leaving room for improvement through compressible optical flow estimation. However, finding such a compressible flow estimate is beyond the scope of this paper. In the rest of this section, we use block motion estimates in all our experiments.
#### Individual interpolation models and entropy coding.
[Figure \[fig:entropy\]]{} shows the performance of different interpolation models with and without entropy coding. For all models, entropy coding saves up to $52\%$ BPP in the low-bitrate region, and at least $10\%$ BPP in the high-bitrate region. More interestingly, short time-frame interpolation is almost free, achieving the same visual quality as an image-based model at one or two orders of magnitude fewer bits per pixel. This shows that most of our bitrate savings come from the interpolation models at the lower levels of the hierarchy.
Comparison to prior work {#sec:quant}
------------------------
(Plots: MS-SSIM and PSNR vs. BPP for our method, HEVC, H.264, MPEG-4 Part 2, H.261, and deep image compression on the UVG, VTL, and Kinetics datasets.)

[\[fig:uvg\]]{}

[\[fig:vtl\]]{}

[\[fig:kinetics\]]{}
We now quantitatively evaluate our method on all three datasets and compare it with today’s prevailing codecs: HEVC (H.265), H.264, MPEG-4 Part 2, and H.261. For a consistent comparison, we use the same GOP size of 12 for H.264 and HEVC. We test H.261 only on VTL and Kinetics-5K, since it does not support the high-resolution ($1920\times1080$) videos of the UVG dataset.
Figures \[fig:uvg\]-\[fig:kinetics\] present the results. Despite its simplicity, our model greatly outperforms MPEG-4 Part 2 and H.261, performs on par with H.264, and comes close to state-of-the-art HEVC. In particular, on the high-resolution UVG dataset, it outperforms H.264 by a good margin and matches HEVC in terms of PSNR.
Our testing datasets are not just large in scale ($>$100K frames from $>$5K videos); they also span a wide range of resolutions (from $352\times288$ to $1920\times1080$), ages (from the 1990s for most VTL videos to 2018 for Kinetics), quality levels (from professional UVG to user-uploaded Kinetics), and content (from scenes and animals to the 400 human activities in Kinetics). Our model, trained on only one dataset, works well on all of them.
\[sec:qual\]
[@cccc@]{} Ground truth& MPEG-4 Part 2&H.264& Ours\
![Comparison of compression results at $0.12\pm 0.01$ BPP. Our method shows faithful images without any blocky artifacts. (Best viewed on screen.) More examples and demo videos showing temporal coherence are available at <https://chaoyuaw.github.io/vcii/>.[]{data-label="fig:qual"}]
(Image grid: six example rows, one per clip, drawn from two Kinetics videos, two VTL CIF sequences (akiyo, highway), and two UVG 1080p sequences (YachtRide, Jockey), each compared across the four codecs in the header above.)
Finally, we present qualitative results of three of the best-performing models, MPEG-4 Part 2, H.264, and ours in Fig. \[fig:qual\]. All models here use $0.12\pm 0.01$ BPP. We can see that on all datasets, our method shows faithful images without any blocky artifacts. It greatly outperforms MPEG-4 Part 2 without bells and whistles, and matches state-of-the-art H.264.
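For reference, the bits-per-pixel (BPP) figure used throughout the comparison is simply the total coded size in bits divided by the pixel count over all frames. The helper below is our illustration, not the paper's code; the byte count is a made-up example chosen to land exactly on 0.12 BPP.

```python
# Illustrative helper (not from the paper): BPP = total bits / total pixels.
def bpp(total_bytes: int, width: int, height: int, n_frames: int) -> float:
    return total_bytes * 8 / (width * height * n_frames)

# A 352x288 (CIF) clip of 100 frames coded to 152,064 bytes is exactly 0.12 BPP.
rate = bpp(152_064, 352, 288, 100)
```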
Conclusion
==========
This paper presents, to the best of our knowledge, the first end-to-end trained deep video codec. It relies on repeated deep image interpolation. To disambiguate the interpolation, we encode a few compressible bits representing the information that cannot be inferred from the neighboring key frames. This yields a faithful reconstruction instead of pure hallucination. The network is trained directly to optimize reconstruction, without prior engineering knowledge.
Our deep codec is simple, and it outperforms prevailing codecs such as MPEG-4 Part 2 and H.261 while matching state-of-the-art H.264. We have not considered engineering aspects such as runtime or real-time compression; we consider these important directions for future research.
In short, video compression powered by deep image interpolation achieves state-of-the-art performance without sophisticated heuristics or excessive engineering.
Acknowledgment {#acknowledgment .unnumbered}
==============
We would like to thank Manzil Zaheer, Angela Lin, Ashish Bora, and Thomas Crosley for their valuable comments and feedback on an early version of this paper. This work was supported in part by Berkeley DeepDrive and an equipment grant from Nvidia.
[**Appendix**]{}
Model Details
=============
#### Context feature fusion.
In experiments, we found that fusing the U-net features at resolution $\frac{W}{2}\times \frac{H}{2}$ yields good performance for all models. Fusing features at more resolutions improves performance slightly, but requires more memory and computation. In this paper, we additionally use $\frac{W}{4}\times \frac{H}{4}$ features for the encoder of $\Mcal_{1,2}$ and $\Mcal_{3,3}$, and $\frac{W}{4}\times \frac{H}{4}$ and $\frac{W}{8}\times \frac{H}{8}$ features for the decoder of $\Mcal_{3,3}$. Models are selected based on their performance on the validation set.
#### Probability estimation in AAC.
Adaptive arithmetic coding (AAC) relies on a good probability estimate for each bit, given previously decoded bits, to efficiently encode a binary representation. To estimate the probability, we follow Mentzer et al. and use a 3D PixelCNN model [@mentzer2018conditional; @oord2016pixel]. The model contains 11 layers of masked convolution. Each layer has 128 channels and is followed by batch normalization [@ioffe2015batch] and ReLU. We train the models using Adam [@kingma2014adam] with a learning rate of 0.0001 for 30K iterations.
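Intuitively, an ideal arithmetic coder spends about $-\log_2 p$ bits per binary symbol, so a probability model that matches the data directly shortens the code. The sketch below is our toy illustration of this principle, not the 3D PixelCNN used in the paper:

```python
import math

# Toy illustration (not the paper's model): the ideal AAC code length equals
# the cross-entropy of the probability estimate, so a better-matched estimate
# of P(bit = 1) yields fewer coded bits.
bits = [1, 1, 0, 1, 1, 1, 0, 1]

def code_length(bits, p_one):
    """Ideal arithmetic-coding length in bits under a fixed estimate P(bit=1)=p_one."""
    return sum(-math.log2(p_one if b else 1.0 - p_one) for b in bits)

uniform = code_length(bits, 0.5)   # no model: exactly 1 bit per symbol
matched = code_length(bits, 0.75)  # matches the empirical frequency 6/8
```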
Bitrate Optimization
====================
We present detailed results of bitrate optimization. shows the explored performance at the first level of the hierarchy. We pick good combinations from the envelope of the curves, and they proceed to the next level. presents the final combinations. We use these combinations for the experiments in our paper unless otherwise noted.
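The envelope selection described above can be sketched as a Pareto filter over (bitrate, quality) points: keep only combinations that are not dominated by a cheaper-or-equal point of higher quality. The helper below is our hypothetical illustration of that procedure, not the authors' code, and the sample points are invented.

```python
# Hypothetical sketch of envelope selection: among (bpp, psnr) points from
# different model combinations, keep the non-dominated (Pareto-optimal) ones.
def pareto_envelope(points):
    """points: list of (bpp, psnr); returns the upper-left envelope."""
    best, envelope = -float("inf"), []
    # Sort by ascending bitrate; on ties, try the higher-quality point first.
    for bpp, psnr in sorted(points, key=lambda p: (p[0], -p[1])):
        if psnr > best:  # strictly better quality than anything cheaper
            envelope.append((bpp, psnr))
            best = psnr
    return envelope

points = [(0.10, 30.0), (0.12, 29.5), (0.15, 32.0), (0.15, 31.0), (0.20, 33.5)]
envelope = pareto_envelope(points)
```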
(Plot: the explored rate–distortion points and their envelope, drawn from plotdata/envelope.data through plotdata/envelope10.data.)
[@lllll@]{} $K_0$&$K_1$&$K_2$&$K_3$&Average bitrate\
5 & 3 & 2 & 1 & 0.109\
7 & 3 & 2 & 2 & 0.151\
10 & 4 & 2 & 2 & 0.188\
10 & 10 & 2 & 2 & 0.219\
10 & 10 & 6 & 2 & 0.260\
10 & 10 & 10 & 2 & 0.302\
---
author:
- 'M. Weidinger'
- 'F. Spanier'
bibliography:
- '14299.bib'
date: 'Received 22 February 2010 / Accepted 6 April 2010'
title: 'Modelling the variability of 1ES1218+30.4'
---
Introduction
============
Blazars are a special class of active galactic nuclei (AGN) exhibiting a spectral energy distribution (SED) strongly dominated by nonthermal emission across a wide range of wavelengths, from radio waves to gamma rays, together with rapid, large-amplitude variability. The source of this emission is presumably the relativistic jet, oriented at a narrow angle to the observer's line of sight.\
In high-peaked BL Lac objects (HBLs) the SED shows a double-hump structure as its most notable feature, with the first hump in the UV- to X-ray regime and the second hump in the gamma-ray regime. Indeed, a substantial fraction of the known nearby HBLs have already been detected with Cherenkov telescopes such as H.E.S.S., MAGIC or VERITAS. The origin of the first hump is mostly undisputed: nonthermal, relativistic electrons in the jet emit synchrotron radiation. The origin of the second hump is still controversially debated. Up to now, two kinds of models have been discussed: leptonic [e.g. @maraschi92] and hadronic [e.g. @mannheim93] ones, the latter mostly applied to other subclasses of blazars.\
Another important feature of AGNs in general and HBLs in particular is their strong variability, with dynamical timescales ranging from minutes to years. This requires complex models, which obviously have to include time dependence, but it also gives us the chance to understand the mechanisms that drive AGNs. We will apply a self-consistent leptonic model to new data observed for the source , because such models are the ones favoured for HBLs.\
\
The source HBL was discovered as a candidate BL Lac object on the basis of its X-ray emission and has been identified with the X-ray source [@wilson79; @ledden81]. It was first observed at VHE energies with the MAGIC telescope in January 2005 [@magic1218] and later by VERITAS [@veritas1218]. Coverage of the optical/X-ray regime is provided by BeppoSAX [@beppo05] and SWIFT [@swift07]; unfortunately, the data are not always simultaneous. During the observations from December 2008 to April 2009, VERITAS also observed , finding time variability [@veritas2010]. The observations from the MAGIC telescope have previously been modelled by @michl1218. @veritas2010 claim that their new observations exhibiting variability challenge the previous models. We will show that a time-dependent model using a self-consistent treatment of electron acceleration is able to reproduce the new VERITAS data.\
\
We present the kinetic equation, which we solve numerically, describing the synchrotron self-Compton emission (Sect. \[sec:model\]). In Sect. \[sec:results\] we apply our code to , taking the VERITAS data into account, and give a set of physical parameters for the best fit. Finally, we discuss our results in the light of particle acceleration theory and the multiwavelength features.
Model {#sec:model}
=====
Here we will give a brief description of the model used, for a complete overview see [@weidinger2010a; @weidinger2010b].\
We start with the relativistic Vlasov equation [see e.g. @schlick02] in the one-dimensional diffusion approximation [e.g. @schlickeiser84], where the relativistic approximation $p\approx \gamma m c$ is used. This kinetic equation is then solved time-dependently in two spatially distinct zones, the smaller acceleration zone and the radiation zone, which are assumed to be spherical and homogeneous. Both contain isotropically distributed electrons and a randomly oriented magnetic field, as is common for these models. All calculations are made in the rest frame of the blob.\
Electrons entering the acceleration zone (radius $R_{\text{acc}}$) from the upstream of the jet are continuously accelerated through diffusive shock acceleration. This extends the model of @kirk98 with a stochastic part. The energy gain due to the acceleration is balanced by radiative (synchrotron) and escape losses, the latter scaling with $t_{\text{esc}}= \eta R_{\text{acc}}/c$ with $\eta=10$ as an empirical factor reflecting the diffusive nature of particle loss. Escaping electrons completely enter the radiation zone (radius $R_{\text{rad}}$) downstream of the acceleration zone.\
Here the electrons suffer synchrotron losses as in the acceleration zone and additionally inverse-Compton losses, but they do not undergo acceleration. Pair production and other contributions do not alter the SED under typical SSC conditions and are neglected [@boettcher02]. The SED in the observer’s frame is calculated by boosting the self-consistently calculated photons into the observer’s frame and correcting for the redshift $z$: $I_{\nu_{\text{obs}}} = \delta^3 h\nu_{\text{obs}}/(4\pi)N_{\text{ph}}$ with $\nu_{\text{obs}}=\delta/(1+z) \nu$. The acceleration zone does not contribute directly to $I_{\text{obs}}$, due to the $R_{\text{i}}^2$ dependence of the observed flux at a distance $r$ ($F_{\nu_{\text{obs}}}(r) = \pi I_{\nu_{\text{obs}}}R_{\text{rad}}^2r^{-2}$) and the small size of the acceleration zone. The kinetic equation in the acceleration zone is $$\begin{aligned}
\label{acczone}
\frac{\partial n_e(\gamma, t)}{\partial t} = & \frac{\partial}{\partial \gamma} \left[( \beta_s \gamma^2 - t_{\text{acc}}^{-1}\gamma ) \cdot n_e(\gamma, t) \right] + \nonumber \\ &\frac{\partial}{\partial \gamma} \left[ [(a+2)t_{\text{acc}}]^{-1}\gamma^2 \frac{\partial n_e(\gamma, t)}{\partial \gamma}\right] + \nonumber \\
& Q_0\,\delta(\gamma-\gamma_0) - t_{\text{esc}}^{-1}n_e(\gamma, t)~\text{.}\end{aligned}$$ The injected electrons at $\gamma_0$, as the blob propagates through the jet, are considered via $Q_{\text{inj}}(\gamma , t) := Q_0 \delta(\gamma - \gamma_0)$. The synchrotron losses are calculated using Eq. . $$\begin{aligned}
\label{synchrotronlosses}
P_s(\gamma) & = \frac{1}{6 \pi} \frac{\sigma_{\text{T}}B^2}{mc}\gamma^2 = \beta_s \gamma^2\end{aligned}$$ with the Thomson cross-section $\sigma_{\text{T}}$. The characteristic timescale for the acceleration, $t_{\text{acc}} = \left(v_s^2/(4K_{||})+2v_A^2/(9K_{||}) \right)^{-1}$, of the system is found by comparing Eq. with @schlickeiser84, with the parallel spatial diffusion coefficient $K_{||}$ not depending on $\gamma$ when the hard-sphere approximation is used. The characteristic timescale has an additional factor ($\propto v_A^2$) arising from the Fermi-II processes compared to shock acceleration by itself. The stochastic part of the acceleration also gives rise to the second row in Eq. , while the first row mainly depends on Fermi-I processes. This dependence of $t_{\text{acc}}$ is important for the interpretation of the resulting electron spectra, e.g. of their slopes (depending on $t_{\text{acc}}/t_{\text{esc}}$) or the maximum energies (depending on $1/(t_{\text{acc}}\beta_s)$), see @weidinger2010a for details. For modelling SEDs and lightcurves it is primarily important to ensure sensible values for $t_{\text{acc}}$. Unlike in @drury99, the energy dependence of the escape losses is neglected because we do not expect a pileup as suggested in @schlickeiser84 under typical SSC conditions. $v_s$ and $v_A$ are the shock and Alfvén speeds, respectively. Hence $a$ in Eq. measures the efficiency of the shock acceleration compared to stochastic processes. Setting $v_A = 0$, i.e. $a \rightarrow \infty$, will result in a shock-only model like @kirk98.\
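As an order-of-magnitude illustration of the balance just described (our sketch, in cgs units, using the parameter values later listed in Table \[tab:1218\]), equating the synchrotron loss rate $\beta_s\gamma^2$ with the acceleration rate $\gamma/t_{\text{acc}}$ gives $\gamma_{\max}\sim 1/(t_{\text{acc}}\beta_s)$:

```python
import math

# Order-of-magnitude check (cgs) of gamma_max ~ 1/(t_acc * beta_s), with
# beta_s = sigma_T * B^2 / (6 * pi * m_e * c) as in the synchrotron-loss formula.
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
M_E     = 9.109e-28   # electron mass [g]
C       = 2.998e10    # speed of light [cm/s]

B     = 0.12                    # magnetic field [G]        (Table 1)
R_acc = 6.0e14                  # acceleration-zone radius [cm] (Table 1)
eta   = 10.0                    # empirical escape factor
t_esc = eta * R_acc / C         # escape timescale [s]
t_acc = 1.11 * t_esc            # from t_acc / t_esc = 1.11 (Table 1)

beta_s = SIGMA_T * B**2 / (6 * math.pi * M_E * C)  # synchrotron coefficient [1/s]
gamma_max = 1.0 / (t_acc * beta_s)                 # ~2e5, plausible for an HBL
```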
\
This model takes into account a much more confined shock region: Fermi-I acceleration will probably not occur over the whole blob in a real blazar, but rather in a small region at the blob’s front. Neglecting acceleration simplifies the kinetic equation in the radiation zone to $$\begin{aligned}
\label{radzone1}
\frac{\partial N_e(\gamma, t)}{\partial t} = & \frac{\partial}{\partial \gamma}\left[\left(\beta_s \gamma^2 + P_{\text{IC}}(\gamma)\right) \cdot N_e(\gamma, t) \right] \nonumber\\&- \frac{N_e(\gamma, t)}{t_{\text{rad,esc}}} + \left(\frac{R_{\text{acc}}}{R_{\text{rad}}} \right)^3\frac{n_e(\gamma, t)}{t_{\text{esc}}}~\text{.}\end{aligned}$$ $P_{\text{IC}}$ accounts for the inverse-Compton losses of the electrons additionally occurring (beside the synchrotron losses) [e.g. @schlick02]: $$\begin{aligned}
\label{iclosses}
P_{\text{IC}}(\gamma) & = m^3c^7h \int_{0}^{\alpha_{max}}{d\alpha \alpha \int_0^{\infty}{d\alpha_1 N_{\text{ph}}(\alpha_1) \frac{dN(\gamma,\alpha_1)}{dtd\alpha}}}~\text{.}\end{aligned}$$ The photon energies are rewritten in terms of the electron rest mass, $h \nu = \alpha m c^2$ for the scattered photons and $h \nu = \alpha_1 m c^2$ for the target photons respectively. Equation is solved numerically using the full Klein-Nishina cross-section for a single electron scattering off a photon field [see e.g. @jones68]. Here $\alpha_{max}$ accounts for the kinematic restrictions on IC scattering. In analogy to the acceleration zone the catastrophic losses are considered via $t_{\text{esc,rad}} = \eta R_{\text{rad}}/c$ with $\eta = 10$. $t_{\text{esc,rad}}$ is the responding timescale of the electron system, which is proportional to the variability timescale in the observer’s frame [see e.g. @var95]: $$\begin{aligned}
\label{observersframe}
t_{\text{var}} \propto \frac{t_{\text{esc,rad}}}{\delta}~.\end{aligned}$$ To determine the time-dependent model SED of blazars the partial differential equation for the differential photon number density has to be solved time-dependently, which can be done numerically. The PDE can be obtained from the radiative transfer equation making use of the isotropy of the blob $$\begin{aligned}
\label{radzone2}
\frac{\partial N_{\text{ph}}(\nu, t)}{\partial t} & = R_s - c \alpha_{\nu} N_{\text{ph}}(\nu, t) + R_c - \frac{N_{\text{ph}}(\nu, t)}{t_{\text{ph,esc}}}~\text{,}\end{aligned}$$ where $R_s$ and $R_c$ are the production rates for synchrotron and inverse-Compton photons, respectively. $R_s$ is calculated using the well-known Melrose approximation, and the inverse-Compton production rate $R_c$ is treated in the most exact way, i.e. using the full Klein-Nishina cross-section, see @weidinger2010a. Below a critical energy the obtained spectrum is self-absorbed due to synchrotron self-absorption, which is described by $\alpha_{\nu}$ [@weidinger2010a; @michl1218]. The photon-escape timescale is set to the light-crossing time.
Results {#sec:results}
=======
Using the parameters summarized in Table \[tab:1218\] we were able to fit the emission of as a steady state with our SSC model, see Fig. 1. We used all the archival data from BeppoSAX and SWIFT in the X-ray band and the MAGIC 2006, VERITAS 2009 as well as the newly released VERITAS 2010 data in the VHE to model the SED of [@beppo05; @swift07; @magic1218; @veritas1218; @veritas2010]. The derived SED is absorbed in the VHE using the EBL model of @primack05 for the corresponding redshift of .\
The parameters of our SSC model are well within the standard SSC parameter region, with an equipartition parameter of $0.02$. Even though PIC and MHD simulations suggest a higher magnetic field energy compared to the particle energy (in the range of 0.1), this is a common assumption in SSC models, though it has to be kept in mind with regard to, e.g., the stability of the blob. If one wishes to enforce higher equipartition parameters, one could use the model of @nlschlick. In order to allow strong shocks to form, $v_A < v_S$ must be fulfilled, which is the case for $a=10$.\
\
Due to the relatively small deviation (within the error margins) between the MAGIC 2005/VERITAS 2008 data and the averaged VHE data from the VERITAS 2009 campaign, we find a steady state to be the most plausible way to model the emission, i.e. the small fluctuations (see the overall lightcurve in @veritas2010) do not contribute significantly to the averaged observed SEDs. In the Fermi LAT energy regime our model yields a photon index of $\alpha_{\text{Fer}} = -1.69$, which agrees well with the Fermi measurement of $-1.63\pm0.12$ [@fermiindex].
[1]{}[@ccccccc]{} $Q_0 (\text{cm}^{-3})$ & $B(\text{G})$ & $R_{\text{acc}}(\text{cm})$ & $R_{\text{rad}}(\text{cm})$ & $t_{\text{acc}}/t_{\text{esc}}$ & $a$ & $\delta$\
------------------------------------------------------------------------
$6.25 \cdot 10^{4}$ & $0.12$ & $6.0 \cdot 10^{14}$ & $3.0 \cdot 10^{15}$ & $1.11$ & $10$ & $44$\
\[tab:1218\]
\
\
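As a quick consistency check of the variability scaling $t_{\text{var}} \propto t_{\text{esc,rad}}/\delta$ with the Table \[tab:1218\] parameters (our sketch; the redshift value below is an assumption, as it is not stated in the text), the implied observed timescale is a fraction of a day, comfortably below the $\approx 1.5$-day flare:

```python
# Rough check of t_var ~ t_esc,rad * (1 + z) / delta. Parameters from Table 1;
# z = 0.182 for 1ES 1218+30.4 is our assumption (not given in the text).
C     = 2.998e10   # speed of light [cm/s]
R_rad = 3.0e15     # radiation-zone radius [cm]
delta = 44.0       # Doppler factor
eta   = 10.0       # empirical escape factor
z     = 0.182      # assumed source redshift

t_esc_rad  = eta * R_rad / C               # ~1e6 s in the blob frame
t_var_obs  = t_esc_rad * (1 + z) / delta   # observer's frame [s]
t_var_days = t_var_obs / 86400.0           # ~0.3 d, below the ~1.5 d flare
```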
The lightcurve of @veritas2010 shows a relatively strong outburst at MJD $\approx 54861$. Starting with the steady-state emission (solid line, Fig. 1; parameters: Table \[tab:1218\]), we injected additional electrons (raising $Q_0$) into the emission region at low $\gamma_0 \approx 3$. As the blob evolves in time, the model emission at higher energies rises and drops off again when the injection finally relaxes to the initial $Q_0$. This process can be interpreted as density fluctuations along the jet axis, and it fits the flare.
![Model SED of (black solid line) as derived using the described model (see Sect. \[sec:model\]) and the parameters shown in Table \[tab:1218\]. The VHE parts of the model SEDs have been absorbed using the EBL model of @primack05. The BeppoSAX data are from @beppo05, SWIFT from @swift07, MAGIC from @magic1218, VERITAS 2009 from @veritas1218 and the blue dots are the new VERITAS 2010 data from @veritas2010. The dashed red curve shows the time integrated SED over the strong outburst shown in Fig. 2, as measured by VERITAS in 2009.](14299_f1){width="8.3cm"}
\
We found that nearly doubling the injected electron number density in a $Q_0(t)=1+b(t/t_{\text{e,var}})^3$ manner with a timescale $t_{\text{e,var}} \approx 1.5$ days (as measured in the observer’s frame), and then decreasing it to the initial $Q_0$ in an almost linear way on the same timescale, fits the strong outburst of . The corresponding lightcurve of the model as well as the observed one are shown in Figs. 2 and 3.
![Lightcurve of the photon flux above $200$ GeV as measured by @veritas2010 (inset of their figure) in January 2009 to February 2009 and our model (red solid line). The outburst was modelled by injecting more electrons into the blob by varying $Q_0$ at a timescale of $\approx 1.5$ days (see text for details).](14299_f2){width="8.3cm"}
Figure 3 shows a more detailed view of the lightcurve in the VHE (above $200$ GeV) as well as the corresponding lightcurves in the X-ray regime (between $1.2$ keV and $11$ keV) of BeppoSAX/SWIFT and in the lower tail of the Fermi LAT energy range (between $0.2$ GeV and $22$ GeV) as predicted by our model. The latter two have been scaled down to the flux level of the VERITAS measurement (see Fig. 3), because the real fluxes are higher than the VHE flux. The model predicts the peak of the lightcurve in the Fermi regime to be $1.26$ hours ahead of the VHE one, whereas the X-ray regime is delayed by $0.97$ hours for . The delay of the X-ray band can be used to verify the model when multiwavelength data of the flaring behaviour of are available, while resolving the $0.2$ GeV to $22$ GeV lightcurve is beyond the capabilities of Fermi for this source.
![Detailed view of the strong outburst already shown in Fig. 2, as well as the behaviour of in the lower Fermi LAT energy band and in the synchrotron regime, measurable by BeppoSAX/SWIFT during such a flare.](14299_f3){width="8.3cm"}
\
\
Additionally, we plotted the time-averaged SED (over the outburst from MJD54860 until MJD54864) into the SED of , Fig. 1 (dashed red line). As one can see, only when the strongest outburst of within the VERITAS campaign in 2009 is considered separately does the deviation from a presumed steady state become significant. In contrast, an average over the whole observation of @veritas2010, which is low-state most of the time, will result in a steady state as shown here or in @michl1218. For the IC photon index $\alpha$ ($\nu F_{\nu} \propto \nu^{\alpha+2}$) above $200$ GeV we get $\alpha_{\text{VHE}} = -3.53$ for the low state (solid curve in Fig. 1), which slightly softens to $\alpha_{\text{VHE}} = -3.56$ when considering the high state as the time average over the single outburst shown in the lightcurve, Fig. 2, in the VERITAS range. Note that the photon index and its behaviour during an outburst in this energy range are very sensitive to the EBL absorption, and thus to the EBL model used, and show a strong dependence on the considered energy range. With our model we are able to compute the spectral behaviour in the X-ray energy range of the BeppoSAX/SWIFT satellites (i.e. $1.2$ keV $<$ E $<$ $11$ keV). The model predicts the source to be spectrally steady in this regime, with a photon index $\alpha_{\text{xray}} = -2.68$, for outbursts on timescales of days. Considering shorter averaging timescales within the outburst of , e.g. the first or last two hours, or two hours around the peak of the lightcurve, the maximum deviation from $\alpha_{\text{xray}}$ $(\alpha_{\text{VHE}})$ predicted by the model is $\pm 0.05$ $(-0.07)$, which could not be measured with current experiments; thus the source is considered spectrally steady in this case as well.
Discussion {#sec:discussion}
==========
Our results clearly show that the latest observations from the VERITAS telescope for still agree with a constant (steady-state) emission from an SSC model when averaged over a long observation period. This is due to the relatively moderate variability of compared to the observation time.\
The variability may be well explained in the context of the self-consistent treatment of the acceleration of electrons in the jet. We are aware that an outburst on a timescale of roughly five days, as measured for , does not necessarily require a shock-in-jet model, whose variability can scale down to a few minutes depending on the SSC parameters [@weidinger2010b], but may also be explained by, e.g., different accretion states. Nevertheless the fundamental statement remains the same: long-time observations of slightly variable blazars will result in a steady-state emission, while an average over a single outburst will, of course, result in a significantly different SED for the source. We are not yet able to rule out different emission models or even complex geometries of the emitting region. But we are able to model the influence of short outbursts of a source on the SED and the lightcurves in the different energy bands self-consistently.\
The VERITAS collaboration only shows an integrated spectrum for , owing to the low flux of the source and the photon-index behaviour of the combined high states. This integrated spectrum does not show strong variations with regard to the known low state observed by MAGIC. Our model now predicts a clear change in the spectrum, indicated by the dashed line in Fig. 1, which shows the average over one outburst with a slight, currently not detectable spectral softening in the VHE range, while the synchrotron peak in the BeppoSAX/SWIFT regime remains spectrally constant. This situation changes for shorter and/or stronger outbursts with an overall timescale of hours, which will result in measurable spectral evolution in all energy regimes when considered with the presented model. Furthermore, the time-resolved SEDs during a flare can be computed with our model. Hence with better time-resolved spectra and/or better multiwavelength coverage it should be possible to test this model, and if the model is indeed applicable it will be a good tool to investigate the whole SED during an outburst without having all energy regimes observationally covered.\
*Acknowledgments* MW wants to thank the Elitenetzwerk Bayern and GK1147 for their support. FS acknowledges support from the DFG through grant SP 1124/1.
---
abstract: |
A strategy to address the inverse Galois problem over ${{\mathbb Q}}$ consists of exploiting the knowledge of Galois representations attached to certain automorphic forms. More precisely, if such forms are carefully chosen, they provide compatible systems of Galois representations satisfying some desired properties, e.g. properties that reflect on the image of the members of the system. In this article we survey some results obtained using this strategy.
MSC (2010): 11F80 (Galois representations); 12F12 (Inverse Galois theory).
author:
- 'Sara Arias-de-Reyna[^1]'
title: Automorphic Galois representations and the inverse Galois problem
---
Introduction
============
The motivation for the subject of this survey comes from Galois theory. Let $L/K$ be a field extension which is normal and separable. To this extension one can attach a group, namely the group of field automorphisms of $L$ fixing $K$, which is denoted as ${\mathop{\mathrm{Gal} }\nolimits}(L/K)$. The main result of Galois theory, which is usually covered in the program of any Bachelor’s degree in Mathematics, can be stated as follows:
Let $L/K$ be a finite, normal, separable field extension. Then there is the following bijective correspondence between the sets:
$$\begin{array}{ccc}
\left\{\begin{array}{c} E \text{ field} \\ K\subseteq E\subseteq L\end{array}\right\} & \longleftrightarrow &
\left\{\begin{array}{c} H\subseteq{\mathop{\mathrm{Gal} }\nolimits}(L/K) \\ \text{subgroup}\end{array}\right\},\\[2.3em]
E & \longmapsto & {\mathop{\mathrm{Gal} }\nolimits}(L/E)\\
L^H & \longmapsfrom & H\end{array}$$
Usually the students are asked exercises of the following type: Given some finite field extension $L/{{\mathbb Q}}$ which is normal, compute the Galois group $\mathrm{Gal}(L/{{\mathbb Q}})$ attached to it. But, one may also ask the inverse question (hence the name inverse Galois problem): Given a finite group $G$, find a finite, normal extension $L/{{\mathbb Q}}$ with ${\mathop{\mathrm{Gal} }\nolimits}(L/{{\mathbb Q}})\simeq G$. This is not a question one usually expects a student to solve! In fact, there are (many) groups $G$ for which it is not even known if such a field extension exists.
Let $G$ be a finite group. Does there exist a Galois extension $L/{{\mathbb Q}}$ such that ${\mathop{\mathrm{Gal} }\nolimits}(L/{{\mathbb Q}})\simeq G$?
The first mathematician who addressed this problem was D. Hilbert. In his paper [@Hilbert] he proves his famous Irreducibility Theorem, and applies it to show that, for all $n\in \mathbb{N}$, the symmetric group $S_n$ and the alternating group $A_n$ occur as Galois groups over the rationals. Since then, many mathematicians have thought about the inverse Galois problem, and in fact it is now solved (affirmatively) for many (families of) finite groups $G$. For instance, let us mention the result of Shafarevich that all solvable groups occur as Galois groups over the rationals (see [@Cohomology_of_number_fields] for a detailed explanation of the proof). However, it is still not known if the answer is affirmative for every finite group $G$, and as far as I know, there is no general strategy that addresses all finite groups at once. An account of the different techniques used to address the problem can be found in [@Topics].

Let $K$ be a field, and let us fix a separable closure $K_{\rm sep}$. There is a way to group together all the Galois groups of finite Galois extensions $L/K$ contained in $K_{\rm sep}$, namely the *absolute Galois group* of $K$. It is defined as the inverse limit $$G_{K}:={\mathop{\mathrm{Gal} }\nolimits}(K_{\rm sep}/K)=\lim_{\longleftarrow \atop {L/K\atop \text{finite Galois }}}{\mathop{\mathrm{Gal} }\nolimits}(L/K).$$ This group is a profinite group, and as such is endowed with a topology, called the *Krull topology*, which makes it a Hausdorff, compact and totally disconnected group. A very natural question to ask is what information on the field $K$ is encoded in the topological group $G_K$. In this connection, a celebrated result of Neukirch, Iwasawa, Uchida and Ikeda establishes that, if $K_1, K_2$ are two finite extensions of ${{\mathbb Q}}$ contained in a fixed algebraic closure $\overline{{{\mathbb Q}}}$ such that $G_{K_1}\simeq G_{K_2}$, then $K_1$ and $K_2$ are conjugate by some element of $G_{{{\mathbb Q}}}$ (cf. 
[@Uchida1976], [@Ikeda]). Let us note, however, that we cannot replace ${{\mathbb Q}}$ by any field. For example, the analogous statement does not hold when the base field is ${{\mathbb Q}}_p$, cf. [@Yamagata] and [@JardenRitter]. Thus, we see that the absolute Galois group of ${{\mathbb Q}}$ encodes a wealth of information about the arithmetic of number fields. In this context, the inverse Galois problem can be formulated as the question of determining which finite groups occur as quotient groups of $G_{{{\mathbb Q}}}$.
A natural way to study $G_{{{\mathbb Q}}}$ is to consider its representations, that is, the continuous group morphisms $G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_m(k)$, where $k$ is a topological field and $m\in \mathbb{N}$. Such a representation will be called a *Galois representation*. Let us assume that $k$ is a finite field, endowed with the discrete topology, and let $$\rho:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_m(k)$$ be a Galois representation. Since the set $\{\mathrm{Id}\}$ is open in ${\mathop{\mathrm{GL} }\nolimits}_m(k)$, we obtain that $\ker\rho\subset G_{{{\mathbb Q}}}$ is an open subgroup. In other words, there exists a finite Galois extension $K/{{\mathbb Q}}$ such that $\ker\rho= G_K$. Therefore $$\mathrm{Im}\rho\simeq G_{{{\mathbb Q}}}/\ker\rho\simeq G_{{{\mathbb Q}}}/G_{K}\simeq {\mathop{\mathrm{Gal} }\nolimits}(K/{{\mathbb Q}}).$$
This reasoning shows that, whenever we are given a Galois representation of $G_{{{\mathbb Q}}}$ over a finite field $k$, we obtain a realisation of $\mathrm{Im}\rho\subset {\mathop{\mathrm{GL} }\nolimits}_m(k)$ as a Galois group over ${{\mathbb Q}}$. In this way, any source of Galois representations provides us with a strategy to address the inverse Galois problem for the subgroups of ${\mathop{\mathrm{GL} }\nolimits}_m(k)$ that occur as images thereof.
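A one-dimensional worked example may help fix ideas (this illustration is my addition, though it is completely standard): the mod $\ell$ cyclotomic character.

```latex
% G_Q acts on the ell-th roots of unity through the mod-ell
% cyclotomic character:
\[
  \overline{\chi}_{\ell}\colon G_{\mathbb{Q}} \longrightarrow
    \operatorname{GL}_1(\mathbb{F}_{\ell}) = \mathbb{F}_{\ell}^{\times},
  \qquad
  g(\zeta_{\ell}) = \zeta_{\ell}^{\overline{\chi}_{\ell}(g)} .
\]
% Here ker(chibar_ell) = G_K for K = Q(zeta_ell), so the image
% (Z/ell Z)^x = Gal(Q(zeta_ell)/Q), a cyclic group of order ell - 1,
% is realised as a Galois group over Q, ramified only at ell.
```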
Geometry provides us with many objects endowed with an action of the absolute Galois group of the rationals, thus giving rise to such Galois representations. One classical example is the group of $\overline{{{\mathbb Q}}}$-defined $\ell$-torsion points of an elliptic curve $E$ defined over ${{\mathbb Q}}$. We will treat this example in Section \[sec:Classical\]. In this survey we will be interested in (compatible systems of) Galois representations arising from automorphic representations. In Section \[sec:Automorphic\] we will describe Galois representations attached to an automorphic representation $\pi$ which satisfies several technical conditions. The statements of the most recent results (to the best of my knowledge) on the inverse Galois problem obtained by means of compatible systems of Galois representations attached to automorphic representations can be found in Section \[sec:Ingredients\], together with some ideas about their proofs.
A remarkable feature of this method is that, in addition, one obtains some control of the ramification of the Galois extension that is produced. Namely, it will only be ramified at the residual characteristic and at a finite set of auxiliary primes, that usually one is allowed to choose (inside some positive density set of primes). This will be highlighted in the statements below.
**Acknowledgements:** This article is an expanded version of the plenary lecture I delivered at the conference *Quintas Jornadas de Teoría de Números* (July 2013). I would like to thank the scientific committee for giving me the opportunity to participate in this conference, and the organising committee for their excellent work. I also want to thank Gabor Wiese for his remarks and suggestions on a previous version of this article.
Some classical cases {#sec:Classical}
====================
In this section we revisit some classical examples of Galois representations attached to geometric objects. We begin with the Galois representations attached to the torsion points of elliptic curves, and later we will see them as a particular case of Galois representations attached to modular forms.
Elliptic curves
---------------
An elliptic curve is a genus one curve, endowed with a distinguished base point. Every elliptic curve $E$ can be described by means of a Weierstrass equation, that is, an affine equation of the form $$y^2 + a_1xy + a_3 y=x^3 + a_2 x^2 + a_4 x + a_6$$ where the coefficients $a_1, \dots, a_6$ lie in some field $K$. The most significant property of elliptic curves is that the set of points of $E$ (defined over some field extension $L/K$) can be endowed with a commutative group structure, where the neutral element is the distinguished base point.
Let $E/\mathbb{Q}$ be an elliptic curve and let $\ell$ be a prime number. We can consider the subgroup $ E[\ell](\overline{{{\mathbb Q}}})$ of $E(\overline{{{\mathbb Q}}})$ consisting of $\ell$-torsion points. This group is isomorphic to the product of two copies of $\mathbb{F}_{\ell}$. Moreover, since the elliptic curve is defined over $\mathbb{Q}$, the absolute Galois group $G_{{{\mathbb Q}}}$ acts naturally on the set of $\overline{{{\mathbb Q}}}$-defined points of $E$, and this action restricts to $ E[\ell](\overline{{{\mathbb Q}}})$. We obtain thus a Galois representation $$\overline{\rho}_{E, \ell}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{Aut} }\nolimits}(E[\ell](\overline{{{\mathbb Q}}}))\simeq {\mathop{\mathrm{GL} }\nolimits}_2(\mathbb{F}_{\ell}).$$
As explained in the introduction, the image of $\overline{\rho}_{E, \ell}$ can be realised as a Galois group over ${{\mathbb Q}}$. This brings forward the question of determining the image of such a Galois representation. In this context, there is a classical result by J. P. Serre from the seventies ([@Proprietes], Théorème 2).
Let $E/{{\mathbb Q}}$ be an elliptic curve without complex multiplication over $\overline{{{\mathbb Q}}}$. Then the representation $\overline{\rho}_{E, \ell}$ is surjective for all except finitely many primes $\ell$.
We can immediately conclude that ${\mathop{\mathrm{GL} }\nolimits}_2(\mathbb{F}_{\ell})$ can be realised as a Galois group over ${{\mathbb Q}}$ for all except finitely many primes $\ell$. However, we can do even better by picking a particular elliptic curve and analysing the Galois representations attached to it.
\[ex:37\] Let $E/{{\mathbb Q}}$ be the elliptic curve defined by the Weierstrass equation $$y^2 + y=x^3-x.$$
This curve is labelled 37A in [@CremonaTables], and it has the property that $\overline{\rho}_{E, \ell}$ is surjective *for all primes $\ell$* (see [@Proprietes], Example 5.5.6). Therefore we obtain that ${\mathop{\mathrm{GL} }\nolimits}_2(\mathbb{F}_{\ell})$ occurs as the Galois group of a finite Galois extension $K/{{\mathbb Q}}$. Moreover, we have additional information on the ramification of $K/{{\mathbb Q}}$; namely, it ramifies only at $37$ (which is the conductor of $E$) and $\ell$.
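A concrete numerical sketch (my addition, using only the standard fact that for a prime $p \neq 37$ of good reduction the trace of Frobenius is $a_p = p + 1 - \#E(\mathbb{F}_p)$): the integers $a_p$ can be found by brute-force point counting, and they are exactly the coefficients in the characteristic polynomials of Frobenius that appear throughout this section.

```python
# Brute-force point counting on the curve 37A: y^2 + y = x^3 - x.
# For a good prime p (p != 37), a_p = p + 1 - #E(F_p) is the trace of
# Frobenius; the characteristic polynomial of Frob_p acting on E[l]
# (l != p) is then X^2 - a_p X + p reduced mod l.
def trace_of_frobenius(p):
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y + y - (x ** 3 - x)) % p == 0)
    return p + 1 - (affine + 1)  # the "+ 1" counts the point at infinity

# Sanity check: the Hasse bound |a_p| <= 2 * sqrt(p).
for p in (2, 3, 5, 7, 11, 13):
    assert trace_of_frobenius(p) ** 2 <= 4 * p
```

For instance, this gives $a_2 = -2$ and $a_3 = -3$ for the curve above.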
The next situation we want to analyse is that of Galois representations attached to modular forms. Let us recall that modular forms are holomorphic functions defined on the complex upper half plane, which satisfy certain symmetry relations. We will not recall here the details of the definition (see e.g. [@Diamond-Shurman] for a complete treatment focusing on the relationship with arithmetic geometry). These objects, of complex-analytic nature, play a central role in number theory. At the core of this relationship is the fact that one can attach Galois representations of $G_{{{\mathbb Q}}}$ to them. More precisely, let $f$ be a cuspidal modular form of weight $k\geq 2$, conductor $N$ and character $\psi$ (in short: $f\in S_k(N, \psi)$), which is a normalised Hecke eigenform. We may write the Fourier expansion of $f$ as $f(z)=\sum_{n\geq 1}a_n q^n$, where $q=e^{2\pi i z}$. A first remark is that the *coefficient field* ${{\mathbb Q}}_f={{\mathbb Q}}(\{a_n:\gcd(n, N)=1\})$ is a number field. Denote by $\mathcal{O}_{{{\mathbb Q}}_f}$ its ring of integers. By a result of Deligne (cf. [@De71]), for each prime $\lambda$ of $\mathcal{O}_{{{\mathbb Q}}_f}$ there exists a (continuous) Galois representation $$\rho_{f, \lambda}: G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_2(\mathcal{O}_{\overline{{{\mathbb Q}}}_{f,\lambda}}),$$ related to $f$, where ${{\mathbb Q}}_{f,\lambda}$ denotes the completion of ${{\mathbb Q}}_f$ at the prime $\lambda$, $\overline{{{\mathbb Q}}}_{f,\lambda}$ an algebraic closure thereof and $\mathcal{O}_{\overline{{{\mathbb Q}}}_{f,\lambda}}$ is the valuation ring of $\overline{{{\mathbb Q}}}_{f,\lambda}$. Here the topology considered on ${\mathop{\mathrm{GL} }\nolimits}_2(\mathcal{O}_{\overline{{{\mathbb Q}}}_{f,\lambda}})$ is the one induced by the $\ell$-adic valuation.
The relationship between $\rho_{f, \lambda}$ and $f$ is the following. First, $\rho_{f, \lambda}$ is unramified outside $N\ell$. Moreover, for each $p\nmid N\ell$, we can consider the image under $\rho_{f, \lambda}$ of a lift $\mathrm{Frob}_p$ of a Frobenius element at $p$ (this is well defined because $\rho_{f, \lambda}$ is unramified at $p$). Then the characteristic polynomial of $\rho_{f, \lambda}(\mathrm{Frob}_p)$ equals $T^2 - a_p T + \psi(p) p^{k-1}$.
We may compose each $\rho_{f, \lambda}$ with the reduction modulo the maximal ideal of $\mathcal{O}_{\overline{{{\mathbb Q}}}_{f,\lambda}}$, and we obtain a (residual) representation $$\overline{\rho}_{f, \lambda}: G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_2(\kappa(\overline{{{\mathbb Q}}}_{f,\lambda}))\simeq {\mathop{\mathrm{GL} }\nolimits}_2(\overline{{\mathbb{F}}}_{\ell}),$$ where $\ell$ is the rational prime below $\lambda$. One of the main recent achievements in number theory has been the proof of Serre’s Modularity Conjecture, which says that every Galois representation $ \overline{\rho}_{\ell}: G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_2(\overline{{\mathbb{F}}}_{\ell})$ which is odd and irreducible is actually isomorphic to $\overline{\rho}_{f, \lambda}$ for some modular form $f$ and some prime $\lambda$ as above.
In this survey we are interested in the image of $\overline{\rho}_{f, \lambda}$. These images have been studied by K. Ribet (cf. [@Ribet75], [@Ribet85]). One first remark is that, when $\rho_{f, \lambda}$ is absolutely irreducible, then it can be conjugated (inside ${\mathop{\mathrm{GL} }\nolimits}_2(\mathcal{O}_{\overline{{{\mathbb Q}}}_{f,\lambda}})$) so that its image is contained in ${\mathop{\mathrm{GL} }\nolimits}_2(\mathcal{O}_{{{\mathbb Q}}_{f,\lambda}})$. Therefore, in this case we can assume that $\overline{\rho}_{f, \lambda}: G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_2(\kappa({{\mathbb Q}}_{f,\lambda}))$, where $\kappa({{\mathbb Q}}_{f,\lambda})$ denotes the residue field of ${{\mathbb Q}}_{f,\lambda}$.
To state Ribet’s result, we first introduce two more number fields related to $f$. The first one is the *twist invariant coefficient field of $f$*, which is the subfield of the coefficient field of $f$ defined as $F_f:={{\mathbb Q}}(\{a_n^2/\psi(n): \gcd(n, N)=1\})$. The second field, which is a finite abelian extension of ${{\mathbb Q}}$, is the subfield $K_f$ of $\overline{{{\mathbb Q}}}$ fixed by all inner twists of $f$ (see [@DiWi] for details).
\[theorem:Ribet\] Let $f=\sum_{n\geq 1}a_n q^n\in S_k(N, \psi)$ be a normalised cuspidal Hecke eigenform. Assume $f$ does not have complex multiplication. Then for all except finitely many prime ideals $\lambda$ of ${{\mathbb Q}}_{f}$, $$\overline{\rho}_{f, \lambda}(G_{K_f})=\left\{g\in {\mathop{\mathrm{GL} }\nolimits}_2\left(\kappa\left(F_{f, \lambda'}\right)\right):\det(g)\in (\mathbb{F}_{\ell}^{\times})^{k-1}\right\},$$ where $\lambda'$ is the ideal of $\mathcal{O}_{F_f}$ below $\lambda$ and $\ell$ is the rational prime below $\lambda$.
This result suggests that we look at the representation $\overline{\rho}^{\mathrm{proj}}_{f, \lambda}$ obtained by composing $\overline{\rho}_{f, \lambda}$ with the projection map ${\mathop{\mathrm{GL} }\nolimits}_2(\kappa({{\mathbb Q}}_{f,\lambda}))\rightarrow {\mathop{\mathrm{PGL} }\nolimits}_2(\kappa({{\mathbb Q}}_{f, \lambda}))$.
More precisely, let $k, r$ be integers greater than or equal to $1$. Consider the set $$\mathcal{A}:=\{A\in {\mathop{\mathrm{GL} }\nolimits}_2(\mathbb{F}_{\ell^r}): \det A\in (\mathbb{F}_{\ell}^{\times})^{k-1}\},$$ and let $\mathcal{A}^{\mathrm{proj}}$ be its projection under the map ${\mathop{\mathrm{GL} }\nolimits}_2(\mathbb{F}_{\ell^r})\rightarrow {\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_{\ell^r})$. Then if $k$ is odd, we have $\mathcal{A}^{\mathrm{proj}}={\mathrm{PSL}}_2(\mathbb{F}_{\ell^r})$, and if $k$ is even, we have $\mathcal{A}^{\mathrm{proj}}={\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_{\ell^r})$ whenever $r$ is odd and $\mathcal{A}^{\mathrm{proj}}={\mathrm{PSL}}_2(\mathbb{F}_{\ell^r})$ whenever $r$ is even.
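For small parameters this claim can be checked by exhaustive enumeration. The following sketch (my own verification, for the case $r = 1$) computes the image of $\mathcal{A}$ in ${\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_{\ell})$ by counting matrices up to scalars.

```python
from itertools import product

# Count the image in PGL_2(F_l) of
#   A = { M in GL_2(F_l) : det M in (F_l^x)^(k-1) },
# i.e. the number of classes of such matrices modulo scalars.
def proj_image_size(l, k):
    allowed = {pow(a, k - 1, l) for a in range(1, l)}
    classes = set()
    for a, b, c, d in product(range(l), repeat=4):
        det = (a * d - b * c) % l
        if det == 0 or det not in allowed:
            continue
        # canonical representative of the class { s*M : s in F_l^x }
        classes.add(min(tuple((s * t) % l for t in (a, b, c, d))
                        for s in range(1, l)))
    return len(classes)
```

For $\ell = 3$ this returns $12 = |{\mathrm{PSL}}_2(\mathbb{F}_3)|$ when $k$ is odd and $24 = |{\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_3)|$ when $k$ is even, matching the statement above with $r = 1$.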
In any case it follows that, for $f$ as above, the image of $\overline{\rho}^{\mathrm{proj}}_{f, \lambda}$ equals ${\mathrm{PSL}}_2\left(\kappa\left(F_{f, \lambda'}\right)\right)$ or ${\mathop{\mathrm{PGL} }\nolimits}_2\left(\kappa\left(F_{f, \lambda'}\right)\right)$ for all except finitely many primes $\lambda$ of $\mathcal{O}_{{{\mathbb Q}}_f}$.
A remarkable difference with the situation arising from elliptic curves is that we obtain realisations of linear groups over fields whose cardinality is not necessarily a prime number. In Example \[ex:37\], we used an elliptic curve to obtain realisations of the members of the family $\{{\mathop{\mathrm{GL} }\nolimits}_2({\mathbb{F}}_{\ell})\}_{\ell}$. However, now we have two parameters, namely the prime $\ell$ and the exponent $r$. If we pick a modular form as above, we will obtain realisations of members of one of the families $\{{\mathrm{PSL}}_2(\mathbb{F}_{\ell^r})\}_{\ell, r}$ or $\{{\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_{\ell^r})\}_{\ell, r}$, and the parameter $r$ *depends on $f$ and $\ell$*.
\[ex:Ribet\] Let $f\in S_{24}(1)$ be a normalised Hecke eigenform of level $1$. The field of coefficients ${{\mathbb Q}}_f={{\mathbb Q}}(\sqrt{144169})$ equals $F_f$; so we can expect to obtain realisations of ${\mathrm{PSL}}_2(\mathbb{F}_{\ell^2})$ when $\ell$ is inert in ${{\mathbb Q}}_f$ and ${\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_{\ell})$ when $\ell$ splits in ${{\mathbb Q}}_f$. Indeed, let $\ell$ be a prime different from $2$, $3$ and $47$. Then $f$ provides a realisation of ${\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_{\ell})$ if $144169$ is a square modulo $\ell$ and a realisation of ${\mathrm{PSL}}_2(\mathbb{F}_{\ell^2})$ if $144169$ is not a square modulo $\ell$. Moreover, the corresponding Galois extension $K/{{\mathbb Q}}$ with desired Galois group is unramified outside $\ell$.
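The splitting condition in this example is easy to test numerically. Here is a small sketch (my addition, via Euler's criterion) that sorts primes according to which group the eigenform realises.

```python
# Euler's criterion: for an odd prime l not dividing a, a is a square
# mod l iff a^((l-1)/2) == 1 (mod l).  In the example, 144169 is a
# square mod l exactly when l splits in Q_f = Q(sqrt(144169)).
def is_square_mod(a, l):
    return pow(a % l, (l - 1) // 2, l) == 1

D = 144169
for l in (5, 7, 11, 13, 17, 19, 23, 29):   # avoiding 2, 3 and 47
    group = "PGL_2(F_l)" if is_square_mod(D, l) else "PSL_2(F_{l^2})"
    print(l, group)
```

For example, $144169$ is a square modulo $5$ and $7$ but not modulo $23$, so $f$ realises ${\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_5)$ and ${\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_7)$ but ${\mathrm{PSL}}_2(\mathbb{F}_{23^2})$.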
Compatible systems and the inverse Galois problem {#sec:CompatibleSystems}
=================================================
The examples of the previous section suggest that, instead of considering isolated Galois representations $\overline{\rho}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_n(\overline{\mathbb{F}}_{\ell})$ for a fixed prime $\ell$, it is a good idea to look at a system of Galois representations $(\overline{\rho}_{\ell})_{\ell}$, where $\ell$ runs through the prime numbers. The notion of *(strictly) compatible system of Galois representations* already appears in [@SerreAbelian]. We recall the definition below.
\[defi:CompatibleSystem\] Let $m\in \mathbb{N}$ and let $F$ be a number field. A [*compatible system $\rho_\bullet = (\rho_\lambda)_\lambda$ of $m$-dimensional representations of $G_{F}$*]{} consists of the following data:
- A number field $L$.
- A finite set $S$ of primes of $F$.
- For each prime $\mathfrak{p}\not\in S$, a monic polynomial $P_\mathfrak{p}(X) \in \mathcal{O}_L[X]$ (with $\mathcal{O}_L$ the ring of integers of $L$).
- For each finite place $\lambda$ of $L$ (together with fixed embeddings $L \hookrightarrow L_\lambda \hookrightarrow \overline{L}_\lambda$) a continuous Galois representation $$\rho_\lambda: G_F \to {\mathop{\mathrm{GL} }\nolimits}_m(\overline{L}_\lambda)$$ such that $\rho_\lambda$ is unramified outside $S \cup S_{\ell}$ (where $\ell$ is the rational prime below $\lambda$ and $S_{\ell}$ is the set of primes of $F$ above $\ell$) and such that for all $\mathfrak{p} \not\in S \cup S_{\ell}$ the characteristic polynomial of $\rho_\lambda(\mathrm{Frob}_{\mathfrak{p}})$ is equal to $P_{\mathfrak{p}}(X)$.
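A standard instance of this definition (added here as an illustration; it is not discussed in the text at this point) is the system of $\ell$-adic cyclotomic characters, with $m = 1$, $F = L = {{\mathbb Q}}$ and $S = \emptyset$:

```latex
% The cyclotomic compatible system: for each finite place
% lambda = (ell) of L = Q, take
\[
  \chi_{\ell}\colon G_{\mathbb{Q}} \longrightarrow
    \operatorname{GL}_1(\overline{\mathbb{Q}}_{\ell}),
  \qquad
  \chi_{\ell}(\mathrm{Frob}_p) = p \quad\text{for } p \neq \ell .
\]
% Each chi_ell is unramified outside S_ell = {ell}, and the
% characteristic polynomial P_p(X) = X - p lies in Z[X] and does not
% depend on ell, as the definition requires.
```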
In our context, the main question to ask about a compatible system is the following: If we know that $\rho_{\lambda}$ satisfies some property (A), does it follow that $\rho_{\lambda'}$ also satisfies (A) for another prime $\lambda'$ of $L$? In other words, what properties “propagate” through a compatible system? The idea that the property of “being attached to a modular form” propagates through such a system lies at the core of the proof of the Taniyama-Shimura conjecture by A. Wiles and R. Taylor (which implies Fermat’s Last Theorem), and also of the proof of Serre’s Modularity Conjecture.
In this section we are interested in the relationship between the images of the members $\rho_{\lambda}$ of a compatible system. An example of such a relationship is the following: if $\rho_{\lambda}$, $\rho_{\lambda'}$ are two semisimple representations belonging to a compatible system, then the image of $\rho_{\lambda}$ is abelian if and only if the image of $\rho_{\lambda'}$ is abelian (see [@SerreAbelian] and [@Henniart1982]).
The case of compatible systems of Galois representations attached to the Tate module of abelian varieties has received particular attention. Let $A/F$ be an $n$-dimensional abelian variety, and assume that $$(\rho_{A, \ell}:G_{F}\rightarrow \mathrm{GL}(V_{\mathbb{Q}_{\ell}})\simeq \mathrm{GL}_{2n}(\mathbb{Q}_{\ell}))_{\ell}$$ is the compatible system of Galois representations attached to the $\ell$-adic Tate module $T_{\ell}$ of $A$ (where as usual $V_{\mathbb{Q}_{\ell}}=\mathbb{Q}_{\ell}\otimes_{\mathbb{Z}_{\ell}}T_{\ell}$). To what extent does the image of $\rho_{\ell}$ depend on $\ell$? There are several ways to phrase this question in a precise way. For example, define the *algebraic monodromy group* at $\ell$, $G_{\ell}$, as the Zariski closure of $\rho_{\ell}(G_F)$ inside the algebraic group ${\mathop{\mathrm{GL} }\nolimits}_{2n, \mathbb{Q}_{\ell}}$, and let $G_{\ell}^0$ be the connected component of $G_{\ell}$. In this connection, the Mumford-Tate conjecture predicts the existence of an algebraic group $G\subset {\mathop{\mathrm{GL} }\nolimits}_{2n, \mathbb{Q}}$ such that, for all $\ell$, $G_{\ell}^0\simeq \mathbb{Q}_{\ell}\times_{\mathbb{Q}}G$ (see [@Serre:Iyanaga], Conjecture C.3.3 for a precise formulation). By work of J. P. Serre it is known that the (finite) group of connected components $G_{\ell}/G_{\ell}^0$ is independent of $\ell$ (see [@Serre:Course], 2.2.3).
There are many partial results in this direction. In particular cases, the conjecture is known to hold (for example, when $\dim A=1$; cf. [@SerreAbelian] and [@Proprietes]). For higher dimension, when ${\mathrm{End}}_{\overline{{{\mathbb Q}}}}(A)=\mathbb{Z}$ and $n=2$ or $n$ is odd, the conjecture holds with $G={\mathop{\mathrm{GSp} }\nolimits}_{2n, {{\mathbb Q}}}$ (cf. [@Serre:Course], 2.2.8). In the general case, Serre has proved that the rank of $G_{\ell}$ is independent of $\ell$ [@Serre:RibetLetter]. More partial results can be found in [@LarsenPink1992], [@Hui2013].
Another question is how close $\rho_{\ell}(G_{F})$ is to its Zariski closure $G_{\ell}$ in ${\mathop{\mathrm{GL} }\nolimits}_{2n, {{\mathbb Q}}_{\ell}}$. For results in this direction the reader is referred to [@Larsen1995] and [@HuiLarsen].
A particular case, which is of interest to us (cf. Section \[sec:Ingredients\]), is proved by C. Hall in [@Hall2011]. Let $A/F$ be an $n$-dimensional abelian variety which is principally polarised and with $\mathrm{End}_{\overline{{{\mathbb Q}}}}(A)=\mathbb{Z}$. Assume that there exists a prime $\mathfrak{p}$ of $F$ such that the reduction of $A$ at $\mathfrak{p}$ is semistable of toric dimension $1$. Then there exists a constant $M$ such that, for all primes $\ell\geq M$, the image of the mod $\ell$ Galois representation $\overline{\rho}_{A, \ell}:G_F\rightarrow {\mathop{\mathrm{GL} }\nolimits}_{2n}(\mathbb{F}_{\ell})$ coincides with $\mathrm{GSp}_{2n}(\mathbb{F}_{\ell})$. As a consequence, it follows that $A$ satisfies the Mumford-Tate conjecture; more precisely, the corresponding algebraic group is $\mathrm{GSp}_{2n, \mathbb{Q}}$. The proof of this result relies heavily on the fact that the existence of the prime $\mathfrak{p}$ implies that the image under $\overline{\rho}_{A, \ell}$ of the inertia group at $\mathfrak{p}$ contains a transvection.
For the applications to the inverse Galois problem, we will be interested in Galois representations taking values in linear groups over finite fields. For the rest of the section, we focus on symplectic groups ${\mathop{\mathrm{GSp} }\nolimits}_{2n}$ for simplicity. Note that ${\mathop{\mathrm{GSp} }\nolimits}_2={\mathop{\mathrm{GL} }\nolimits}_2$ and ${\mathop{\mathrm{Sp} }\nolimits}_2={\mathrm{SL}}_2$, so in the case of dimension $1$ we are in the situation explained in Section \[sec:Classical\]. Consider the following setup:
\[setup:symplectic\] Let $\rho_{\bullet}=(\rho_{\lambda})_{\lambda}$ be a $2n$-dimensional compatible system of Galois representations of $G_{\mathbb{Q}}$ as in Definition \[defi:CompatibleSystem\], such that for all $\lambda$, $\rho_{\lambda}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\overline{L}_{\lambda})$ for some number field $L$ (we will say that such a system is *symplectic*).
Note that each of the $\rho_{\lambda}$ is defined over a finite extension of $L_{\lambda}$ inside $\overline{L}_{\lambda}$. Moreover, we can conjugate each $\rho_{\lambda}$ to take values inside the ring of integers of this finite extension of $L_{\lambda}$, and further reduce it modulo $\lambda$, obtaining a residual representation $\overline{\rho}_{\lambda}$. When $\overline{\rho}_{\lambda}$ is absolutely irreducible, then $\rho_{\lambda}$ can be defined over $L_{\lambda}$, and therefore $\overline{\rho}_{\lambda}$ takes values inside $\mathrm{GSp}_{2n}(\kappa(L_{\lambda}))$, where $\kappa(L_{\lambda})$ denotes the residue field of $L_{\lambda}$. Recall the motivating example in Section \[sec:Classical\] of compatible systems attached to modular forms. In this example, the field $L$ can be taken to be the coefficient field $\mathbb{Q}_f$. Like in the case of compatible systems attached to modular forms, it will be convenient to consider the composition $\overline{\rho}_{\lambda}^{\mathrm{proj}}$ of $\overline{\rho}_{\lambda}$ with the natural projection $\mathrm{GSp}_{2n}(\kappa(L_{\lambda}))\rightarrow \mathrm{PGSp}_{2n}(\kappa(L_{\lambda}))$. In what follows, we focus on obtaining realisations of groups in one of the families $\{\mathrm{PSp}_{2n}(\mathbb{F}_{\ell^r})\}_{\ell, r}$ or $\{\mathrm{PGSp}_{2n}(\mathbb{F}_{\ell^r})\}_{\ell, r}$.
Assume that we are given a compatible system of Galois representations as in Set-up \[setup:symplectic\] such that all $\rho_{\lambda}$ are residually absolutely irreducible. We obtain a system $$(\overline{\rho}_{\lambda}:G_{\mathbb{Q}}\rightarrow {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\kappa(L_{\lambda})))_{\lambda}.$$ For each prime $\lambda$ of $L$, $\kappa(L_{\lambda})\simeq \mathbb{F}_{\ell^{r(\lambda)}}$ for some integer $r(\lambda)$, which actually changes with $\lambda$! If we want to realise the family of groups $\{{\mathrm{PSp}}_{2n}(\mathbb{F}_{\ell^r})\}_{\ell}$ for a fixed exponent $r$, it is clear that one compatible system will not suffice for our purposes (unless we are interested in $r=1$ and we have $L=\mathbb{Q}$). This phenomenon already appeared in Section \[sec:Classical\] in the case of compatible systems attached to modular forms.
The strategy to obtain Galois realisations will proceed as follows. We want to construct a compatible system of Galois representations $\rho_{\bullet}$ as in Set-up \[setup:symplectic\], such that the $\rho_{\lambda}$ are absolutely irreducible, and such that the images of the corresponding representations $\overline{\rho}_{\lambda}$ are large in some sense which does not depend on $\lambda$. More precisely, we will say that the image of a representation $\overline{\rho}_{\lambda}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\kappa(L_{\lambda}))$ is *huge* if it contains a conjugate (inside ${\mathop{\mathrm{GSp} }\nolimits}_{2n}(\overline{\mathbb{F}}_{\ell})$) of $\mathrm{Sp}_{2n}(\mathbb{F}_{\ell})$ (where $\ell$ is the prime below $\lambda$). A group theoretical reasoning shows that if $\overline{\rho}_{\lambda}$ has huge image, then the image of $\overline{\rho}^{\mathrm{proj}}_{\lambda}$ equals ${\mathrm{PGSp}}_{2n}(\mathbb{F}_{\ell^r})$ or ${\mathrm{PSp}}_{2n}(\mathbb{F}_{\ell^r})$ for some integer $r$ (cf. Corollary 5.7 of [@ArDiWi1]). Moreover, we will have to find some conditions to control the exponent $r$.
The presence of these two parameters, $\ell$ and $r$, gives rise to two different approaches to obtain results on the inverse Galois problem:
- **Vertical Direction:** Fix a prime number $\ell$. Obtain realisations of $\mathrm{PSp}_{2n}(\mathbb{F}_{\ell^r})$ (resp. $\mathrm{PGSp}_{2n}(\mathbb{F}_{\ell^r})$) for all $r\in \mathbb{N}$.
- **Horizontal Direction:** Fix a natural number $r\geq 1$. Obtain realisations of $\mathrm{PSp}_{2n}(\mathbb{F}_{\ell^r})$ (resp. $\mathrm{PGSp}_{2n}(\mathbb{F}_{\ell^r})$) for all primes $\ell$.
This nomenclature stems from the following picture: plot the groups of the family $\mathrm{PSp}_{2n}(\mathbb{F}_{\ell^r})$ (resp. $\mathrm{PGSp}_{2n}(\mathbb{F}_{\ell^r})$) that are realised as Galois groups over ${{\mathbb Q}}$, with the prime $\ell$ on the $x$-axis and the exponent $r$ on the $y$-axis, drawing a dot whenever the group $\mathrm{PSp}_{2n}(\mathbb{F}_{\ell^r})$ (resp. $\mathrm{PGSp}_{2n}(\mathbb{F}_{\ell^r})$) is realised as a Galois group over ${{\mathbb Q}}$ (see [@DiWi] for such a graphic when $n=1$).
By exploiting the compatible systems of Galois representations attached to modular forms, the following results have been proved in the vertical direction (see Theorem 1.1 of [@Wiese2008]) and in the horizontal direction (see Theorem 1.1 of [@DiWi]).
\[theorem:Wiese\] Let $\ell$ be a prime number. There exist infinitely many natural numbers $r$ such that ${\mathrm{PSL}}_2(\mathbb{F}_{\ell^r})$ occurs as the Galois group of a finite Galois extension $K/\mathbb{Q}$, which is unramified outside $\ell$ and an auxiliary prime $q$.
\[theorem:DieulefaitWiese\] Let $r\in \mathbb{N}$.
1. There exists a positive density set of primes $\ell$ such that $\mathrm{PSL}_2(\mathbb{F}_{\ell^r})$ occurs as the Galois group of a finite Galois extension $K/\mathbb{Q}$, which is unramified outside $\ell$ and two (resp. three) auxiliary primes if $r$ is even (resp. odd).
2. Assume that $r$ is odd. There exists a positive density set of primes $\ell$ such that $\mathrm{PGL}_2(\mathbb{F}_{\ell^r})$ occurs as the Galois group of a finite Galois extension $K/\mathbb{Q}$, which is unramified outside $\ell$ and two auxiliary primes.
Let us look more closely at the approach in the horizontal direction. We fix a natural number $r$, and we want to realise $\mathrm{PSL}_2(\mathbb{F}_{\ell^r})$ or $\mathrm{PGL}_{2}(\mathbb{F}_{\ell^r})$ as Galois groups over ${{\mathbb Q}}$ for as many primes $\ell$ as we can. From the remarks above, it is clear that a single modular form will not suffice to realise $\mathrm{PSL}_2(\mathbb{F}_{\ell^r})$ for *all* $\ell$. However, nothing prevents us from looking at several modular forms. In fact, Serre’s Modularity Conjecture, which is now a theorem, tells us that every irreducible, odd Galois representation $\overline{\rho}:G_{{{\mathbb Q}}}\rightarrow \mathrm{GL}_2(\overline{\mathbb{F}}_{\ell})$ is attached to some modular form $f$. As a consequence, any realisation of $\mathrm{PSL}_2(\mathbb{F}_{\ell^r})$ as the Galois group of a finite Galois extension $K/\mathbb{Q}$ with $K$ imaginary can be obtained through this method (cf. Proposition 1.2 of [@DiWi]). By making use of Theorem \[theorem:Ribet\], we know that for a normalised Hecke eigenform without complex multiplication, the image of $\overline{\rho}_{f, \lambda}$ is huge for all except finitely many prime ideals $\lambda$ of $\mathbb{Q}_f$, and thus the image of $\overline{\rho}^{\mathrm{proj}}_{f, \lambda}$ is isomorphic to $\mathrm{PSL}_2(\mathbb{F}_{\ell^r})$ or $\mathrm{PGL}_2(\mathbb{F}_{\ell^r})$. The main obstacle here is to obtain some control on the exponent $r$. Under additional conditions, the field $\mathbb{F}_{\ell^r}$ coincides with $\kappa({{\mathbb Q}}_{f, \lambda})$, reducing the problem to the analysis of $\mathbb{Q}_f$. But this is not a minor issue! Very little is known about these fields (although one can always compute them for any given modular form $f$). 
When the level of $f$ is $1$, there is a strong conjecture in this connection, namely Maeda’s conjecture, which states that the degree $d_f=[\mathbb{Q}_f:\mathbb{Q}]$ equals the dimension of $S_k(1)$ as a complex vector space ($k$ being the *weight* of $f$) and that the Galois group of the normal closure of $\mathbb{Q}_f/\mathbb{Q}$ is the full symmetric group $S_{d_f}$. Assuming this conjecture, one can improve Theorem \[theorem:DieulefaitWiese\] as follows (cf. Theorem 1.1 of [@Wiese2013]).
Assume Maeda’s Conjecture holds. Let $r\in \mathbb{N}$. Assume that $r$ is even (resp. odd). There exists a density 1 set of primes $\ell$ such that $\mathrm{PSL}_2(\mathbb{F}_{\ell^r})$ (resp. $\mathrm{PGL}_2(\mathbb{F}_{\ell^r})$) occurs as the Galois group of a finite Galois extension $K/\mathbb{Q}$, which is unramified outside $\ell$.
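Maeda’s conjecture thus ties $d_f$ to $\dim_{\mathbb{C}} S_k(1)$, which is given by a classical closed formula. The following sketch (a standard fact about level-$1$ cusp forms, not taken from the text above; the function name is ours) computes this dimension:

```python
def dim_cusp_forms_level_one(k):
    """Dimension of the space S_k(1) of cusp forms of level 1 and weight k.

    Classical formula: for even k >= 4 the dimension is floor(k/12),
    minus 1 when k is congruent to 2 mod 12; it is 0 otherwise.
    """
    if k % 2 == 1 or k < 4:
        return 0
    return k // 12 - 1 if k % 12 == 2 else k // 12

# The discriminant form Delta spans S_12(1), so the dimension at k = 12 is 1.
print(dim_cusp_forms_level_one(12))  # 1
print(dim_cusp_forms_level_one(24))  # 2
```

For instance, since $\dim S_{24}(1)=2$, the conjecture predicts that the coefficient field of either eigenform of weight $24$ is a quadratic field with Galois group $S_2$.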
Galois representations attached to automorphic forms {#sec:Automorphic}
====================================================
In order to use the strategy outlined in the previous section to obtain results on the inverse Galois problem, we first need to find a source of compatible systems of Galois representations of $G_{\mathbb{Q}}$. As discussed in Section \[sec:Classical\], elliptic curves defined over $\mathbb{Q}$ (and, analogously, abelian varieties of higher dimension which are defined over $\mathbb{Q}$) provide us with such systems, and, more generally, classical modular forms give rise to such systems. Both of these examples can be encompassed in the general framework provided by the Langlands conjectures. More precisely, given an automorphic representation $\pi$ (which is *algebraic* in some precise sense) for an arbitrary connected reductive group $G$ over ${{\mathbb Q}}$, one hopes that there exists a compatible system of Galois representations $(\rho_{\bullet}(\pi))$ attached to it, where $\rho_{\lambda}(\pi)$ takes values in the $\overline{{{\mathbb Q}}}_{\ell}$-points of a certain algebraic group attached to $G$ (namely the Langlands dual of $G$). Conjecturally, then, we have many compatible systems of Galois representations, which raises the hope of eventually applying the strategy described in the previous section to realise many linear groups as Galois groups over the rationals.
There are several cases when these conjectures are known to hold. Recently, there has been a breakthrough in this connection due to P. Scholze [@Scholze] and M. Harris, K.-W. Lan, R. Taylor, J. Thorne. Namely, they attach compatible systems of Galois representations to regular, L-algebraic cuspidal automorphic representations of $\mathrm{GL}_m(\mathbb{A}_F)$, where $F$ is a totally real or a CM number field.
However, in this section we will recall a less recent result, due to L. Clozel, R. Kottwitz, M. Harris, R. Taylor and several others, which is more restrictive, since it deals with RAESDC (regular, algebraic, essentially self-dual, cuspidal) automorphic representations. We will not recall here all definitions (the reader can look them up in [@BLGGT]), but we will try to give some explanations.
Let $\mathbb{A}_{{{\mathbb Q}}}$ denote the ring of adeles of ${{\mathbb Q}}$. We consider so-called irreducible admissible representations $\pi$ of ${\mathop{\mathrm{GL} }\nolimits}_{m}(\mathbb{A}_{{{\mathbb Q}}})$. In fact, $\pi$ is not literally a representation of the group ${\mathop{\mathrm{GL} }\nolimits}_{m}(\mathbb{A}_{{{\mathbb Q}}})$ into some vector space. The interested reader can look at the details in [@Bump]. In this survey, we will treat them as black boxes, focusing rather on the compatible systems of Galois representations that they give rise to.
A *RAESDC (regular, algebraic, essentially self-dual, cuspidal) automorphic representation of ${\mathop{\mathrm{GL} }\nolimits}_{m}(\mathbb{A}_{{\mathbb Q}})$* can be defined as a pair $(\pi, \mu)$ consisting of a cuspidal automorphic representation $\pi$ of ${\mathop{\mathrm{GL} }\nolimits}_{m}(\mathbb{A}_{{\mathbb Q}})$ and a continuous character $\mu: \mathbb{A}_{{\mathbb Q}}^{\times}/ {{\mathbb Q}}^{\times} \rightarrow {{\mathbb C}}^{\times}$ such that:
1. (regular algebraic) $\pi$ has *weight* $a= (a_i) \in {{\mathbb Z}}^m$.
2. (essentially self-dual) $\pi \cong \pi^{\vee} \otimes (\mu \circ \mathrm{Det})$.
Given a RAESDC automorphic representation $\pi$ as above, there exist a number field $M\subset {{\mathbb C}}$, a finite set $S$ of rational primes, and strictly compatible systems of semisimple Galois representations $$\begin{aligned} \rho_\lambda (\pi): G_{{\mathbb Q}}&\rightarrow {\mathop{\mathrm{GL} }\nolimits}_m (\overline{M}_\lambda),\\
\rho_\lambda(\mu): G_{{\mathbb Q}}&\rightarrow \overline{M}_\lambda^{\times}, \end{aligned}$$ where $\lambda$ ranges over all finite places of $M$ (together with fixed embeddings $M \hookrightarrow M_\lambda \hookrightarrow \overline{M}_\lambda$, where $\overline{M}_\lambda$ is an algebraic closure of the localisation $M_\lambda$ of $M$ at $\lambda$) such that the following properties are satisfied. Denote by $\ell$ the rational prime lying below $\lambda$.
1. $\rho_\lambda(\pi) \cong \rho_\lambda(\pi)^{\vee} \otimes
\chi_{\ell}^{1-m} \rho_\lambda(\mu)$, where $\chi_{\ell}$ denotes the $\ell$-adic cyclotomic character.
2. The representations $\rho_\lambda(\pi)$ and $\rho_\lambda(\mu)$ are unramified outside $S \cup \{ \ell \}$.
3. Locally at $\ell$, the representations $\rho_\lambda(\pi)$ and $\rho_\lambda(\mu)$ are de Rham, and if $\ell \notin S$, they are crystalline.
4. $\rho_\lambda(\pi)$ is regular, with Hodge-Tate weights $\{ a_1 + (m-1), a_2 + (m-2), \ldots, a_m \}$.
5. \[item:4\] Fix any isomorphism $\iota:\overline{M}_\lambda\simeq {{\mathbb C}}$ compatible with the inclusion $M\subset {{\mathbb C}}$. Then $$\label{eq:star} \iota{\mathop{\mathrm{WD} }\nolimits}(\rho_{\lambda}(\pi)|_{G_{{{\mathbb Q}}_p}})^{\mathrm{F-ss}} \cong {\mathop{\mathrm{rec} }\nolimits}(\pi_p \otimes | \mathrm{Det} |_p^{(1-m)/2}).$$ Here ${\mathop{\mathrm{WD} }\nolimits}$ denotes the Weil-Deligne representation attached to a representation of $G_{{{\mathbb Q}}_p}$, $\mathrm{F}-\mathrm{ss}$ means the Frobenius semisimplification, and ${\mathop{\mathrm{rec} }\nolimits}$ is the notation for the (unitarily normalised) Local Langlands Correspondence.
The properties (1)–(5) above give us some information about the compatible system $(\rho_{\bullet}(\pi))$. If we want to realise groups in a given family of finite linear groups as Galois groups over ${{\mathbb Q}}$, we will need to find a suitable RAESDC automorphic representation such that the information provided by (1)–(5) allows us to ensure that the images of the corresponding residual representations $\overline{\rho}_{\lambda}(\pi)$ belong to this family. We can already make some remarks in this connection. For example, (1) implies that the image of $\rho_{\lambda}(\pi)$ lies in an orthogonal or symplectic group. (2) provides us with a strong control on the ramification of the Galois realisation that we obtain. This is a characteristic feature of this strategy of addressing the inverse Galois problem. (3) and (4) are of a technical nature, and we will not mention them in the rest of the survey (except briefly in connection to the proof of Theorem \[theorem:ArDiShWi\]). Instead, let us expand on the last property (5). Any $\pi$ as above can be written as a certain restricted product of local components $\pi_p$, where $p$ runs through the places of ${{\mathbb Q}}$. Equation \[eq:star\], despite its highly involved notation, is essentially telling us that this local component $\pi_p$ determines the restriction of $\rho_{\lambda}(\pi)$ to a decomposition group $G_p\subset G_{{{\mathbb Q}}}$ at the prime $p$. As we will see in the next section, the possibility of prescribing the restriction of $\rho_{\lambda}(\pi)$ to $G_p$ for a finite number of primes $p\not=\ell$ will be the essential ingredient for controlling the image of $\overline{\rho}_{\lambda}(\pi)$.
Main statements and ingredients of proof {#sec:Ingredients}
========================================
In this section we state several results obtained through the strategy described in Section \[sec:CompatibleSystems\], that generalise Theorems \[theorem:Wiese\] and \[theorem:DieulefaitWiese\] to $2n$-dimensional representations. The first result, due to C. Khare, M. Larsen and G. Savin (cf. [@KLS1]), can be encompassed in the vertical direction, as explained in Section \[sec:CompatibleSystems\].
\[theorem:KLS1\] Fix $n, t\in \mathbb{N}$ and a prime $\ell$. Then there exists a natural number $r$ divisible by $t$ such that either ${\mathrm{PSp}}_{2n}(\mathbb{F}_{\ell^r})$ or ${\mathrm{PGSp}}_{2n}(\mathbb{F}_{\ell^r})$ occurs as a Galois group over $\mathbb{Q}$.
More precisely, there exists an irreducible Galois representation $\rho_{\ell}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\overline{{{\mathbb Q}}}_{\ell})$, unramified outside $\ell$ and an auxiliary prime $q$, such that the image of $\overline{\rho}^{\mathrm{proj}}_{\ell}$ is either ${\mathrm{PSp}}_{2n}(\mathbb{F}_{\ell^r})$ or ${\mathrm{PGSp}}_{2n}(\mathbb{F}_{\ell^r})$.
We also want to mention the following result, dealing with different families of linear groups (cf. [@KLS2]).
\[theorem:KLS2\] Fix $t\in \mathbb{N}$ and a prime $\ell$.
1. There exists an integer $r$ divisible by $t$ such that $G_2(\mathbb{F}_{\ell^r})$ can be realised as a Galois group over ${{\mathbb Q}}$.
2. Assume that $\ell$ is odd. There exists an integer $r$ divisible by $t$ such that either the group $\mathrm{SO}_{2n+1}(\mathbb{F}_{\ell^r})^{\mathrm{der}}$ or $\mathrm{SO}_{2n+1}(\mathbb{F}_{\ell^r})$ can be realised as a Galois group over ${{\mathbb Q}}$.
3. Assume that $\ell\equiv 3, 5\pmod{8}$. There exists an integer $r$ divisible by $t$ such that the group $\mathrm{SO}_{2n+1}(\mathbb{F}_{\ell^r})^{\mathrm{der}}$ can be realised as a Galois group over ${{\mathbb Q}}$.
In the horizontal direction there is the following result for symplectic groups, due to S. A., L. Dieulefait, S.-W. Shin and G. Wiese (cf. [@ArDiShWi]).
\[theorem:ArDiShWi\] Fix $n, r\in \mathbb{N}$. There exists a set of rational primes of positive density such that, for every prime $\ell$ in this set, the group ${\mathrm{PSp}}_{2n}(\mathbb{F}_{\ell^r})$ or ${\mathrm{PGSp}}_{2n}(\mathbb{F}_{\ell^r})$ can be realised as a Galois group over ${{\mathbb Q}}$.
More precisely, there exists an irreducible Galois representation $\rho_{\ell}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\overline{{{\mathbb Q}}}_{\ell})$, unramified outside $\ell$ and two auxiliary primes, such that the image of $\overline{\rho}^{\mathrm{proj}}_{\ell}$ is either ${\mathrm{PSp}}_{2n}(\mathbb{F}_{\ell^r})$ or ${\mathrm{PGSp}}_{2n}(\mathbb{F}_{\ell^r})$.
Note that, in [@DiWi], the authors can control whether the image is ${\mathrm{PSL}}$ or ${\mathop{\mathrm{PGL} }\nolimits}$ because they choose their modular form in such a way that it does not have any nontrivial inner twist. Currently, this has not been generalised to $n>1$.
In both results, there are essentially two different parts: on the one hand, one needs to find conditions on a compatible system of symplectic Galois representations to ensure that the images of the residual representations corresponding to the members of the system will be huge. On the other hand, one needs to show the existence of RAESDC automorphic representations whose compatible systems satisfy the desired conditions. In [@KLS1], the existence of appropriate automorphic representations is shown by means of Poincaré series, which give automorphic representations on ${\mathrm{SO}}_{2n+1}(\mathbb{A}_{{{\mathbb Q}}})$. These are transferred to ${\mathop{\mathrm{GL} }\nolimits}_{2n}(\mathbb{A}_{{{\mathbb Q}}})$ by means of Langlands functoriality. In [@ArDiShWi], the existence of the desired automorphic representations is shown by exploiting results of S.-W. Shin on equidistribution of local components at a fixed prime in the unitary dual with respect to the Plancherel measure (cf. [@Shi12]).
In the rest of the section, we will expand on the first question, namely, on conditions on symplectic compatible systems that allow some control on the images of the residual representations corresponding to the members of the system. A first property of the image that we want to ensure is irreducibility. In both [@KLS1] and [@ArDiShWi], this is achieved by means of a tamely ramified symplectic local parameter. More precisely, fix a prime $\ell$, and let $p, q$ be auxiliary primes such that the order of $q$ modulo $p$ is exactly $2n$. Let ${{\mathbb Q}}_{q^{2n}}$ be the unique unramified extension of ${{\mathbb Q}}_q$ of degree $2n$. Using class field theory, it can be proven that there exists a character $\chi_q:G_{{{\mathbb Q}}_{q^{2n}}}\rightarrow \overline{{{\mathbb Q}}}_{\ell}^{\times}$ of order $2p$ such that (1) the restriction of $\chi_q$ to the inertia group $I_{{{\mathbb Q}}_{q^{2n}}}$ has order exactly $p$; (2) $\chi_q(\mathrm{Frob}_{q^{2n}})=-1$. Then it follows that the Galois representation $\rho_q:=\mathrm{Ind}_{G_{{{\mathbb Q}}_{q^{2n}}}}^{G_{{{\mathbb Q}}_q}}\chi_q$ is irreducible and can be conjugated to take values inside $\mathrm{Sp}_{2n}(\overline{{{\mathbb Q}}}_{\ell})$. As a consequence, we obtain the following result:
\[prop:irreducibility\] Let $(\rho_{\bullet})$ be a $2n$-dimensional compatible system of Galois representations of $G_{{{\mathbb Q}}}$ as in Definition \[defi:CompatibleSystem\]. Let $p$, $q$ be two primes such that the order of $q$ modulo $p$ is exactly $2n$. Let $G_q\subset G_{{{\mathbb Q}}}$ be a decomposition group at $q$, and assume that, for all primes $\lambda$ of $L$ which do not lie above $p$ or $q$, we have $$\mathrm{Res}^{G_{{\mathbb Q}}}_{G_{q}} \rho_{\lambda}\simeq \mathrm{Ind}_{G_{{{\mathbb Q}}_{q^{2n}}}}^{G_{{{\mathbb Q}}_q}} \chi_q,$$ where $\ell$ is the rational prime below $\lambda$ and $\chi_q:G_{{{\mathbb Q}}_{q^{2n}}}\rightarrow \overline{{{\mathbb Q}}}_{\ell}^{\times}$ is a character as above. Then $\overline{\rho}_{\lambda}$ is irreducible.
More precisely, the image of $\overline{\rho}_{\lambda}$ contains a so-called $(2n, p)$-group (cf. [@KLS1] for the definition of this notion). Given a prime $\ell$, if one chooses the auxiliary primes $p$ and $q$ in an appropriate way, it is possible to ensure that the image of $\overline{\rho}_{\lambda}$ is huge (i.e., it contains ${\mathop{\mathrm{Sp} }\nolimits}_{2n}(\mathbb{F}_{\ell})$). This idea appeared originally in the work of C. Khare and J.-P. Wintenberger on Serre’s Modularity Conjecture for $n=1$, and has been exploited in [@Wiese2008] and [@KLS1]. Let us briefly sketch how it works in the case when $n=1$. Assume that we have a representation $\overline{\rho}_{\lambda}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GL} }\nolimits}_2(\overline{\mathbb{F}}_{\ell})$, such that the restriction of $\overline{\rho}_{\lambda}$ to a decomposition group at $q$ is isomorphic to $\mathrm{Ind}_{G_{{{\mathbb Q}}_{q^{2n}}}}^{G_{{{\mathbb Q}}_q}} \overline{\chi}_q$. Consider the composition $\overline{\rho}_{\lambda}^{\mathrm{proj}}$ of $\overline{\rho}_{\lambda}$ with the projection ${\mathop{\mathrm{GL} }\nolimits}_2(\overline{\mathbb{F}}_{\ell})\rightarrow {\mathop{\mathrm{PGL} }\nolimits}_2(\overline{\mathbb{F}}_{\ell})$. We certainly know that the image of $\overline{\rho}_{\lambda}^{\mathrm{proj}}$ is a finite subgroup of ${\mathop{\mathrm{PGL} }\nolimits}_2(\overline{\mathbb{F}}_{\ell})$. L. E. Dickson has classified all finite subgroups of ${\mathop{\mathrm{PGL} }\nolimits}_2(\overline{\mathbb{F}}_{\ell})$ into four types of groups: a subgroup $H\subset {\mathop{\mathrm{PGL} }\nolimits}_2(\overline{\mathbb{F}}_{\ell})$ is either (1) equal to ${\mathrm{PSL}}_2(\mathbb{F}_{\ell^r})$ or ${\mathop{\mathrm{PGL} }\nolimits}_2(\mathbb{F}_{\ell^r})$ for some $r$; or (2) a reducible subgroup; or (3) a dihedral subgroup $D_s$ for some integer $s$ coprime to $\ell$; or (4) isomorphic to one of the alternating groups $A_4$, $A_5$ or the symmetric group $S_4$.
Since we know that the image of $\overline{\rho}_{\lambda}$ contains the subgroup $\overline{\rho}_{\lambda}(G_q)$, which is the dihedral group $D_p$, we can immediately exclude the possibilities (2) and (4) (provided $p$ is large enough so that it does not divide the cardinalities of $A_5$ and $S_4$). To conclude that the image of $\overline{\rho}_{\lambda}$ is huge, we have to exclude the case that it is a dihedral group. Assume then that this is the case. Then $\overline{\rho}_{\lambda}=\mathrm{Ind}_{G_K}^{G_{{{\mathbb Q}}}} \psi$ for some quadratic field extension $K/{{\mathbb Q}}$. In addition, we know that $\mathrm{Res}_{G_q}^{G_{{{\mathbb Q}}}}\overline{\rho}_{\lambda}\simeq \mathrm{Ind}_{G_{{{\mathbb Q}}_{q^{2n}}}}^{G_{{{\mathbb Q}}_q}} \overline{\chi}_q$. Is there a way to derive a contradiction? Yes, provided we choose the auxiliary primes $p$ and $q$ carefully: the two conditions above then become incompatible because of the relationship between $p$, $q$ and $\ell$. The reader interested in the details is referred to [@Wiese2008]. In order for this strategy to work, we must start from a prime $\ell$ and choose $p$ and $q$ accordingly. Thus, this idea is particularly well suited to address the vertical direction.
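The arithmetic condition on the auxiliary primes, namely that the order of $q$ modulo $p$ be exactly $2n$, is easy to test numerically. The following sketch (purely illustrative; the function names are ours, not taken from the references) searches for such a pair:

```python
def mult_order(q, p):
    """Multiplicative order of q modulo the prime p (assumes p does not divide q)."""
    k, x = 1, q % p
    while x != 1:
        x = (x * q) % p
        k += 1
    return k

def find_auxiliary_pair(n, primes):
    """First pair (p, q) of primes from the list with ord_p(q) == 2n."""
    for p in primes:
        for q in primes:
            if q % p != 0 and mult_order(q, p) == 2 * n:
                return p, q
    return None

# For n = 2 we need ord_p(q) = 4; the smallest example is q = 2 modulo p = 5,
# since the successive powers of 2 mod 5 are 2, 4, 3, 1.
print(find_auxiliary_pair(2, [2, 3, 5, 7, 11, 13]))  # (5, 2)
```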
In [@KLS1] this idea is generalised to the $2n$-dimensional setting. The first difficulty that arises is that the classification of finite subgroups of ${\mathop{\mathrm{GL} }\nolimits}_{2n}(\overline{{\mathbb{F}}}_{\ell})$ is much more intricate when $n>1$. The main group-theoretical tool that is used in [@KLS1] is a theorem from [@LarsenPink2011], which generalises a classic theorem of Jordan from characteristic zero to arbitrary characteristic. More precisely, let $m\in \mathbb{N}$. Then there exists a constant $J(m)$ such that, for any finite subgroup $\Gamma$ of ${\mathop{\mathrm{GL} }\nolimits}_m(\overline{{\mathbb{F}}}_{\ell})$, there exist normal subgroups $\Gamma_3\subset \Gamma_2\subset \Gamma_1\subset \Gamma$ such that the index $[\Gamma:\Gamma_1]\leq J(m)$, and such that $\Gamma_3$ is an $\ell$-group, $\Gamma_2/\Gamma_3$ is an abelian group (whose order is not divisible by $\ell$) and $\Gamma_1/\Gamma_2$ is a direct product of finite groups of Lie type in characteristic $\ell$.
Going back to the setting of Galois representations, the main idea now is that, if $\Gamma\subset {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\overline{{\mathbb{F}}}_{\ell})$ is a finite subgroup such that there is a $(2n, p)$-group contained in all normal subgroups of $\Gamma$ of index smaller than or equal to a constant $d(n)$ which depends only on $n$ (this constant will be computed in terms of the quantity $J(2n)$ mentioned above), then it follows that $\Gamma$ must contain ${\mathop{\mathrm{Sp} }\nolimits}_{2n}({\mathbb{F}}_{\ell})$. Given a prime number $\ell$, by choosing the auxiliary primes $p$ and $q$ in a suitable way, one can ensure that if $\mathrm{Res}_{G_q}^{G_{{{\mathbb Q}}}} \overline{\rho}_{\lambda}\simeq \mathrm{Ind}_{G_{{{\mathbb Q}}_{q^{2n}}}}^{G_{{{\mathbb Q}}_q}} \overline{\chi}_q$, then the group $\Gamma=\mathrm{im}\overline{\rho}_{\lambda}$ satisfies that $\overline{\rho}_{\lambda}(G_q)$ is a $(2n, p)$-group contained in all normal subgroups of $\Gamma$ of index at most $d(n)$.
Now we focus our attention on the horizontal direction. Recall that, in this setting, we are given a compatible system $(\rho_{\bullet})$, and we want the images of the members $\overline{\rho}_{\lambda}$ to be huge for as many primes $\lambda$ of $L$ as possible. In this context, the presence of a tamely ramified local parameter at an auxiliary prime $q$ will not suffice to obtain a huge image. Since the prime $\ell$ is now varying, we are not allowed to choose the auxiliary primes $p$ and $q$ in terms of $\ell$. A new idea is required.
When $n=1$, L. Dieulefait and G. Wiese construct Hecke eigenforms $f$ such that the compatible system of Galois representations $(\rho_{f, \bullet})$ attached to $f$ satisfies that, for all primes $\lambda$ of ${{\mathbb Q}}_f$, the image of $\overline{\rho}_{f, \lambda}$ is huge (cf. [@DiWi]). The idea is to choose $f$ in such a way that the corresponding compatible system has *two* tamely ramified parameters (at two different auxiliary primes), chosen in such a way that all possibilities for the image of $\overline{\rho}_{f, \lambda}^{\mathrm{proj}}$ given by Dickson’s classification (see above) except huge image are ruled out.
For the $2n$-dimensional case, however, we need a new ingredient. The main result in [@ArDiShWi] relies on a classification of finite subgroups of ${\mathop{\mathrm{GSp} }\nolimits}_{2n}(\overline{{\mathbb{F}}}_{\ell})$ containing a transvection. More precisely, the main result in [@ArDiWi2] shows that, if $\Gamma\subset {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\overline{{\mathbb{F}}}_{\ell})$ is a finite subgroup which contains a (nontrivial) transvection, then either (1) $\Gamma$ is a reducible subgroup; or (2) $\Gamma$ is imprimitive; or (3) $\Gamma$ is huge. The first possibility can be ruled out by introducing a tamely ramified parameter in the compatible system $(\rho_{\bullet})$. The imprimitive case corresponds to the situation when $\rho_{\lambda}$ is induced from some field extension $K/{{\mathbb Q}}$. To rule out this case, one needs to choose the auxiliary primes $p$ and $q$ in the tamely ramified parameter in a suitable way. If the compatible system is regular (in the sense that the tame inertia weights of $\overline{\rho}_{\lambda}$ are independent of $\lambda$ and different, cf. [@ArDiWi2] for a precise definition), then the second case in the classification can be ruled out, and the conclusion that the image of $\overline{\rho}_{\lambda}$ is huge can be drawn.
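A symplectic transvection is a map of the form $x\mapsto x+\lambda\,\langle v,x\rangle\, v$ for a fixed vector $v$, where $\langle\cdot,\cdot\rangle$ is the symplectic pairing; it fixes a hyperplane pointwise. As a sanity check (standard linear algebra, our illustration rather than anything from [@ArDiWi2]), one can verify numerically that such a matrix preserves the standard symplectic form $J$:

```python
def mat_mul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)] for i in range(m)]

def transpose(A):
    m = len(A)
    return [[A[j][i] for j in range(m)] for i in range(m)]

n = 2
m = 2 * n
I = [[int(i == j) for j in range(m)] for i in range(m)]

# Standard symplectic form J = [[0, Id], [-Id, 0]] in block form.
J = [[0] * m for _ in range(m)]
for i in range(n):
    J[i][n + i] = 1
    J[n + i][i] = -1

# Transvection T = Id + v (v^t J) with v = e_0 and lambda = 1:
# it adds a single off-diagonal entry, T[0][n] = 1.
v = [1] + [0] * (m - 1)
vJ = [sum(v[k] * J[k][j] for k in range(m)) for j in range(m)]
T = [[I[i][j] + v[i] * vJ[j] for j in range(m)] for i in range(m)]

# T preserves the symplectic form (T^t J T == J) and is a nontrivial element.
print(mat_mul(mat_mul(transpose(T), J), T) == J)  # True
print(T != I)  # True
```

The same identity holds with entries reduced modulo $\ell$, which is the setting of the classification above.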
The question remains whether it is possible to enforce a compatible system $(\rho_{\bullet})$ of Galois representations to satisfy, by means of a local condition, that the images of the residual representations $\overline{\rho}_{\lambda}$ contain a transvection. Recall that in Section \[sec:CompatibleSystems\], transvections already appeared in connection with the image of the Galois representation attached to the group of $\ell$-torsion points of an abelian variety $A$ defined over ${{\mathbb Q}}$. In this setting, to ensure that the image of $\overline{\rho}_{A, \ell}:G_{{{\mathbb Q}}}\rightarrow {\mathop{\mathrm{GSp} }\nolimits}_{2n}(\mathbb{{\mathbb{F}}}_{\ell})$ contains a transvection, C. Hall exploited the fact that, if $A$ has a certain type of reduction at an auxiliary prime $p_1$, then the image of the inertia group at $p_1$ under $\overline{\rho}_{A, \ell}$ already contains a transvection. In the case of $2n$-dimensional compatible systems of Galois representations, the transvection can be obtained by imposing that the restriction of $\rho_{\lambda}$ to a decomposition group at an auxiliary prime $p_1$ has a prescribed shape. Equivalently, this amounts to specifying the Weil-Deligne representation attached to the restriction of $\rho_{\lambda}$ to $G_{p_1}$. If the compatible system $(\rho_{\bullet}(\pi))$ is attached to a RAESDC automorphic representation $\pi$, this condition can be expressed in terms of $\pi$. Here it is very important that the local component $\pi_{p_1}$ of $\pi$ determines, via the Local Langlands correspondence, not only the characteristic polynomial of $\rho_{\lambda}(\mathrm{Frob}_{p_1})$ for $\lambda\nmid p_1$, but the whole restriction $\rho_{\lambda}(\pi)\vert_{G_{p_1}}$. Moreover, one has to take care that the transvection in the image of $\rho_{\lambda}(\pi)$ does not become trivial under reduction modulo $\lambda$. 
In [@ArDiShWi], the authors ensure that, for a density one set of rational primes $\ell$ and for every $\lambda\vert \ell$, the transvection is preserved after reduction modulo $\lambda$. The main tool they use is a level lowering result from [@BLGGT], which they apply over infinitely many quadratic CM number fields.
Up to this point, we have sketched the main ideas in [@Wiese2008], [@KLS1] and [@DiWi], [@ArDiShWi] to prove the existence of compatible systems of Galois representations $(\rho_{\bullet})$ such that the images of the residual representations $\overline{\rho}_{\lambda}$ are huge, i.e., containing ${\mathop{\mathrm{Sp} }\nolimits}_{2n}({\mathbb{F}}_{\ell})$. For the applications to the inverse Galois problem, we need a certain control of the largest exponent $r$ such that ${\mathop{\mathrm{Sp} }\nolimits}_{2n}({\mathbb{F}}_{\ell^r})$ is contained in the image of $\overline{\rho}_{\lambda}$. We already remarked in Section \[sec:Classical\] that, in the case of Galois representations attached to a Hecke eigenform $f$, this is linked to the knowledge of the coefficient field ${{\mathbb Q}}_f$, which proves to be a difficult task. However, even though it may be difficult to determine precisely what the coefficient field $L$ of the compatible system is, it is possible to ensure that it contains a large subfield. In fact, the tamely ramified parameter at the prime $q$ already provides a lower bound on the size of $r$. For the applications in the horizontal direction, one exploits the fact that if $L/{{\mathbb Q}}$ contains a cyclic subextension $K/{{\mathbb Q}}$ of degree $r$, then there exists a positive density set of primes $\ell$ such that some prime $\lambda$ of $L$ above $\ell$ has the desired residue degree $r$ in $L/{{\mathbb Q}}$.
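This density phenomenon is an instance of the Chebotarev density theorem. In the simplest cyclic case $K=\mathbb{Q}(i)$ (degree $r=2$), a rational prime $\ell$ has residue degree $2$ precisely when $\ell\equiv 3\pmod 4$, a set of density $1/2$. A quick numerical illustration (ours, not from the references):

```python
def primes_up_to(bound):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(bound ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(10000)
# A prime l is inert in Q(i), i.e. has residue degree 2, iff l = 3 mod 4.
inert = [l for l in primes if l % 4 == 3]
ratio = len(inert) / len(primes)
print(ratio)  # close to 0.5, as Chebotarev predicts
```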
[99]{}
Sara Arias-de-Reyna, Luis Dieulefait and Gabor Wiese. *Compatible systems of symplectic Galois representations and the inverse Galois problem I. Images of projective representations*. Preprint arXiv:1203.6546 (2013).
Sara Arias-de-Reyna, Luis Dieulefait and Gabor Wiese. *Compatible systems of symplectic Galois representations and the inverse Galois problem II. Transvections and huge image*. Preprint arXiv:1203.6552 (2013).
Sara Arias-de-Reyna, Luis Dieulefait, Sug Woo Shin and Gabor Wiese. *Compatible systems of symplectic Galois representations and the inverse Galois problem III. Automorphic construction of compatible systems with suitable local properties*. Preprint arXiv:1308.2192 (2013).
Thomas Barnet-Lamb, Toby Gee, David Geraghty and Richard Taylor. *Potential automorphy and change of weight*. Annals of Mathematics, to appear (2013).
Daniel Bump. *Automorphic forms and representations*. Cambridge Studies in Advanced Mathematics, **55**. Cambridge University Press, Cambridge, 1997.
John E. Cremona. *Algorithms for modular elliptic curves*. Second edition. Cambridge University Press, Cambridge, 1997.
Pierre Deligne. *Formes modulaires et représentations $\ell$-adiques*. Séminaire Bourbaki vol. 1968/69 Exposé 355, Lecture Notes in Mathematics Volume **179**, 1971.
Pierre Deligne and Jean-Pierre Serre. *Formes modulaires de poids 1*. Ann. Sci. École Norm. Sup. **7** (1974), pages 507–530.
Fred Diamond and Jerry Shurman. *A first course in modular forms*. Graduate Texts in Mathematics, **228**. Springer-Verlag, New York, 2005.
Luis Dieulefait and Gabor Wiese. *On modular forms and the inverse Galois problem*. Trans. Amer. Math. Soc. **363** (2011), no. 9, 4569–4584.
Chris Hall. *An open-image theorem for a general class of abelian varieties*. With an appendix by Emmanuel Kowalski. Bull. Lond. Math. Soc. **43** (2011), no. 4, 703–711.
Guy Henniart. *Représentations $\ell$-adiques abéliennes*. Seminar on Number Theory, Paris 1980-81 (Paris, 1980/1981), pp. 107–126, Progr. Math., **22**, Birkhäuser Boston, Boston, MA, 1982.
David Hilbert. *Ueber die Irreducibilität ganzer rationaler Functionen mit ganzzahligen Coefficienten*. J. Reine Angew. Math. **110** (1892), 104–129.
Chun Yin Hui. *Monodromy of Galois representations and equal-rank subalgebra equivalence*. Preprint arXiv:1204.5271 (2013).
Chun Yin Hui and Michael Larsen. *Type A Images of Galois Representations and Maximality*. Preprint arXiv:1305.1989 (2013).
Masatoshi Ikeda. *Completeness of the absolute Galois group of the rational number field*. J. Reine Angew. Math. **291** (1977), 1–22.
Moshe Jarden and Jürgen Ritter, *On the characterization of local fields by their absolute Galois groups*. J. Number Theory **11** (1979), no. 1, 1–13.
Chandrashekhar Khare, Michael Larsen and Gordan Savin. *Functoriality and the inverse Galois problem*. Compos. Math. **144** (2008), no. 3, 541–564.
Chandrashekhar Khare, Michael Larsen and Gordan Savin. *Functoriality and the inverse Galois problem. II. Groups of type $B_n$ and $G_2$*. Ann. Fac. Sci. Toulouse Math. (6) **19** (2010), no. 1, 37–70.
Michael J. Larsen. *Maximality of Galois actions for compatible systems*. Duke Math. J. **80** (1995), no. 3, 601–630.
Michael J. Larsen and Richard Pink. *On $\ell$-independence of algebraic monodromy groups in compatible systems of representations*. Invent. Math. **107** (1992), no. 3, 603–636.
Michael J. Larsen and Richard Pink. *Finite subgroups of algebraic groups*. J. Amer. Math. Soc. **24** (2011), no. 4, 1105–1158.
Jürgen Neukirch, Alexander Schmidt and Kay Wingberg. *Cohomology of number fields*. Second edition. Grundlehren der Mathematischen Wissenschaften, **323**. Springer-Verlag, Berlin, 2008.
Kenneth A. Ribet. *On $\ell$-adic representations attached to modular forms*. Invent. Math. **28** (1975), 245–275.
Kenneth A. Ribet. *On $\ell$-adic representations attached to modular forms. II*. Glasgow Math. J. **27** (1985), 185–194.
Peter Scholze. *On torsion in the cohomology of locally symmetric varieties*. Preprint arXiv:1306.2070 (2013).
Jean-Pierre Serre. *Représentations $\ell$-adiques*. Algebraic number theory (Kyoto Internat. Sympos., Res. Inst. Math. Sci., Univ. Kyoto, Kyoto, 1976), pp. 177–193. Japan Soc. Promotion Sci., Tokyo, 1977.
Jean-Pierre Serre. *Abelian $\ell$-adic representations and elliptic curves*. McGill University lecture notes written with the collaboration of Willem Kuyk and John Labute. W. A. Benjamin, Inc., New York-Amsterdam, 1968.
Jean-Pierre Serre. *Propriétés galoisiennes des points d’ordre fini des courbes elliptiques*. Invent. Math. **15** (1972), no. 4, 259–331.
Jean-Pierre Serre. *Lettre à Ken Ribet du 1/1/1981*. In *Oeuvres. Collected papers. IV. 1985–1998*. Springer-Verlag, Berlin, 2000.
Jean-Pierre Serre. *Résumé des cours 1984–1985*. In *Oeuvres. Collected papers. IV. 1985–1998*. Springer-Verlag, Berlin, 2000.
Jean-Pierre Serre. *Topics in Galois theory*. Second edition. Research Notes in Mathematics, 1. A K Peters, Ltd., Wellesley, MA, 2008.
Sug Woo Shin. *Automorphic Plancherel density theorem*. Israel J. Math. **192** (2012), no. 1, 83–120.
Kôji Uchida. *Isomorphisms of Galois groups*. J. Math. Soc. Japan **28** (1976), no. 4, 617–620.
Gabor Wiese. *On projective linear groups over finite fields as Galois groups over the rational numbers*. Modular forms on Schiermonnikoog, 343–350, Cambridge Univ. Press, Cambridge, 2008.
Gabor Wiese. *An Application of Maeda’s Conjecture to the Inverse Galois Problem*. Math. Res. Letters, to appear (2013).
Shuji Yamagata. *A counterexample for the local analogy of a theorem by Iwasawa and Uchida*. Proc. Japan Acad. **52** (1976), no. 6, 276–278.
[^1]: Université du Luxembourg, Faculté des Sciences, de la Technologie et de la Communication, 6, rue Richard Coudenhove-Kalergi, L-1359 Luxembourg, Luxembourg, sara.ariasdereyna@uni.lu
---
author:
- |
Raphael Bousso$^{a,b}$ and Leonard Susskind$^{c}$\
\
$^a$ Center for Theoretical Physics, Department of Physics\
University of California, Berkeley, CA 94720, U.S.A.\
$^b$ Lawrence Berkeley National Laboratory, Berkeley, CA 94720, U.S.A.\
$^c$ Stanford Institute for Theoretical Physics and\
Department of Physics, Stanford University, Stanford, CA 94305, U.S.A.
bibliography:
- 'all.bib'
title: The Multiverse Interpretation of Quantum Mechanics
---
Introduction {#sec-intro}
============
According to an older view of quantum mechanics, objective phenomena only occur when an observation is made and, as a result, the wave function collapses. A more modern view called decoherence considers the effects of an inaccessible environment that becomes entangled with the system of interest (including the observer). But at what point, precisely, do the virtual realities described by a quantum mechanical wave function turn into objective realities?
This question is not about philosophy. Without a precise form of decoherence, one cannot claim that anything really “happened”, including the specific outcomes of experiments. And without the ability to causally access an infinite number of precisely decohered outcomes, one cannot reliably verify the probabilistic predictions of a quantum-mechanical theory.
The purpose of this paper is to argue that these questions may be resolved by cosmology. We will offer some principles that we believe are necessary for a consistent interpretation of quantum mechanics, and we will argue that eternal inflation is the only cosmology which satisfies those principles. There are two views of an eternally inflating multiverse: global (or parallel) vs. local (or series). The parallel view is like looking at a tree and seeing all its branches and twigs simultaneously. The series view is what is seen by an insect climbing from the base of the tree to a particular twig along a specific route.
In both the many-worlds interpretation of quantum mechanics and the multiverse of eternal inflation the world is viewed as an unbounded collection of parallel universes. A view that has been expressed in the past by both of us is that there is no need to add an additional layer of parallelism to the multiverse in order to interpret quantum mechanics. To put it succinctly, the many-worlds and the multiverse are the same thing [@Sus05].
#### Decoherence
Decoherence[^1] explains why observers do not experience superpositions of macroscopically distinct quantum states, such as a superposition of an alive and a dead cat. The key insight is that macroscopic objects tend to quickly become entangled with a large number of “environmental” degrees of freedom, $E$, such as thermal photons. In practice these degrees of freedom cannot be monitored by the observer. Whenever a subsystem $E$ is not monitored, all expectation values behave as if the remaining system is in a density matrix obtained by a partial trace over the Hilbert space of $E$. The density matrix will be diagonal in a preferred basis determined by the nature of the interaction with the environment.
As an example, consider an isolated quantum system $S$ with a two-dimensional Hilbert space, in the general state $a |0\rangle_S + b |1\rangle_S$. Suppose a measurement takes place in a small spacetime region, which we may idealize as an event $M$. By this we mean that at $M$, the system $S$ interacts and becomes correlated with the pointer of an apparatus $A$: $$(a |0\rangle_S + b |1\rangle_S)\otimes |0\rangle_A \to
a\, |0 \rangle _S\otimes |0 \rangle _A + b\, |1 \rangle _S\otimes |1 \rangle _A~,
\label{eq-prem}$$ This process is unitary and is referred to as a pre-measurement.
We assume that the apparatus is not a closed system. (This is certainly the case in practice for a macroscopic apparatus.) Thus, shortly afterwards (still at $M$ in our idealization), environmental degrees of freedom $E$ scatter off of the apparatus and become entangled with it. By unitarity, the system $SAE$ as a whole remains in a pure quantum state,[^2] $$|\psi \rangle = a\, |0 \rangle _S\otimes |0 \rangle _A \otimes |0 \rangle _E + b\, |1 \rangle _S\otimes |1 \rangle _A \otimes |1 \rangle _E ~.
\label{eq-pure}$$ We assume that the observer does not monitor the environment; therefore, he will describe the state of $SA$ by a density matrix obtained by a partial trace over the Hilbert space factor representing the environment: $$\rho_{SA}={\rm Tr}_E\, |\psi \rangle \langle \psi| ~.$$ This matrix is diagonal in the basis $\{ |0\rangle_S \otimes |0\rangle_A, |0\rangle_S \otimes |1\rangle_A, |1\rangle_S \otimes |0\rangle_A, |1\rangle_S \otimes |1\rangle_A\}$ of the Hilbert space of $SA$: $$\rho_{SA}=\mathbf{diag}(|a|^2,0,0,|b|^2)~.$$ This corresponds to a classical ensemble in which the pure state $|0\rangle_S \otimes |0\rangle_A$ has probability $|a|^2$ and the state $ |1\rangle_S \otimes |1\rangle_A$ has probability $|b|^2$.
Decoherence explains the “collapse of the wave function” of the Copenhagen interpretation as the non-unitary evolution from a pure to a mixed state, resulting from ignorance about an entangled subsystem $E$. It also explains the very special quantum states of macroscopic objects we experience, as the elements of the basis in which the density matrix $\rho_{SA}$ is diagonal. This preferred basis is picked out by the apparatus configurations that scatter the environment into orthogonal states. Because interactions are usually local in space, $\rho_{SA}$ will be diagonal with respect to a basis consisting of approximate position space eigenstates. This explains why we perceive apparatus states $|0 \rangle_A$ (pointer up) or $|1 \rangle_A$ (pointer down), but never the equally valid basis states $|\pm\rangle_A \equiv 2^{-1/2}(|0\rangle_A \pm |1\rangle_A)$, which would correspond to superpositions of different pointer positions.
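As a simple check of why the pointer basis is special, one can evaluate the reduced density matrix in the $|\pm\rangle$ basis; the following short computation (ours, suppressing the system $S$ for brevity) shows that it fails to be diagonal there:

```latex
% Reduced apparatus density matrix, diagonal in the pointer basis:
%   \rho_A = |a|^2 |0\rangle\langle 0| + |b|^2 |1\rangle\langle 1|
% Matrix elements in the basis |\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}:
\begin{align}
  \langle \pm |\, \rho_A \,| \pm \rangle &= \tfrac{1}{2}\left(|a|^2 + |b|^2\right) = \tfrac{1}{2}~,\\
  \langle \pm |\, \rho_A \,| \mp \rangle &= \tfrac{1}{2}\left(|a|^2 - |b|^2\right)~.
\end{align}
```

Unless $|a|=|b|$, the off-diagonal elements are nonzero, so $\rho_A$ does not admit an ignorance interpretation as a classical ensemble of the states $|\pm\rangle_A$; only in the basis selected by the environmental interaction does the mixed state describe definite outcomes.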
The entangled state obtained after pre-measurement, Eq. (\[eq-prem\]), is a superposition of two unentangled pure states or “branches”. In each branch, the observer sees a definite outcome: $|0\rangle $ or $|1\rangle$. This in itself does not explain, however, why a definite outcome is seen with respect to the basis $\{|0\rangle,|1\rangle\}$ rather than $\{|+\rangle, |-\rangle\}$. Because the decomposition of Eq. (\[eq-prem\]) is not unique [@Zur81], the interaction with an inaccessible environment and the resulting density matrix are essential to the selection of a preferred basis of macroscopic states.
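The non-uniqueness of the decomposition can be made explicit. Rewriting the pre-measured state of Eq. (\[eq-prem\]) in the $|\pm\rangle$ basis (a standard exercise, reproduced here for convenience) gives

```latex
% Insert |0\rangle = (|+\rangle + |-\rangle)/\sqrt{2},
%        |1\rangle = (|+\rangle - |-\rangle)/\sqrt{2}
% into  a |0\rangle_S |0\rangle_A + b |1\rangle_S |1\rangle_A :
\begin{equation}
  a\,|0\rangle_S |0\rangle_A + b\,|1\rangle_S |1\rangle_A
  = \tfrac{a+b}{2}\left(|{+}\rangle_S |{+}\rangle_A + |{-}\rangle_S |{-}\rangle_A\right)
  + \tfrac{a-b}{2}\left(|{+}\rangle_S |{-}\rangle_A + |{-}\rangle_S |{+}\rangle_A\right)~.
\end{equation}
```

For $a=b=2^{-1/2}$ the second bracket vanishes and the state is perfectly correlated in the $|\pm\rangle$ basis as well: nothing in the pure state of $SA$ alone singles out $\{|0\rangle,|1\rangle\}$; the environmental interaction breaks the tie.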
Decoherence has two important limitations: it is subjective, and it is in principle reversible. This is a problem if we rely on decoherence for precise tests of quantum mechanical predictions. We argue in Sec. \[sec-nonhat\] that causal diamonds provide a natural definition of environment in the multiverse, leading to an observer-independent notion of decoherent histories. In Sec. \[sec-hat\] we argue that these histories have precise, irreversible counterparts in the “hat”-regions of the multiverse. We now give a more detailed overview of this paper.
#### Outline
In Sec. \[sec-nonhat\] we address the first limitation of decoherence, its subjectivity. Because coherence is never lost in the full Hilbert space $SAE$, the speed, extent, and possible outcomes of decoherence depend on the definition of the environment $E$. This choice is made implicitly by an observer, based on practical circumstances: the environment consists of degrees of freedom that have become entangled with the system and apparatus but remain unobserved. It is impractical, for example, to keep track of every thermal photon emitted by a table, of all of its interactions with light and air particles, and so on. But if we did, then we would find that the entire system $SAE$ behaves as a pure state $|\psi \rangle$, which may be a “cat state” involving the superposition of macroscopically different matter configurations. Decoherence thus arises from the description of the world by an observer who has access only to a subsystem. To the extent that the environment is defined by what a given observer cannot measure in practice, decoherence is subjective.
The subjectivity of decoherence is not a problem as long as we are content to explain our own experience, i.e., that of an observer immersed in a much larger system. But the lack of any environment implies that decoherence cannot occur in a complete unitary description of the whole universe. It is possible that no such description exists for our universe. In Sec. \[sec-future\] we will argue, however, that causality places restrictions on decoherence in much smaller regions, in which the applicability of unitary quantum-mechanical evolution seems beyond doubt.
In Sec. \[sec-global\], we apply our analysis of decoherence and causality to eternal inflation. We will obtain a straightforward but perhaps surprising consequence: in a global description of an eternally inflating spacetime, decoherence cannot occur; so it is inconsistent to imagine that pocket universes or vacuum bubbles nucleate at particular locations and times. In Sec. \[sec-simplicio\], we discuss a number of attempts to rescue a unitary global description and conclude that they do not succeed.
In Sec. \[sec-patch\], we review the “causal diamond” description of the multiverse. The causal diamond is the largest spacetime region that can be causally probed, and it can be thought of as the past light-cone from a point on the future conformal boundary. We argue that the causal diamond description leads to a natural, observer-independent choice of environment: because its boundary is light-like, it acts as a one-way membrane and degrees of freedom that leave the diamond do not return except in very special cases. These degrees of freedom can be traced over, leading to a branching tree of causal diamond histories.
Next, we turn to the question of whether the global picture of the multiverse can be recovered from the decoherent causal diamonds. In Sec. \[sec-dual\], we review a known duality between the causal diamond and a particular foliation of the global geometry known as light-cone time: both give the same probabilities. This duality took the standard global picture as a starting point, but in Sec. \[sec-everett\], we reinterpret it as a way of reconstructing the global viewpoint from the local one. If the causal diamond histories are the many-worlds, this construction shows that the multiverse is the many-worlds pieced together in a single geometry.
In Sec. \[sec-hat\] we turn to the second limitation associated with decoherence, its reversibility. Consider a causal diamond with finite maximal boundary area $A_{\rm max}$. Entropy bounds imply that such diamonds can be described by a Hilbert space with finite dimension no greater than $\exp(A_{\rm max}/2)$ [@CEB2; @Bou00a].[^3] This means that no observables in such diamonds can be defined with infinite precision. In Sec. \[sec-reverse\] and \[sec-limitation\], we will discuss another implication of this finiteness: there is a tiny but nonzero probability that decoherence will be undone. This means that the decoherent histories of causal diamonds, and the reconstruction of a global spacetime from such diamonds, is not completely exact.
No matter how good an approximation is, it is important to understand the precise statement that it is an approximation to. In Sec. \[sec-sagredo\], we will develop two postulates that should be satisfied by a fundamental quantum-mechanical theory if decoherence is to be sharp and the associated probabilities operationally meaningful: decoherence must be irreversible, and it must occur infinitely many times for a given experiment in a single causally connected region.
The string landscape contains supersymmetric vacua with exactly vanishing cosmological constant. Causal diamonds which enter such vacua have infinite boundary area at late times. We argue in Sec. \[sec-hats\] that in these “hat” regions, all our postulates can be satisfied. Exact observables can exist and decoherence by the mechanism of Sec. \[sec-patch\] can be truly irreversible. Moreover, because the hat is a spatially open, statistically homogeneous universe, anything that happens in the hat will happen infinitely many times.
In Sec. \[sec-complementarity\] we review black hole complementarity, and we conjecture an analogous “hat complementarity” for the multiverse. It ensures that the approximate observables and approximate decoherence of causal diamonds with finite area (Sec. \[sec-patch\]) have precise counterparts in the hat. In Sec. \[sec-ct\] we propose a relation between the global multiverse reconstruction of Sec. \[sec-everett\], and the Census Taker cutoff [@Sus07] on the hat geometry.
Two interesting papers have recently explored relations between the many-worlds interpretation and the multiverse [@AguTeg10; @Nom11]. The present work differs substantially in a number of aspects. Among them is the notion that causal diamonds provide a preferred environment for decoherence, our view of the global multiverse as a patchwork of decoherent causal diamonds, our postulates requiring irreversible entanglement and infinite repetition, and the associated role we ascribe to hat regions of the multiverse.
Building the multiverse from the many worlds of causal diamonds {#sec-nonhat}
===============================================================
Decoherence and causality {#sec-future}
-------------------------
The decoherence mechanism reviewed above relies on ignoring the degrees of freedom that a given observer fails to monitor, which is fine if our goal is to explain the experiences of that observer. But this subjective viewpoint clashes with the impersonal, unitary description of large spacetime regions—the viewpoint usually adopted in cosmology. We are free, of course, to pick any subsystem and trace over it. But the outcome will depend on this choice. The usual choices implicitly involve locality but not in a unique way.
For example, we might choose $S$ to be an electron and $E$ to be the inanimate laboratory. The system’s wave function collapses when the electron becomes entangled with some detector. But we may also include in $S$ everything out to the edge of the solar system. The environment is then whatever lies beyond the orbit of Pluto. In that case the collapse of the system wavefunction cannot take place until a photon from the detector has passed Pluto’s orbit. This would take about five hours, during which the system wavefunction remains coherent.
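The five-hour figure is just the light-travel time from the detector to Pluto’s orbit (a rough estimate; we take the orbital radius to be about $40\,{\rm AU}$):

```latex
% Light-travel time to the edge of the solar system (order of magnitude):
\begin{equation}
  t \;\sim\; \frac{d_{\rm Pluto}}{c}
  \;\approx\; \frac{40 \times 1.5\times 10^{11}\,{\rm m}}{3.0\times 10^{8}\,{\rm m\,s^{-1}}}
  \;\approx\; 2\times 10^{4}\,{\rm s}
  \;\approx\; 5.5~\mbox{hours}~.
\end{equation}
```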
In particular, [*decoherence cannot occur in the complete quantum description of any region larger than the future light-cone of the measurement event $M$*]{} (Fig. \[fig-futconedec\]). All environmental degrees of freedom that could have become entangled with the apparatus since the measurement took place must lie within this lightcone and hence are included, not traced over, in a complete description of the state. An example of such a region is the whole universe, i.e., any Cauchy surface to the future of $M$. But at least at sufficiently early times, the future light-cone of $M$ will be much smaller than the whole universe. Already on this scale, the system $SAE$ will be coherent.
In our earlier example, suppose that we measure the spin of an electron that is initially prepared in a superposition of spin-up and spin-down, $a |0\rangle_S + b |1\rangle_S$, resulting in the state $|\psi \rangle$ of Eq. (\[eq-pure\]). A complete description of the solar system (defined as the interior of a sphere the size of Pluto’s orbit, with a light-crossing time of about 10 hours) by a local quantum field theory contains every particle that could possibly have interacted with the apparatus after the measurement, for about 5 hours. This description would maintain the coherence of the macroscopic superpositions implicit in the state $|\psi \rangle$, such as apparatus-up with apparatus-down, until the first photons that are entangled with the apparatus leave the solar system.
Of course, a detailed knowledge of the quantum state in such large regions is unavailable to a realistic observer. (Indeed, if the region is larger than a cosmological event horizon, then its quantum state cannot be probed at all without violating causality.) Yet, our theoretical description of matter fields in spacetime retains, in principle, all degrees of freedom and full coherence of the quantum state. In theoretical cosmology, this can lead to inconsistencies, if we describe regions that are larger than the future light-cones of events that we nevertheless treat as decohered. We now consider an important example.
Failure to decohere: A problem with the global multiverse {#sec-global}
---------------------------------------------------------
The above analysis undermines what we will call the “standard global picture” of an eternally inflating spacetime. Consider an effective potential containing at least one inflating false vacuum, i.e., a metastable de Sitter vacuum with decay rate much less than one decay per Hubble volume and Hubble time. We will also assume that there is at least one terminal vacuum, with nonpositive cosmological constant. (The string theory landscape is believed to have at least $10^{100}$’s of vacua of both types [@BP; @KKLT; @Sus03; @DenDou04b].)
According to the standard description of eternal inflation, an inflating vacuum nucleates bubble-universes in a statistical manner similar to the way superheated water nucleates bubbles of steam. The process is stochastic: bubbles form randomly, but the randomness is classical. They nucleate at definite locations and times, and coherent quantum-mechanical interference plays no role. The conventional description of eternal inflation is thus based on classical stochastic processes. However, this picture is not consistent with a complete quantum-mechanical description of a global region of the multiverse.
To explain why this is so, consider the future domain of dependence $D(\Sigma_0)$ of a sufficiently large hypersurface $\Sigma_0$, which need not be a Cauchy surface. $D(\Sigma_0)$ consists of all events that can be predicted from data on $\Sigma_0$; see Fig. \[fig-dsigma\]. If $\Sigma_0$ contains sufficiently large and long-lived metastable de Sitter regions, then bubbles of vacua of lower energy do not consume the parent de Sitter vacua in which they nucleate [@GutWei83]. Hence, the de Sitter vacua are said to inflate eternally, producing an unbounded number of bubble universes. The resulting spacetime is said to have the structure shown in the conformal diagram in Fig. \[fig-dsigma\], with bubbles nucleating at definite spacetime events. The future conformal boundary is spacelike in regions with negative cosmological constant, corresponding to a local big crunch. The boundary contains null “hats” in regions occupied by vacua with $\Lambda=0$.
But this picture does not arise in a complete quantum description of $D(\Sigma_0)$. The future light-cones of events at late times are much smaller than $D(\Sigma_0)$. In any state that describes the entire spacetime region $D(\Sigma_0)$, decoherence can only take place at the margin of $D(\Sigma_0)$ (shown light shaded in Fig. \[fig-dsigma\]), in the region from which particles can escape into the complement of $D(\Sigma_0)$ in the full spacetime. No decoherence can take place in the infinite spacetime region defined by the past domain of dependence of the future boundary of $D(\Sigma_0)$. In this region, quantum evolution remains coherent even if it results in the superposition of macroscopically distinct matter or spacetime configurations.
An important example is the superposition of vacuum decays taking place at different places. Without decoherence, it makes no sense to say that bubbles nucleate at particular times and locations; rather, a wavefunction with initial support only in the parent vacuum develops into a superposition of parent and daughter vacua. Bubbles nucleating at all places and times are “quantum superimposed”. With the gravitational backreaction included, the metric, too, would remain in a quantum-mechanical superposition. This contradicts the standard global picture of eternal inflation, in which domain walls, vacua, and the spacetime metric take on definite values, as if drawn from a density matrix obtained by tracing over some degrees of freedom, and as if the interaction with these degrees of freedom had picked out a preferred basis that eliminates the quantum superposition of bubbles and vacua.
Let us quickly get rid of one red herring: Can the standard geometry of eternal inflation be recovered by using so-called semi-classical gravity in which the metric is sourced by the expectation value of the energy-momentum tensor, $$G_{\mu\nu}=8\pi \langle T_{\mu\nu} \rangle~?
\label{eq-scg}$$ This does not work because the matter quantum fields would still remain coherent. At the level of the quantum fields, the wavefunction initially has support only in the false vacuum. Over time it evolves to a superposition of the false vacuum (with decreasing amplitude) and the true vacuum (with increasing amplitude), plus a superposition of expanding and colliding domain walls. This state is quite complicated, but the expectation value of its stress tensor should remain spatially homogeneous if it was so initially. The net effect, over time, would be a continuous conversion of vacuum energy into ordinary matter or radiation (from the collision of bubbles and motion of the scalar field). By Eq. (\[eq-scg\]), the spacetime geometry would respond to the homogeneous glide of the vacuum energy to negative values. This would result in a global crunch after finite time, in stark contrast to the standard picture of global eternal inflation. In any case, it seems implausible that semi-classical gravity should apply in a situation in which coherent branches of the wavefunction have radically different gravitational back-reaction. The AdS/CFT correspondence provides an explicit counterexample, since the superposition of two CFT states that correspond to different classical geometries must correspond to a quantum superposition of the two metrics.
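Schematically (our notation; interference terms between macroscopically distinct branches are dropped, since such branches are very nearly orthogonal), the source in Eq. (\[eq-scg\]) evolves as

```latex
% Decaying false vacuum F plus growing bubble branches i:
\begin{equation}
  |\psi(t)\rangle = c_F(t)\,|F\rangle + \sum_i c_i(t)\,|{\rm bubbles}_i\rangle~,
  \qquad |c_F(t)|^2 \to 0~,
\end{equation}
\begin{equation}
  \langle T_{\mu\nu}\rangle \;\approx\; |c_F(t)|^2\, T^{(F)}_{\mu\nu}
  + \sum_i |c_i(t)|^2\, T^{(i)}_{\mu\nu}~.
\end{equation}
```

Each individual branch is wildly inhomogeneous, but the weighted sum over branches is statistically homogeneous, so $\langle T_{\mu\nu}\rangle$ describes a spatially uniform vacuum energy gliding smoothly to lower values, the behavior responsible for the global crunch described above.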
The conclusion that we come to from these considerations is not that the global multiverse is meaningless, but that the parallel view should not be implemented by unitary quantum mechanics. But is there an alternative? Can the standard global picture be recovered by considering an observer who has access only to some of the degrees of freedom of the multiverse, and appealing to decoherence? We debate this question in the following section.
Simplicio’s proposal {#sec-simplicio}
--------------------
[*Simplicio and Sagredo have studied Sections \[sec-future\] and \[sec-global\], supplied to them by Salviati. They meet at Sagredo’s house for a discussion.*]{}
[Simplicio:]{} You have convinced me that a complete description of eternal inflation by unitary quantum evolution on global slices will not lead to a picture in which bubbles form at definite places and times. But all I need is an observer somewhere! Then I can take this observer’s point of view and trace over the degrees of freedom that are inaccessible to him. This decoheres events, such as bubble nucleations, in the entire global multiverse. It actually helps that some regions are causally disconnected from the observer: this makes his environment—the degrees of freedom he fails to access—really huge.
[Sagredo:]{} An interesting idea. But you seem to include everything outside the observer’s horizon region in what you call the environment. Once you trace over it, it is gone from your description and you could not possibly recover a global spacetime.
[Simplicio:]{} Your objection is valid, but it also shows me how to fix my proposal. The observer should only trace over environmental degrees of freedom in his own horizon. Decoherence is very efficient, so this should suffice.
[Sagredo:]{} I wonder what would happen if there were two observers in widely separated regions. If one observer’s environment is enough to decohere the whole universe, which one should we pick?
[Simplicio:]{} I have not done a calculation but it seems to me that it shouldn’t matter. The outcome of an experiment by one of the observers should be the same, no matter which observer’s environment I trace over. That is certainly how it works when you and I both stare at the same apparatus.
[Sagredo:]{} Something is different about the multiverse. When you and I both observe Salviati, we all become correlated by interactions with a common environment. But how does an observer in one horizon volume become correlated with an object in another horizon volume far away?
[Salviati:]{} Sagredo, you hit the nail on the head. Decoherence requires the interaction of environmental degrees of freedom with the apparatus and the observer. This entangles them, and it leads to a density matrix once the environment is ignored by the observer. But an observer cannot have interacted with degrees of freedom that were never present in his past light-cone.
[Sagredo:]{} Thank you for articulating so clearly what to me was only a vague concern. Simplicio, you look puzzled, so let me summarize our objection in my own words. You proposed a method for obtaining the standard global picture of eternal inflation: you claim that we need only identify an arbitrary observer in the multiverse and trace over his environment. If we defined the environment as [*all*]{} degrees of freedom the observer fails to monitor, then it would include the causally disconnected regions outside his horizon. With this definition, these regions will disappear entirely from your description, in conflict with the global picture. So we agreed to define the environment as the degrees of freedom that have interacted with the observer and which he cannot access in practice. But in this case, the environment includes [*no*]{} degrees of freedom outside the causal future of the observer’s causal past. I have drawn this region in Fig. \[fig-sagredo\]. But tracing over an environment can only decohere degrees of freedom that it is entangled with. In this case, it can decohere some events that lie in the observer’s past light-cone. But it cannot affect quantum coherence in far-away horizon regions, because the environment you have picked is not entangled with these regions. In those regions, bubble walls and vacua will remain in superposition, which again conflicts with the standard global picture of eternal inflation.
[Simplicio:]{} I see that my idea still has some problems. I will need to identify more than one observer-environment pair. In fact, if I wish to preserve the global picture of the multiverse, I will have to assume that an observer is present in every horizon volume, at all times! Otherwise, there will be horizon regions where no one is around to decide which degrees of freedom are hard to keep track of, so there is no way to identify and trace over an environment. In such regions, bubbles would not form at particular places and times, in conflict with the standard global picture.
[Sagredo:]{} But this assumption is clearly violated in many landscape models. Most de Sitter vacua have large cosmological constant, so that a single horizon volume is too small to contain the large number of degrees of freedom required for an observer. And regions with small vacuum energy may be very long lived, so the corresponding bubbles contain many horizon volumes that are completely empty. I’m afraid, Simplicio, that your efforts to rescue the global multiverse are destined to fail.
[Salviati:]{} Why don’t we back up a little and return to Simplicio’s initial suggestion. Sagredo, you objected that everything outside an observer’s horizon would naturally be part of his environment and would be gone from our description if we trace over it...
[Sagredo:]{} ...which means that the whole global description would be gone...
[Salviati:]{} ...but why is that a problem? No observer inside the universe can ever see more than what is in their past light-cone at late times, or more precisely, in their causal diamond. We may not be able to recover the global picture by tracing over the region behind an observer’s horizon, but the same procedure might well achieve decoherence in the region the observer can actually access. In fact, we don’t even need an actual observer: we can get decoherence by tracing over degrees of freedom that leave the causal horizon of any worldline! This will allow us to say that a bubble formed in one place and not another. So why don’t we give up on the global description for a moment. Later on, we can check whether a global picture can be recovered in some way from the decoherent causal diamonds.
[*Salviati hands out Sections \[sec-patch\]–\[sec-everett\].*]{}
Objective decoherence from the causal diamond {#sec-patch}
---------------------------------------------
If Hawking radiation contains the full information about the quantum state of a star that collapsed to form a black hole, then there is an apparent paradox. The star is located inside the black hole at spacelike separation from the Hawking cloud; hence, two copies of the original quantum information are present simultaneously. The xeroxing of quantum information, however, conflicts with the linearity of quantum mechanics [@WooZur82]. The paradox is resolved by “black hole complementarity” [@SusTho93]. By causality, no observer can see both copies of the information. A theory of everything should be able to describe any experiment that can actually be performed by some observer in the universe, but it need not describe the global viewpoint of a “superobserver”, who sees both the interior and the exterior of a black hole. Evidently, the global description is inconsistent and must be rejected.
If the global viewpoint fails in a black hole geometry, then it must be abandoned in any spacetime. Hence, it is useful to characterize generally what spacetime regions can be causally probed. An experiment beginning at a spacetime event $p$ and ending at the event $q$ in the future of $p$ can probe the [*causal diamond*]{} $I^+(p)\cap I^-(q)$ (Fig. \[fig-cdpq\]). By starting earlier or finishing later, the causal diamond can be enlarged. In spacetimes with a spacelike future boundary, such as black holes and many cosmological solutions, the global universe is much larger than any causal diamond it contains. Here we will be interested in diamonds that are as large as possible, in the sense that $p$ and $q$ correspond to the past and future endpoints of an inextendible worldline.
We will now argue that the causal diamond can play a useful role in making decoherence more objective. Our discussion will be completely general, though for concreteness it can be useful to think of causal diamonds in a landscape, which start in a de Sitter vacuum and end up, after a number of decays, in a crunching $\Lambda<0$ vacuum.
Consider a causal diamond, $C$, with future boundary $B$ and past boundary $\tilde B$, as shown in Fig. \[fig-mirror\]. For simplicity, suppose that the initial state on $\tilde B$ is pure. Matter degrees of freedom that leave the diamond by crossing $B$ become inaccessible to any experiment within $C$, by causality. Therefore they [*must*]{} be traced over.
In practice, there will be many other degrees of freedom that an observer fails to control, including most degrees of freedom that have exited his past light-cone at any finite time along his worldline. But such degrees of freedom can be reflected by mirrors, or in some other way change their direction of motion back towards the observer (Fig. \[fig-mirror\]). Thus, at least in principle, the observer could later be brought into contact again with any degrees of freedom that remain within the causal diamond $C$, restoring coherence. Also, the observer at finite time has not had an opportunity to observe degrees of freedom coming from the portion outside his past lightcone on $\tilde B$; but those he might observe by waiting longer. Hence, we will be interested only in degrees of freedom that leave $C$ by crossing the boundary $B$.
The boundary $B$ may contain components that are the event horizons of black holes. If black hole evaporation is unitary, then such degrees of freedom will be returned to the interior of the causal diamond in the form of Hawking radiation. We can treat this formally by replacing the black hole with a membrane that contains the relevant degrees of freedom at the stretched horizon and releases them as it shrinks to zero size [@SusTho93]. However, we insist that degrees of freedom crossing the outermost component of $B$ (which corresponds to the event horizon in de Sitter universes) are traced over. It does not matter for this purpose whether we regard these degrees of freedom as being absorbed by the boundary or as crossing through the boundary, as long as we assume that they are inaccessible to any experiment performed within $C$. This assumption seems reasonable, since there is no compelling argument that the unitary evaporation of black holes should extend to cosmological event horizons. Indeed, it is unclear how the statement of unitarity would be formulated in that context. (A contrary viewpoint, which ascribes unitarity even to non-Killing horizons, is explored in Ref. [@Sus07].)
The boundary $B$ is a null hypersurface. Consider a cross-section $\beta$ of $B$, i.e., a spacelike two-dimensional surface that divides $B$ into two portions: the upper portion, $B_+$, which contains the tip of the causal diamond, and the lower portion $B_-$. We may trace over degrees of freedom on $B_-$; this corresponds to the matter that has left the causal diamond by the time $\beta$ and hence has become inaccessible from within the diamond. Thus we obtain a density matrix $\rho(\beta)$ on the portion $B_+$. Assuming unitary evolution of closed systems, the same density matrix also determines the state on any spacelike surface bounded by $\beta$; and it determines the state on the portion of the boundary of the past of $\beta$ that lies within $C$, $\gamma$. Note that $\gamma$ is a null hypersurface. In fact, $\gamma$ can be chosen to be a future lightcone from an event inside $C$ (more precisely, the portion of that light-cone that lies within $C$); the intersection of $\gamma$ with $B$ then defines $\beta$.
A useful way of thinking about $\rho(\beta)$ is as follows. The boundary of the causal past of $\beta$ consists of two portions, $\gamma$ and $B_-$. The degrees of freedom that cross $B_-$ are analogous to the environment in the usual discussion of decoherence, except that they are inaccessible from within the causal diamond $C$ not just in practice but in principle. The remaining degrees of freedom in the past of $\beta$ cross through $\gamma$ and thus stay inside the causal diamond. They are analogous to the system and apparatus, which are now in one of the states represented in the density matrix $\rho(\beta)$. A measurement is an interaction between degrees of freedom that later pass through $\gamma$ and degrees of freedom that later pass through $B_-$. The basis in which $\rho(\beta)$ is diagonal consists of the different pure states that could result from the outcome of measurements in the causal past of $\beta$.
We can now go further and consider foliations of the boundary $B$. Each member of the foliation is a two-dimensional spatial surface $\beta$ dividing the boundary into portions $B_+$ and $B_-$. We can regard $\beta$ as a time variable. For example, any foliation of the causal diamond $C$ into three-dimensional spacelike hypersurfaces of equal time $\beta$ will induce a foliation of the boundary $B$ into two-dimensional spacelike surfaces. Another example, on which we will focus, is shown in Fig. \[fig-beta\]: consider an arbitrary time-like worldline that ends at the tip of the causal diamond. Now construct the future light-cone from every point on the worldline. This will induce a foliation of $B$ into slices $\beta$. It is convenient to identify $\beta$ with the proper time along the worldline.
The sequence of density matrices $\rho(\beta_1), \rho(\beta_2),\ldots, \rho(\beta_n)$ describes a branching tree, in which any path from the root to one of the final twigs represents a particular history of the entire causal diamond, coarse-grained on the appropriate timescale. These histories are “minimally decoherent” in the sense that the only degrees of freedom that are traced over are those that cannot be accessed even in principle. In practice, an observer at the time $\beta$ may well already assign a definite outcome to an observation even though no particles correlated with the apparatus have yet crossed $B_-(\beta)$. There is a negligible but nonzero probability of recoherence until the first particles cross the boundary; only then is coherence irreversibly lost.
Strictly speaking, the above analysis should be expanded to allow for the different gravitational backreaction of different branches. The exact location of the boundary $B$ at the time $\beta$ depends on what happens at later times. (This suggests that ultimately, it may be more natural to construct the decoherent causal diamonds from the top down, starting in the future and ending in the past.) Here we will be interested mainly in the application to the eternally inflating multiverse,[^4] where we can sidestep this issue by choosing large enough timesteps. In de Sitter vacua, on timescales of order $t_\Lambda \sim |\Lambda|^{-1/2}$, the apparent horizon, which is locally defined, approaches the event horizon $B$ at an exponential rate. Mathematically, the difference between the two depends on the future evolution, but it is exponentially small and thus is irrelevant physically. Vacua with negative cosmological constant crunch on the timescale $t_\Lambda$ [@CDL] and so will not be resolved in detail at this level of coarse-graining.
We expect that the distinction between causal diamond bulk and its boundary is precise only to order $e^{-A(\beta)}$, where $A$ is the area of the boundary at the time $\beta$. Because of entropy bounds [@Tho93; @Sus95; @CEB1; @CEB2], no observables in any finite spacetime region can be defined to better accuracy than this. A related limitation applies to the objective notion of decoherence we have given, and it will be inherited by the reconstruction of global geometry we propose below. This will play an important role in Sec. \[sec-hat\], where we will argue that the hat regions with $\Lambda=0$ provide an exact counterpart to the approximate observables and approximate decoherence described here.
Global-local measure duality {#sec-dual}
----------------------------
In this section, we will review the duality that relates the causal diamond to a global time cutoff called [*light-cone time*]{}: both define the same probabilities when they are used as regulators for eternal inflation. As originally derived, the duality assumed the standard global picture as a starting point, a viewpoint we have criticized in Sec. \[sec-global\]. Here we will take the opposite viewpoint: the local picture is the starting point, and the duality suggests that a global spacetime can be reconstructed from the more fundamental structure of decoherent causal diamond histories. Indeed, light-cone time will play a central role in the construction proposed in Sec. \[sec-everett\].
By restricting to a causal diamond, we obtained a natural choice of environment: the degrees of freedom that exit from the diamond. Tracing over this environment leads to a branching tree of objective, observer-independent decoherent histories—precisely the kind of notion that was lacking in the global description. In the causal diamond, bubbles of different vacua really do nucleate at specific times and places. They decohere when the bubble wall leaves the diamond.
Consider a large landscape of vacua. Starting, say, in a vacuum with very large cosmological constant, a typical diamond contains a sequence of bubble nucleations (perhaps hundreds in some toy models [@BP; @BouYan07]), which ends in a vacuum with negative cosmological constant (and thus a crunch), or with vanishing cosmological constant (a supersymmetric open universe, or “hat”). Different paths through the landscape are followed with probabilities determined by branching ratios. Some of these paths will pass through long-lived vacua with anomalously small cosmological constant, such as ours.
The causal diamond has already received some attention in the context of the multiverse. It was proposed [@Bou06] as a probability measure: a method for cutting off infinities and obtaining well-defined amplitudes. Phenomenologically, the causal diamond measure is among the most successful proposals extant [@BouHar07; @BouLei09; @BouHal09; @BouHar10; @BouFre10d]. From the viewpoint of economy, it is attractive since it merely exploits a restriction that was already imposed on us by black hole complementarity and uses it to solve another problem. And conceptually, our contrasting analyses of decoherence in the global and causal diamond viewpoints suggest that the causal diamond is the more fundamental of the two.
This argument is independent of black hole complementarity, though both point at the same conclusion. It is also independent of the context of eternal inflation. However, if we assume that the universe is eternally inflating, then it may be possible to merge all possible causal diamond histories into a single global geometry.
If we are content to take the standard global picture as a starting point, then it is straightforward to deconstruct it into overlapping causal diamonds or patches[^5] [@BouYan09; @Nom11] (see Fig. \[fig-magic\], taken from Ref. [@BouYan09]). Indeed, a stronger statement is possible: as far as any prediction goes, the causal diamond viewpoint is indistinguishable from a particular time cutoff on the eternally inflating global spacetime. An exact duality [@BouYan09] dictates that relative probabilities computed from the causal diamond agree exactly with the probabilities computed from the light-cone time cutoff.[^6] The duality picks out particular initial conditions for the causal diamond: it holds only if one starts in the “dominant vacuum”, which is the de Sitter vacuum with the longest lifetime.
The light-cone time of an event $Q$ is defined [@Bou09] in terms of the volume $\epsilon(Q)$ of the future light-cone of $Q$ on the future conformal boundary of the global spacetime; see Fig. \[fig-magic\]: $$t(Q)\equiv -\frac{1}{3} \log \epsilon(Q)~.$$ The volume $\epsilon$, in turn, is defined as the proper volume occupied by those geodesics orthogonal to an initial hypersurface $\Sigma_0$ that eventually enter the future of $Q$. (For an alternative definition directly in terms of an intrinsic boundary metric, see Ref. [@BouFre10b].) We emphasize again that in these definitions, the standard global picture is taken for granted; we disregard for now the objections of Sec. \[sec-global\].
The light-cone time of an event tells us the factor by which that event is overcounted in the overlapping ensemble of diamonds. This follows from the simple fact that the geodesics whose causal diamonds include $Q$ are precisely the ones that enter the causal future of $Q$. Consider a discretization of the family of geodesics orthogonal to $\Sigma_0$ into a set of geodesics at constant, finite density, as shown in Fig. \[fig-magic\]. The definition of light-cone time ensures that the number of diamonds that contain a given event $Q$ is proportional to $\epsilon=\exp[-3t(Q)]$. Now we take the limit as the density of geodesics on $\Sigma_0$ tends to infinity. In this limit, the entire global spacetime becomes covered by the causal diamonds spanned by the geodesics. The relative overcounting of events at two different light-cone times is still given by a factor $\exp(-3\Delta t)$. (To show that this implies the exact equivalence of the causal diamond and the light-cone time cutoff, one must also demonstrate that the rate at which events of any type $I$ occur depends only on $t$. This is indeed the case if $\Sigma_0$ is chosen sufficiently late, i.e., if initial conditions on the patches conform to the global attractor regime. Since we are not primarily interested in the measure problem here, we will not review this aspect of the proof; see Ref. [@BouYan09] for details.)
Given the above deconstruction of the global spacetime, it is tempting to identify the eternally inflating multiverse with the many worlds of quantum mechanics, if the latter could somehow be related to branches in the wavefunction of the causal diamonds. Without decoherence, however, there is neither a consistent global picture (as shown in Sec. \[sec-global\]) nor a sensible way of picking out a preferred basis that would associate “many worlds” to the pure quantum state of a causal diamond (Sec. \[sec-patch\]).[^7]
We have already shown that decoherence at the causal diamond boundary leads to distinct causal diamond histories or “worlds”. To recover the global multiverse and demonstrate that it can be viewed as a representation of the many causal diamond worlds, one must show that it is possible to join together these histories consistently into a single spacetime. This task is nontrivial. In the following section we offer a solution in a very simple setting; we leave generalizations to more realistic models to future work. Our construction will [*not*]{} be precisely the inverse of the deconstruction shown in Fig. \[fig-magic\]; for example, there will be no overlaps. However, it is closely related; in particular, we will reconstruct the global spacetime in discrete steps of light-cone time.
Constructing a global multiverse from many causal diamond worlds {#sec-everett}
----------------------------------------------------------------
In this section, we will sketch a construction in $1+1$ dimensions by which a global picture emerges in constant increments of light-cone time (Fig. \[fig-patches\]). For simplicity, we will work on a fixed de Sitter background metric, $$\frac{ds^2}{\ell^2} = -(\log 2~dt)^2+2^{2t}~dx^2~,
\label{eq-11m}$$ where $\ell$ is an arbitrary length scale. A future light-cone from an event at the time $t$ grows to comoving size $\epsilon = 2^{1-t}$, so $t$ represents light-cone time up to a trivial shift and rescaling: $t=1-\log_2 \epsilon$. We take the spatial coordinate to be noncompact, though our construction would not change significantly if $x$ were compactified by identifying $x\cong x+n$ for some integer $n$.
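The relation $\epsilon = 2^{1-t}$ can be checked by integrating the null condition $dx/dt = \pm \log 2 \cdot 2^{-t}$ directly. The following is only a numerical sketch; the cutoff `t_max` and the step count are arbitrary choices:

```python
import math

def comoving_lightcone_size(t0, t_max=60.0, steps=200_000):
    """Comoving size of the future light-cone from time t0 in the metric
    ds^2/l^2 = -(log 2)^2 dt^2 + 2^(2t) dx^2, where null rays obey
    dx/dt = +/- log(2) * 2^(-t).  Integrated by the midpoint rule."""
    dt = (t_max - t0) / steps
    x, t = 0.0, t0
    for _ in range(steps):
        x += math.log(2) * 2.0 ** (-(t + dt / 2)) * dt
        t += dt
    return 2.0 * x  # both legs of the cone

# compare with the closed form epsilon = 2^(1 - t0), i.e. t0 = 1 - log2(eps)
for t0 in (0.0, 1.0, 2.5):
    assert abs(comoving_lightcone_size(t0) - 2.0 ** (1 - t0)) < 1e-6
```

The truncation at `t_max` is harmless because the integrand falls off exponentially; the remainder beyond $t=60$ is of order $2^{-60}$.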
The fixed background assumption allows us to separate the geometric problem—building the above global metric from individual causal diamonds—from the problem of matching matter configurations across the seams. We will be interested in constructing a global spacetime of infinite volume in the future but not in the past. Therefore, we take the geodesic generating each diamond to be maximally extended towards the future, but finite towards the past. This means that the lower tips do [*not*]{} lie on the past conformal boundary of de Sitter space. Note that all such semi-infinite diamonds are isometric.
The geometric problem is particularly simple in 1+1 dimensions because it is possible to tile the global geometry precisely with causal diamonds, with no overlaps. We will demonstrate this by listing explicitly the locations of the causal diamond tiles. All diamonds in our construction are generated by geodesics that are comoving in the metric of Eq. (\[eq-11m\]): if the origin of the diamond is at $(t,x)$, then its tip is at $(\infty,x)$. Hence we will label diamonds by the location of their origins, shown as green squares in Fig. \[fig-patches\]. They are located at the events $$\begin{aligned}
(x,t) & = & (m,0) \\
(x,t) & = & \left(\frac{2m+1}{2^n},n \right)~,\end{aligned}$$ where $n$ runs over the positive integers and $m$ runs over all integers. From Fig. \[fig-patches\], it is easy to see that these diamonds tile the global spacetime, covering every point with $t\geq 1$ precisely once, except on the edges. We now turn to the task of matching the quantum states of matter at these seams.
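That the diamonds listed above tile the spacetime for $t \geq 1$ can also be confirmed numerically. The sketch below samples random events and counts how many diamonds contain each one; each semi-infinite diamond with origin $(x_0,t_0)$ is the set $|x-x_0| < \min(2^{-t_0}-2^{-t},\, 2^{-t})$. The window sizes and sampling ranges are arbitrary choices:

```python
import random

def covers(x0, t0, x, t):
    """Does the semi-infinite diamond generated by the comoving geodesic at
    (x0, t0) contain the event (x, t)?  The diamond is the intersection of
    the future of (x0, t0) with the past of its tip at (x0, t -> infinity)."""
    if t <= t0:
        return False
    half_width = min(2.0 ** (-t0) - 2.0 ** (-t), 2.0 ** (-t))
    return abs(x - x0) < half_width  # strict: seams have measure zero

def coverage_count(x, t, n_max=8, m_window=4):
    count = 0
    # generation 0: origins at (m, 0), m integer
    for m in range(int(x) - m_window, int(x) + m_window + 1):
        count += covers(float(m), 0.0, x, t)
    # generation n >= 1: origins at ((2m+1)/2^n, n)
    for n in range(1, n_max + 1):
        mc = int(x * 2.0 ** (n - 1))
        for m in range(mc - m_window, mc + m_window + 1):
            count += covers((2 * m + 1) / 2.0 ** n, float(n), x, t)
    return count

random.seed(0)
for _ in range(1000):
    x, t = random.uniform(-3, 3), random.uniform(1.0, 5.0)
    assert coverage_count(x, t) == 1  # every event covered exactly once
```

Up to the measure-zero seams between diamonds, each sampled event is covered by precisely one diamond, as claimed.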
We will assume that there exists a metastable vacuum $\cal{F}$ which decays with a rate $\Gamma$ per unit Hubble time and Hubble volume to a terminal vacuum $\cal{T}$ which does not decay. We will work to order $\Gamma$, neglecting collisions between bubbles of the $\cal{T}$ vacuum. A number of further simplifications are made below.
Our construction will be iterative. We begin by considering the causal diamond with origin at $(x,t)=(0,0)$. In the spirit of Sec. \[sec-patch\], we follow the generating geodesic (green vertical line) for a time $\Delta t=1$ (corresponding to a proper time of order $t_\Lambda $) to the event $t=1$, $x=0$, marked by a red circle in Fig. \[fig-patches\]. The future light-cone of this event divides the boundary of the $(0,0)$ diamond into two portions, $B_\pm$, as in Fig. \[fig-beta\]. $B_-$ itself consists of two disconnected portions, which we label $A$ and $E$. Together with segments $C$ and $D$ of the future light-cone, $ACDE$ forms a Cauchy surface of the diamond. The evolution from the bottom boundary $FG$ to the surface $ACDE$ is unitary. For definiteness, we will assume that the state on $FG$ is the false vacuum, though other initial conditions can easily be considered.
The pure state on $ACDE$ can be thought of as a superposition of the false vacuum with bubbles of true vacuum that nucleate somewhere in the region delimited by $ACDEFG$. To keep things simple, we imagine that decays can occur only at three points, each with probability $\bar\Gamma\sim \Gamma/3$: at the origin, $(0,0)$; at the midpoint of the edge $F$, $(-\frac{1}{4},\frac{1}{2})$; and at the midpoint of $G$, $(\frac{1}{4},\frac{1}{2})$. We assume, moreover, that the true vacuum occupies the entire future light-cone of a nucleation point. In this approximation the pure state on $ACDE$ takes the form $$\begin{aligned}
(1-3\bar\Gamma)^{1/2} & |{\cal F}\rangle_A |{\cal F}\rangle_C |{\cal F}\rangle_D |{\cal F}\rangle_E & + \bar \Gamma^{1/2} |{\cal T}\rangle_A |{\cal T}\rangle_C |{\cal F}\rangle_D |{\cal F}\rangle_E \nonumber \\
+~ \bar\Gamma^{1/2} & |{\cal T}\rangle_A |{\cal T}\rangle_C |{\cal T}\rangle_D |{\cal T}\rangle_E & +\bar \Gamma^{1/2} |{\cal F}\rangle_A |{\cal F}\rangle_C |{\cal T}\rangle_D |{\cal T}\rangle_E~,
\label{eq-ACDE}\end{aligned}$$ where the last three terms correspond to the possible nucleation points, from left to right.
From the point of view of an observer in the $(0,0)$ diamond, the Hilbert space factor $AE$ should be traced out. This results in a density matrix on the slice $CD$, which can be regarded as an initial condition for the smaller diamond beginning at the point $(x,t)=(0,1)$: $$\begin{aligned}
\rho(0,1) & = (1-3\bar\Gamma) & |{\cal F}\rangle_C |{\cal F}\rangle_D \mbox{~}_D\langle {\cal F}| \mbox{\,}_C \langle {\cal F}| \nonumber \\
& +~\bar\Gamma & |{\cal F}\rangle_C |{\cal T}\rangle_D \mbox{~}_D\langle {\cal T}| \mbox{\,}_C \langle {\cal F}| \nonumber \\
& +~\bar\Gamma & |{\cal T}\rangle_C |{\cal F}\rangle_D \mbox{~}_D\langle {\cal F}| \mbox{\,}_C \langle {\cal T}| \nonumber \\
& +~\bar\Gamma & |{\cal T}\rangle_C |{\cal T}\rangle_D \mbox{~}_D\langle {\cal T}| \mbox{\,}_C \langle {\cal T}|
\label{eq-CD}\end{aligned}$$ The density matrix can be regarded as an ensemble of four pure states: $|{\cal F}\rangle_C |{\cal F}\rangle_D$ with probability $(1-3\bar\Gamma)$; and $|{\cal F}\rangle_C |{\cal T}\rangle_D, |{\cal T}\rangle_C |{\cal F}\rangle_D, |{\cal T}\rangle_C |{\cal T}\rangle_D $, each with probability $\bar\Gamma$.
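The step from Eq. (\[eq-ACDE\]) to Eq. (\[eq-CD\]) is a partial trace over the factors $A$ and $E$, which can be verified directly by modeling each edge as a two-state system with basis $\{|{\cal F}\rangle, |{\cal T}\rangle\}$. The numerical value of $\bar\Gamma$ below is an arbitrary toy choice:

```python
import numpy as np

gbar = 0.05  # toy nucleation probability per site (illustrative value)
F = np.array([1.0, 0.0])  # false vacuum
T = np.array([0.0, 1.0])  # true vacuum

def prod(*factors):
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

# pure state on A x C x D x E, Eq. (eq-ACDE)
psi = (np.sqrt(1 - 3 * gbar) * prod(F, F, F, F)
       + np.sqrt(gbar) * prod(T, T, F, F)
       + np.sqrt(gbar) * prod(T, T, T, T)
       + np.sqrt(gbar) * prod(F, F, T, T))

# trace over the factors A and E, which have left the diamond
rho = np.outer(psi, psi.conj()).reshape([2] * 8)  # (a,c,d,e,a',c',d',e')
rho_CD = np.einsum('acdeaCDe->cdCD', rho).reshape(4, 4)

# diagonal in the basis {FF, FT, TF, TT} on C x D, as in Eq. (eq-CD)
expected = np.diag([1 - 3 * gbar, gbar, gbar, gbar])
assert np.allclose(rho_CD, expected)
```

All off-diagonal terms vanish because each pair of branches differs on $A$ or $E$; the surviving diagonal reproduces the four-member ensemble described above.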
The same construction can be applied to every “zeroth generation” causal diamond: the diamonds with origin at $(m,0)$, with $m$ integer. Since their number is infinite, we can realize the ensemble of Eq. (\[eq-CD\]) precisely, in the emerging global spacetime, by assigning appropriate initial conditions to the “first generation sub-diamonds” $(m,1)$. The state $|{\cal F}\rangle_C |{\cal F}\rangle_D$ is assigned to a fraction $1-3\bar\Gamma$ of the $(m,1)$ diamonds; and each of the states $|{\cal F}\rangle_C |{\cal T}\rangle_D, |{\cal T}\rangle_C |{\cal F}\rangle_D, |{\cal T}\rangle_C |{\cal T}\rangle_D $ is assigned to a fraction $\bar\Gamma$ of $(m,1)$ diamonds.[^8]
So far, we have merely carried out the process described in Fig. \[fig-beta\] for one time step in each of the $(m,0)$ diamonds, resulting in initial conditions for the subdiamonds that start at the red circles at $(m,1)$. In order to obtain a global description, we must also “fill in the gaps” between the $(m,1)$ diamonds by specifying initial conditions for the “first generation new diamonds” that start at the green squares at $(m+\frac{1}{2},1)$. But their initial conditions are completely determined by the entangled pure state on $ACDE$, Eq. (\[eq-ACDE\]), and the identical pure states on the analogous Cauchy surfaces of the other $(m,0)$ diamonds. Because of entanglement, the state on $C$ is the same as on $A$. If $C$ is in the true vacuum, then so is $A$; and if $C$ is in the false vacuum, then so is $A$. The edges $D$ and $E$ are similarly entangled. Thus, the assignment of definite initial conditions to the $(m,1)$ diamonds completely determines the initial conditions on the $(m+\frac{1}{2},1)$ diamonds. We have thus generated initial conditions for all first-generation diamonds (those with $n=1$). Now we simply repeat the entire procedure to obtain initial conditions for the second generation ($n=2$), and so on.[^9]
This will generate a fractal distribution of true vacuum bubbles, of the type that is usually associated with a global description (Fig. \[fig-patches\]). The manner in which this picture arises is totally distinct from a naive unitary evolution of global time slices, in which a full superposition of false and true vacuum would persist (with time-dependent amplitudes). The standard global picture can only be obtained by exploiting the decoherence of causal diamonds while proliferating their number. The multiverse is a patchwork of infinitely many branching histories, the many worlds of causal diamonds.
The construction we have given is only a toy model. The causal diamonds in higher dimensions do not fit together neatly to fill the spacetime as they do in $1+1$ dimensions, so overlaps will have to be taken into account.[^10] Moreover, we have not considered the backreaction on the gravitational field. Finally, in general the future boundary is not everywhere spacelike but contains hats corresponding to supersymmetric vacua with $\Lambda=0$. Our view will be that the hat regions play a very special role that is complementary, both in the colloquial and in the technical sense, to the construction we have given here. Any construction involving finite causal diamonds is necessarily approximate. We now turn to the potentially precise world of the Census Taker, a fictitious observer living in a hat region.
The many worlds of the census taker {#sec-hat}
===================================
Decoherence and recoherence {#sec-reverse}
---------------------------
In Sec. \[sec-future\], we noted that decoherence is subjective to the extent that the choice of environment is based merely on the practical considerations of an actual observer. We then argued in Sec. \[sec-patch\] that the boundary of a causal patch can be regarded as a preferred environment that leads to a more objective form of decoherence. However, there is another element of subjectivity in decoherence which we have not yet addressed: decoherence is reversible. Whether, and how soon, coherence is restored depends on the dynamical evolution governing the system and environment.
Consider an optical interference experiment (shown in Fig. \[fig-mirrors\]) in which a light beam reflects off two mirrors, $m_1$ and $m_2$, and then illuminates a screen $S$. There is no sense in which one can say that a given photon takes one route or the other.
On the other hand, if an observer or a simple detector $D$ interacts with the photon and records which mirror the photon bounced off, the interference is destroyed. One of the two possibilities is made real by the interaction, and thereafter the observer may ignore the branch of the wave function which does not agree with the observation. Moreover, if a second observer describes the whole experiment (including the first observer) as a single quantum system, the second observer’s observation will entangle it in a consistent way.
Now let’s consider an unusual version of the experiment in which the upper arm of the interferometer is replaced by a mirror-detector $m_1D$, which detects the photon and deflects it toward a third mirror, $m_3$. From $m_3$ the photon is reflected back to the detector and then to the screen. The detector is constructed so that if a single photon passes through it, it flips to a new state (it detects), but the next time a photon passes through, it flips back to the original state. The detector is well-insulated from any environment. The path length of the lower arm of the interferometer is correspondingly increased, but without a detector.
Since a photon that goes through the detector goes through it twice, the detector is left in its original state of no-detection at the end of the experiment. It is obvious that in this case the interference is restored. But there is something unusual going on. During an intermediate time interval the photon was entangled with the detector. The event of passing through the upper arm has been recorded and the photon’s wave function has collapsed to an incoherent superposition. But eventually the photon and the detector are disentangled. What happened was made to unhappen.
This illustrates that in order to give objective meaning to an event such as the detection of a photon, it is not enough that the system becomes entangled with the environment: the system must become [*irreversibly*]{} entangled with the environment, or more precisely, with [*some*]{} environment.
Failure to irreversibly decohere: A limitation of finite systems {#sec-limitation}
----------------------------------------------------------------
The above example may seem contrived, since it relied on the perfect isolation of the detector from any larger environment, and on the mirror $m_3$ that ensured that the detected photon cannot escape. It would be impractical to arrange in a similar manner for the recoherence of macroscopic superpositions, since an enormous number of particles would have to be carefully controlled by mirrors. However, if we are willing to wait, then recoherence is actually inevitable in any system that is [*dynamically closed*]{}, i.e., a system with finite maximum entropy at all times.
For example consider a world inside a finite box, with finite energy and perfectly reflecting walls. If the box is big enough and is initially out of thermal equilibrium, then during the return to equilibrium structures can form, including galaxies, planets, and observers. Entanglements can form between subsystems, but it is not hard to see that they cannot be irreversible. Such closed systems with finite energy have a finite maximum entropy, and for that reason the state vector will undergo quantum recurrences. Whatever state the system finds itself in, after a suitably long time it will return to the same state or to an arbitrarily close state. The recurrence time is bounded by $\exp(N)=\exp(e^{S_{\rm max}})$, where $N$ is the dimension of the Hilbert space that is explored and $S_{\rm max}$ is the maximum entropy.
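The inevitability of recurrences in a finite unitary system is easy to exhibit in a toy model. The sketch below uses four energy levels chosen commensurate (integers, an artificial choice that makes the recurrence exact at $t = 2\pi$; for a generic incommensurate spectrum the state still returns arbitrarily close to itself, only after much longer times):

```python
import numpy as np

# Toy closed system: four levels with integer energies (in some units),
# so the evolution exp(-iHt) is exactly periodic with period 2*pi.
E = np.array([0.0, 1.0, 3.0, 7.0])
psi0 = np.full(4, 0.5)  # equal-weight normalized initial state

def overlap(t):
    """|<psi(0)|psi(t)>| for the diagonal Hamiltonian above."""
    return abs(np.vdot(psi0, np.exp(-1j * E * t) * psi0))

# the state dephases at a generic intermediate time ...
assert overlap(1.0) < 0.9
# ... but recurs exactly after one period
assert abs(overlap(2 * np.pi) - 1.0) < 1e-12
```

For a macroscopic system the same logic applies, but the recurrence time grows double-exponentially with the entropy, as stated above.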
This has an important implication for the causal diamonds of Sec. \[sec-patch\]. We argued that the diamond bulk decoheres when degrees of freedom cross the boundary $B$ of the diamond. But consider a potential that just contains a single, stable de Sitter vacuum with cosmological constant $\Lambda$. Then the maximum area of the boundary of the diamond is the horizon area of empty de Sitter space, and the maximum entropy is $S_{\rm max}=A_{\rm max}/4=3\pi/\Lambda$. This is the maximum [*total*]{} entropy [@Bou00a], which is the sum of the matter entropy in the bulk and the Bekenstein-Hawking entropy of the boundary. Assuming unitarity and ergodicity [@DysKle02], this system is dynamically closed, and periodic recurrences are inevitable. (See Refs. [@BanFis01a; @BanFis02; @BanFis04] for a discussion of precision in de Sitter space.)
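For orientation, an illustrative estimate (taking $\Lambda \sim 10^{-122}$ in Planck units, roughly the observed order of magnitude; the number is used here purely for illustration):

```python
import math

Lam = 1.0e-122                   # cosmological constant in Planck units (illustrative)
S_max = 3 * math.pi / Lam        # maximum total entropy, S_max = 3*pi/Lambda
log10_dim = S_max / math.log(10) # Hilbert-space dimension N = exp(S_max), in log10

# S_max ~ 10^123; even the *logarithm* of the recurrence time exp(N)
# is an astronomically large number.
assert S_max > 9e122
assert log10_dim > 4e122
```

This is the origin of the figure $10^{123}$ quoted by Simplicio in the dialogue below (Sec. \[sec-sagredo\]).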
Next, consider a landscape that contains vacua with positive and negative cosmological constant. We assume, for now, that there are no vacua with $\Lambda=0$. Then the maximum area $A_{\rm max}$ of the causal diamond boundary $B$ is given by the greater of $\Lambda_+^{-1}$ and $\Lambda_-^{-2}$, where $\Lambda_+$ is the smallest positive value of $\Lambda$ among all landscape vacua, and $\Lambda_-$ is the largest negative value [@BouFre10a]. $B$ is a null hypersurface with two-dimensional spatial cross-sections, and it can be thought of as the union of two light-sheets that emanate from the cross-section of maximum area. Therefore, the entropy that passes through $B$ is bounded by $A_{\rm max}/2$ and hence is finite.
What does finite entropy imply for the decoherence mechanism of Sec. \[sec-patch\]? If the causal diamond were a unitary, ergodic quantum system, then it would again follow that recurrences, including the restoration of coherence, are inevitable. This is plausible for causal diamonds that remain forever in inflating vacua, but such diamonds form a set of measure zero. Generically, causal diamonds will end up in terminal vacua with negative cosmological constant, hitting a big crunch singularity after finite time. In this case it is not clear that they admit a unitary quantum mechanical description over arbitrarily long timescales, so recurrences are not mandatory. However, since ergodicity cannot be assumed in this case, it seems plausible to us that there exists a tiny non-zero probability for recurrences. We expect that in each metastable de Sitter vacuum, the probability for re-coherence is given by the ratio of the decay timescale to the recurrence timescale. Typically, this ratio is super-exponentially small [@BouFre06b], but it does not vanish. In this sense, the objective decoherence of causal diamond histories described in Sec. \[sec-patch\] is not completely sharp.
Sagredo’s postulates {#sec-sagredo}
--------------------
[*The next morning, Simplicio and Salviati visit Sagredo to continue their discussion.*]{}
[Simplicio:]{} I have been pondering the idea we came up with yesterday, and I am convinced that we have completely solved the problem. Causal diamonds have definite histories, obtained by tracing over their boundary, which we treat as an observer-independent environment. This gets rid of superpositions of different macroscopic objects, such as bubbles of different vacua, without the need to appeal to actual observers inside the diamond. Each causal diamond history corresponds to a sequence of things that “happen”. And the global picture of the multiverse is just a representation of all the possible diamond histories in a single geometry: the many worlds of causal diamonds!
[Sagredo:]{} I wish I could share in your satisfaction, but I am uncomfortable. Let me describe my concerns, and perhaps you will be able to address them.
[Salviati:]{} I, too, have been unable to shake off a sense that this is not the whole story—that we should do better. I would be most interested to hear your thoughts, Sagredo.
[Sagredo:]{} It’s true, as Simplicio says, that things “happen” when we trace over degrees of freedom that leave the causal diamond. Pure states become a density matrix, or to put it in Bohr’s language, the wavefunction describing the interior of the diamond collapses. But how do we know that the coherence will not be restored? What prevents things from “unhappening” later on?
[Simplicio:]{} According to Bohr, the irreversible character of observation is due to the large classical nature of the apparatus.
[Salviati:]{} And decoherence allows us to understand this rather vague statement more precisely: the apparatus becomes entangled with an enormous environment, which is infinite for all practical purposes.
[Sagredo:]{} But even a large apparatus is a quantum system, and in principle, the entanglement can be undone. The irreversibility of decoherence is often conflated with the irreversibility of thermodynamics. A large system of many degrees of freedom is very unlikely to find its way back to a re-cohered state. However, thermodynamic irreversibility is an idealization that is only true for infinite systems. The irreversibility of decoherence, too, is an approximation that becomes exact only for infinite systems.
[Simplicio:]{} But think how exquisite the approximation is! In a causal diamond containing our own history, the boundary area becomes as large as billions of light years, squared, or $10^{123}$ in fundamental units. As you know, I have studied all the ancients. I learned that the maximum area along the boundary of a past light-cone provides a bound on the size $N$ of the Hilbert space describing everything within the light-cone: $N\sim \exp(10^{123})$ [@Bou00a]. And elsewhere I found that re-coherence occurs on a timescale $N^N\sim \exp[\exp(10^{123})]$ [@Zur03]. This is much longer than the time it will take for our vacuum to decay [@KKLT; @FreLip08] and the world to end in a crunch. So why worry about it?
[Sagredo:]{} It’s true that re-coherence is overwhelmingly unlikely in a causal diamond as large as ours. But nothing you said convinces me that the probability for things to “unhappen” is exactly zero.
[Salviati:]{} To me, it is very important to be able to say that some things really do happen, irreversibly and without any uncertainty. If this were not true, then how could we ever make sense of predictions of a physical theory? If we cannot be sure that something happened, how can we ever test the prediction that it should or should not happen?
[Sagredo:]{} That’s it—this is what bothered me. The notion that things really happen should be a fundamental principle, and the implementation of fundamental principles in a physical theory should not rely solely on approximations. So let me define this more carefully in terms of a definition and a postulate.
#### Definition I
Consider an instance of decoherence (or “collapse of a wave function”) in a Hilbert space ${\cal H}_S$, which occurs as a result of entanglement with another Hilbert space ${\cal H}_E$. The event will be said to [*happen*]{} if the entanglement between ${\cal H}_E$ and ${\cal H}_S$ is irreversible; and the system $S$ can then be treated as if it were in one of the pure states that constitute the basis that diagonalizes the density matrix obtained by tracing over the Hilbert space ${\cal H}_E$.
#### Postulate I
Things [*happen*]{}.\
In other words, there exist some entanglements in Nature that will not be reversed with any finite probability.
[Simplicio:]{} Let me see if I can find an example that satisfies your postulate. Suppose that an apparatus is in continuous interaction with some environment. Even an interaction with a single environmental photon can record the event. If the photon disappears to infinity so that no mirror can ever reflect it back, then the event has [*happened*]{}.
[Sagredo:]{} Your example makes it sound like irreversible decoherence is easy, but I don’t think this is true. For example, in the finite causal diamonds we considered, there is no “infinity” and so nothing can get to it.
[Salviati:]{} Sagredo’s postulate is in fact surprisingly strong! Any dynamically closed system (a system with finite entropy) cannot satisfy the postulate, because recurrences are inevitable. This is not a trivial point, since it immediately rules out certain cosmologies. Stable de Sitter space is a closed system. More generally, if we consider a landscape with only positive energy local minima, the recurrence time is controlled by the minimum with the smallest cosmological constant. So the recurrence time is finite, and nothing [*happens*]{}. Anti de Sitter space is no better. As is well known, global AdS is a box with reflecting walls. At any finite energy it has finite maximum entropy and also gives rise to recurrences.
[Simplicio:]{} I can think of a cosmology that satisfies Postulate I, along the lines of my previous example. I will call it “S-matrix cosmology”. It takes place in an asymptotically flat spacetime and can be described as a scattering event. The initial state is a large number of incoming stable particles. The particles could be atoms of hydrogen, oxygen, carbon, etc. The atoms come together and form a gas cloud that contracts due to gravity. Eventually it forms a solar system that may have observers doing experiments. Photons scatter or are emitted from the apparatuses and become irreversibly entangled as they propagate to infinity in the final state. The central star collapses to a black hole and evaporates into Hawking radiation. Eventually everything becomes outgoing stable particles.
[Sagredo:]{} It is true that there are some things that [*happen*]{} in your S-matrix cosmology. But I am not satisfied. I think there is a larger point that Salviati made: we would like a cosmology in which it is possible to give a precise operational meaning to quantum mechanical predictions. The notion that things unambiguously [*happen*]{} is necessary for this, but I now realize that it is not sufficient.
[Salviati:]{} Now, I’m afraid, you have us puzzled. What is wrong with the S-matrix cosmology?
[Sagredo:]{} Quantum mechanics makes probabilistic predictions; when something [*happens*]{}, each possible outcome has a probability given by the corresponding diagonal entry in the density matrix. But how do we verify that this outcome really happens with the predicted probability?
[Salviati:]{} Probabilities are frequencies; they can be measured only by performing an experiment many times. For example, to test the assertion that the probability for “heads” is one half, you flip a coin a large number of times and see if, within the margin of error, it comes up heads half of the time. If for some reason it were only possible to flip a coin once there would be no way to test the assertion reliably. And to be completely certain of the probability distribution, it would be necessary to flip the coin infinitely many times.
[Simplicio:]{} Well, my S-matrix cosmology can be quite large. For example, it might contain a planet on which someone flips a quantum coin a trillion times. Photons record this information and travel to infinity. A trillion outcomes [*happen*]{}, and you can do your statistics. Are you happy now?
[Sagredo:]{} A trillion is a good enough approximation to infinity for all practical purposes. But as I said before, the operational testability of quantum-mechanical predictions should be a fundamental principle. And the implementation of a fundamental principle should not depend on approximations.
[Salviati:]{} I agree. No matter how large and long-lived Simplicio makes his S-matrix cosmology, there will only be a finite number of coin flips. And the cosmology contains many “larger” experiments that are repeated even fewer times, like the explosion of stars. So the situation is not much better than in real observational cosmology. For example, inflation tells us that the quadrupole anisotropy of the CMB has a gaussian probability distribution with a variance of a few times $10^{-5}$. But it can only be measured once, so we are very far from being able to confirm this prediction with complete precision.
[Sagredo:]{} Let me try to state this more precisely. Quantum mechanics makes probabilistic predictions, which have the following operational definition:
#### Definition II
Let $P(i)$ be the theoretical probability that outcome $i$ [*happens*]{} (i.e., $i$ arises as a result of irreversible decoherence), given by a diagonal entry in the density matrix. Let $N$ be the number of times the corresponding experiment is repeated, and let $N_i$ be the number of times the outcome $i$ [*happens*]{}. The sharp prediction of quantum mechanics is that $$P(i)=\lim_{N\to \infty}{N_i \over N}~.
\label{eq-nin}$$\
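Though the limit itself can never be reached, the sense in which Eq. (\[eq-nin\]) is operational is easy to illustrate numerically. The following toy simulation (the function name and the probability $0.3$ are invented for illustration) checks that the empirical frequency $N_i/N$ drifts toward $P(i)$ as $N$ grows:

```python
import random

def empirical_frequency(p, n, seed=0):
    """Flip a biased quantum 'coin' n times and return N_i / N,
    the empirical frequency of outcome i (theoretical probability p)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.random() < p)
    return hits / n

# The deviation |N_i/N - P(i)| shrinks roughly like 1/sqrt(N):
for n in (100, 10_000, 1_000_000):
    print(n, abs(empirical_frequency(0.3, n) - 0.3))
```

Only in the strict $N\to\infty$ limit does the frequency pin down $P(i)$ exactly, which is the point at issue in the dialogue.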
[Salviati:]{} Do you see the problem now, Simplicio? What bothers us about your S-matrix cosmology is that $N$ is finite for any experiment, so it is impossible to verify quantum-mechanical predictions with arbitrary precision.
[Simplicio:]{} Why not proliferate the S-matrix cosmology? Instead of just one asymptotically flat spacetime, I will give you infinitely many replicas with identical in-states. They are completely disconnected from one another; their only purpose is to allow you to take $N\to \infty$.
[Salviati:]{} Before we get to the problems, let me say that there is one thing I like about this proposal: it provides a well-defined setting in which the many worlds interpretation of quantum mechanics appears naturally. For example, suppose we measure the $z$-component of the spin of an electron. I have sometimes heard it said that when decoherence occurs, the world splits into two equally real branches. But there are problems with taking this viewpoint literally. For example, one might conclude that the probabilities for the two branches are necessarily equal, when we know that in general they are not.
[Simplicio:]{} Yes, I have always found this confusing. So how does my proposal help?
[Salviati:]{} The point is that in your setup, there are an infinite number of worlds to start with, all with the same initial conditions. Each world within this collection does not split; the collection $S$ itself splits into two subsets. In a fraction $p_i$ of worlds, the outcome $i$ happens when a spin is measured. There is no reason to add another layer of replication.
[Simplicio:]{} Are you saying that you reject the many worlds interpretation?
[Salviati:]{} That depends on what you mean by it. Some say that the many worlds interpretation is a theory in which reality is the many-branched wavefunction itself. I dislike this idea, because quantum mechanics is about observables like position, momentum, or spin. The wavefunction is merely an auxiliary quantity that tells you how to compute the probability for an actual observation. The wavefunction itself is not a measurable thing. For example, the wavefunction $\psi(x)$ of a particle cannot be measured, so in particular one cannot measure that it has split. But suppose we had a system composed of an infinite number of particles, all prepared in an identical manner. Then the single particle wavefunction $\psi(x)$ becomes an observable for the larger system. For example, the single particle probability density $\psi^{\ast}(x)\psi(x)$ is a many-particle observable: $$\psi^{\ast}(x)\psi(x) = \lim_{N\to \infty} {1 \over N} \sum_{i=1}^N \delta(x_i -x)
\label{eq-multiop}$$
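A toy version of the operator in Eq. (\[eq-multiop\]) is easy to simulate (a sketch with invented names; the delta function is smeared over a small window): sampling the positions of many identically prepared particles recovers the single-particle density $\psi^{\ast}(x)\psi(x)$.

```python
import math
import random

def density_estimate(samples, x, width=0.1):
    """Smeared analogue of (1/N) * sum_i delta(x_i - x): the fraction
    of measured positions within `width` of x, per unit length."""
    near = sum(1 for s in samples if abs(s - x) < width)
    return near / (len(samples) * 2 * width)

rng = random.Random(1)
# N particles, each prepared so that |psi(x)|^2 is a unit Gaussian.
samples = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
# Compare the estimate at x = 0 with the exact density 1/sqrt(2*pi).
print(abs(density_estimate(samples, 0.0) - 1 / math.sqrt(2 * math.pi)))
```

No single sample reveals $\psi(x)$; only the ensemble of identically prepared systems makes it observable, which is the point of the construction.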
[Simplicio:]{} I see. Now if an identical measurement is performed on each particle, each single particle wavefunction splits, and this split wavefunction can be measured in the many-particle system.
[Salviati:]{} To make this explicit, we could make the individual systems more complicated by adding a detector $d$ for each particle. Each particle-detector system can be started in the product state $\psi_0(x) \chi_0(d)$. Allowing the particle to interact with the detector would create entanglement, i.e., a wavefunction of the form $\psi_1(x) \chi_1(d)+\psi_2(x) \chi_2(d)$. But the branched wave function cannot be measured any better than the original one. Now consider an unbounded number of particle-detector pairs, all starting in the same product state, and all winding up in entangled states. It is easy to construct operators analogous to Eq. (\[eq-multiop\]) in the product system that correspond to the single system’s wave function. So you see that in the improved S-matrix cosmology there is no reason to add another layer of many-worlds.
[Sagredo:]{} That’s all very nice, but Simplicio’s proposal does not help with making quantum mechanical predictions operationally well-defined. You talk about many disconnected worlds, so by definition it is impossible to collect the $N\to \infty$ results and compare their statistics to the theoretical prediction.
[Simplicio:]{} I see. By saying that quantum mechanical predictions should be operationally meaningful, you mean not only that infinitely many outcomes [*happen*]{}, but that they are all accessible to an observer in a single universe.
[Sagredo:]{} Yes, it seems to me that this requirement follows from Salviati’s fundamental principle that predictions should have precise operational meaning. Let me enshrine this in another postulate:
#### Postulate II
Observables are [*observable*]{}.\
By observables, I mean any Hermitian operators whose probability distribution is precisely predicted by a quantum mechanical theory from given initial conditions. And by [*observable*]{}, I mean that the world is big enough that the observable can be measured infinitely many times by irreversible entanglement.
[Salviati:]{} Like your first postulate, this one seems quite restrictive. It obviously rules out the infinite set of S-matrix cosmologies, since products of field operators in different replicas are predicted but cannot be measured by any observers in any one cosmology. And it gives us additional reasons to reject Simplicio’s earlier suggestion: the single S-matrix cosmology contains observables such as the total number of outgoing particles, which cannot even [*happen*]{}, since there is no environment to measure them.
[Simplicio:]{} Well, we were quite happy yesterday with the progress we had made on making decoherence objective in the causal diamond. But these postulates of yours clearly cannot be rigorously satisfied in any causal diamond, no matter how large. Perhaps it’s time to compromise? Are you really sure that fundamental principles have to have a completely sharp implementation in physical theory? What if your postulates simply cannot be satisfied?
[Salviati:]{} Yesterday, we did not pay much attention to causal diamonds that end in regions with vanishing cosmological constant. Perhaps this was a mistake? In such regions, the boundary area and entropy are both infinite. I may not have thought about it hard enough, but I see no reason why our postulates could not be satisfied in these “hat” regions.
[Sagredo:]{} Even if they can, there would still be the question of whether this helps make sense of the finite causal diamonds. I care about this, because I think we live in one of them.
[Salviati:]{} I understand that, but let us begin by asking whether our postulates might be satisfied in the hat.
Irreversible decoherence and infinite repetition in the hat {#sec-hats}
-----------------------------------------------------------
In the eternally inflating multiverse there are three types of causal diamonds. The first constitute a set of measure zero and remain forever in inflating regions. The entropy bound for such diamonds is the same as for the lowest-$\Lambda$ de Sitter space they access, and hence is finite. The second type consists of diamonds that end in singular crunches. If the crunches originated from the decay of a de Sitter vacuum, then the entropy bound is again finite [@HarSus10]. Finally, there are causal diamonds that end up in a supersymmetric bubble of zero cosmological constant. A timelike geodesic that enters such a bubble will remain within the diamond for an infinite time. It is convenient to associate an observer with one of these geodesics, called the Census Taker.
The Census Taker’s past light-cone becomes arbitrarily large at late times. It asymptotes to the null portion of future infinity called the “hat”, and from there extends down into the non-supersymmetric bulk of the multiverse (Fig. \[fig-biglittle\]).
If the CT’s bubble nucleates very late, the entire hat appears very small on the conformal diagram and the CT’s past light-cone looks similar to the past light-cone of a point on the future boundary of ordinary de Sitter space. However, this is misleading; the entropy bound for the hat geometry is not the de Sitter entropy of the ancestor de Sitter space. It is infinite.
According to the criterion of Ref. [@HarSus10], the existence of light-sheets with unbounded area implies the existence of precise observables. A holographic dual should exist by which these observables are defined. The conjecture of Ref. [@FreSek06]—the FRW/CFT duality—exhibits close similarities to AdS/CFT. In particular, the spatial hypersurfaces of the FRW geometry under the hat are open and thus have the form of three-dimensional Euclidean anti-de Sitter space. Their shared conformal boundary—the space-like infinity of the FRW geometry—is the two-dimensional “rim” of the hat. As time progresses, the past light-cone of the Census Taker intersects any fixed earlier time slice closer and closer to the boundary. For this reason the evolution of the Census Taker’s observations has the form of a renormalization group flow in a two-dimensional Euclidean conformal field theory [@SekSus09].
We will return to the holographic dual later, but for now we will simply adopt the conjecture that exact versions of observables exist in the hat. We can then ask whether hat observables are [*observable*]{}, in the sense of Postulate II above. In particular, this would require that things [*happen*]{} in the hat, in the sense of Postulate I. We do not understand the fundamental description of the hat well enough to prove anything rigorously, but we will give some arguments that make it plausible that both postulates can indeed be satisfied. Our arguments will be based on conventional (bulk) observables.
Consider an interaction at some event $X$ in the hat region that entangles an apparatus with a photon. If the photon gets out to the null conformal boundary, then a measurement has [*happened*]{} at $X$, and Postulate I is satisfied. In general the photon will interact with other matter, but unless this interaction takes the form of carefully arranged mirrors, it will not lead to recoherence. Instead, it will enlarge the environment, and sooner or later a massless particle that is entangled with the event at $X$ will reach the null boundary. For Postulate I to be satisfied it suffices that there exist [*some*]{} events for which this is the case; this seems like a rather weak assumption.
By the FRW symmetry [@CDL] of the hat geometry, the same measurement [*happens*]{}, with the same initial conditions, at an infinite number of other events $X_i$. Moreover, any event in the hat region eventually enters the Census Taker’s past light-cone. Let $N(t)$ be the number of equivalent measurements that are in the Census Taker’s past light-cone at the time $t$ along his worldline, and let $N_i(t)$ be the number of times the outcome $i$ [*happens*]{} in the same region. Then the limit in Eq. (\[eq-nin\]) can be realized as $$p_i= \lim_{t\to \infty} \frac{N_i(t)}{N(t)}~,$$ and Postulate II is satisfied for the particular observable measured.
A crucial difference from the S-matrix cosmology discussed in the previous subsection is that the above argument applies to any observable: if it happens once, it will happen infinitely many times. It does not matter how “large” the systems are that participate in the interaction. Because the total number of particles in the hat is infinite, there is no analogue of observables such as the total number of outgoing particles, which would be predicted by the theory but could not be measured. This holds as long as the fundamental theory predicts directly only observables in the hat, which we assume. Thus, we conclude that Postulate II is satisfied for all observables in the hat.
Since both postulates are satisfied, quantum mechanical predictions can be operationally verified by the Census Taker to infinite precision. But how do the hat observables relate to the approximate observables in causal diamonds that do [*not*]{} enter hats, and thus to the constructions of Sec. \[sec-nonhat\]? We will now argue that the non-hat observables are approximations that have exact counterparts in the hat.
Black hole complementarity and hat complementarity {#sec-complementarity}
--------------------------------------------------
In this subsection, we will propose a complementarity principle that relates exact Census Taker observables defined in the hat to the approximate observables that can be defined in other types of causal diamonds, which end in a crunch and have finite maximal area. To motivate this proposal, we will first discuss black hole complementarity [@SusTho93] in some detail.
Can an observer outside the horizon of a black hole recover information about the interior? Consider a black hole that forms by the gravitational collapse of a star in asymptotically flat space. Fig. \[fig-bh\]
shows the spacetime region that can be probed by an infalling observer, and the region accessible to an observer who remains outside the horizon. At late times the two observers are out of causal contact, but in the remote past their causal diamonds have considerable overlap.
Let $A$ be an observable behind the horizon as shown in Fig. \[fig-baf\].
$A$ might be a slightly smeared field operator or product of such field operators. To the freely falling observer the observable $A$ is a low energy operator that can be described by conventional physics, for example quantum electrodynamics.
The question we want to ask is whether there is an operator outside the horizon on future light-like infinity, that has the same information as $A$. Call it $A_{\rm out}.$ By that we mean an operator in the Hilbert space of the outgoing Hawking radiation that can be measured by the outside observer, and that has the same probability distribution as the original operator $A$ when measured by the in-falling observer.
First, we will show that there is an operator in the remote past, $A_{\rm in}$, that has the same probability distribution as $A$. We work in the causal diamond of the infalling observer, in which all of the evolution leading to $A$ is low energy physics. Consider an arbitrary foliation of the infalling causal diamond into Cauchy surfaces, and let each slice be labeled by a coordinate $t$. We may choose $t=0$ on the slice containing $A$.
Let $U(t)$ be the Schrödinger picture time-evolution operator, and let $|\Psi(t)\rangle$ be the state on the Cauchy surface $t$. We can write $|\Psi(0)\rangle$ in terms of the state at a time $-T$ in the remote past, $$|\Psi(0)\rangle = U(T)|\Psi(-T)\rangle~.$$ The expectation value of $A$ can be written in terms of this early-time state as $$\langle \Psi(-T)| U^{\dag}(T) A U(T)|\Psi(-T)\rangle~.$$ Thus the operator $$A_{\rm in}=U^{\dag}(T) A U(T)$$ has the same expectation value as $A$. More generally, the entire probability distributions for $A$ and $A_{\rm in}$ are the same. Let us take the limit $T\to \infty$ so that $A_{\rm in}$ becomes an operator in the Hilbert space of incoming scattering states.
Since the two diamonds overlap in the remote past, $A_{\rm in}$ may also be thought of as an operator in the space of states of the outside observer. Now let us run the operator forward in time by the same trick, except working in the causal diamond of the outside observer. The connection between incoming and outgoing scattering states is through the S-matrix. Thus we define $$A_{\rm out} = S A_{\rm in} S^{\dag}$$ or $$A_{\rm out} =\lim_{T\to \infty} S U^{\dag}(T) A U(T) S^{\dag}
\label{eq-aout}$$ The operator $A_{\rm out}$, when measured by an observer at asymptotically late time, has the same statistical properties as $A$ if measured behind the horizon at time zero.
The low energy time development operator $U(T)$ is relatively easy to compute, since it is determined by integrating the equations of motion of a conventional low energy system such as QED. However, this part of the calculation will not be completely precise, because it involves states in the interior of the black hole, which have a finite entropy bound. The S-matrix should have a completely precise definition but is hard to compute in practice. Information that falls onto the horizon is radiated back out in a completely scrambled form. The black hole horizon is the most efficient scrambler of any system in nature.
This is the content of black hole complementarity: observables behind the horizon are not independent variables. They are related to observables in front of the horizon by a unitary transformation. The transformation matrix is $\lim_{T\to \infty}[U(T) S^{\dag}]$. It is probably not useful to say that measuring $A_{\rm out}$ tells us what happened behind the horizon [@BouFre06a]. It is not operationally possible to check whether measurements of $A$ and $A_{\rm out}$ agree. It is enough for us that every (approximate) observable behind the horizon has a (precise) complementary image among the degrees of freedom of the Hawking radiation that preserves expectation values and probabilities.
What is the most general form of operators $A$ inside the black hole that can be written in the form of Eq. (\[eq-aout\]), as an operator $A_{\rm out}$ on the outside? Naively, we might say any operator with support inside the black hole can be so represented, since any operator can be evolved back to the asymptotic past using local field theory. But this method is not completely exact, and we know that there must be situations where it breaks down completely. For example, by the same argument we would be free to consider operators with support both in the Hawking radiation and in the collapsing star and evolve them back; this would lead us to conclude that either information was xeroxed or lost to the outside. This paradox is what led to the proposal that only operators with support inside someone’s causal patch make any sense. But that conclusion has to apply whether we are inside or outside the black hole; the infalling observer is not excepted. For example, we should not be allowed to consider operators at large spacelike separation near the future singularity of the black hole. The semiclassical evolution back to the asymptotic past must be totally unreliable in this case.[^11]
We conclude that there are infinitely many inequivalent infalling observers, with different endpoints on the black hole singularity. Approximate observables inside the black hole must have the property that they can be represented by an operator with support within the causal diamond of [*some*]{} infalling observer. Any such operator can be represented as an (exact) observable of the outside observer, i.e., as an operator $A_{\rm out}$ acting on the Hawking radiation on the outside.
This completes our discussion of black hole complementarity. Following Refs. [@FreSus04; @SekSus09], we will now conjecture a similar relation for the multiverse. The role of the observer who remains outside the black hole will be played by the Census Taker; note that both have causal diamonds with infinite area. The role of the many inequivalent infalling observers will be played by the causal diamonds that end in crunches, which we considered in Sec. \[sec-patch\], and which have finite area. The conjecture is
#### Hat Complementarity
Any (necessarily approximate) observable in the finite causal diamonds of the multiverse can be represented by an exact observable in the Census Taker’s hat.\
More precisely, we assume that for every observable $A$ in a finite causal diamond, there exists an operator $A_{\rm hat}$ with the same statistics as $A$. $A$ and $A_{\rm hat}$ are related in the same way that $A$ and $A_{\rm out}$ are in the black hole case. See Fig. \[fig-hatcomp\] for a schematic illustration.
Hat complementarity is motivated by black hole complementarity, but it does not follow from it rigorously. The settings differ in some respects. For example, a black hole horizon is a quasi-static Killing horizon, whereas the Census Taker’s causal horizon rapidly grows to infinite area and then becomes part of the conformal boundary. Correspondingly, the radiation in the hat need not be approximately thermal, unlike Hawking radiation. And the argument that any operator behind the horizon can be evolved back into the outside observer’s past has no obvious analogue for the Census Taker, since his horizon can have arbitrarily small area and hence contains very little information at early times; and since the global multiverse will generally contain regions which are not in the causal future of the causal past of any Census Taker. Here, we adopt hat complementarity as a conjecture.
Next, we turn to the question of [*how*]{} the information about finite causal diamonds shows up in the precise description of the outside observer or the Census Taker.
The global multiverse in a hat {#sec-ct}
------------------------------
An important difference between the black hole and the multiverse is that the information in the Hawking radiation is finite, whereas the number of particles in the hat is infinite. The observer outside the black hole, therefore, receives just enough information to be able to recover the initial state, and thus the approximate observables inside the black hole. On the other hand, the $O(3,1)$ symmetry of the FRW universe implies that the entropy accessible to the Census Taker is infinite. Since the number of quantum states in the finite causal diamonds that end in crunches is bounded, this implies an infinite redundancy in the Census Taker’s information. We will now argue that this redundancy is related to the reconstruction of a global multiverse from causal diamonds, described in Sec. \[sec-everett\], in which an infinite number of finite causal diamonds are used to build the global geometry.
Black holes scramble information. Therefore, an observer outside a black hole has to wait until half the black hole has evaporated before the first bit of information can be decoded in the Hawking radiation [@Pag93; @SusTho93]. After the whole black hole has evaporated, the outside observer has all the information, and his information does not increase further. Can we make an analogous statement about the Census Taker?
If the radiation visible to the Census Taker really is complementary to the causal diamonds in the rest of the multiverse, it will surely be in a similarly scrambled form. For hat complementarity to be operationally meaningful, this information must be organized in a “friendly” way, i.e., not maximally scrambled over an infinite-dimensional Hilbert space. Otherwise, the Census Taker would have to wait infinitely long to extract information about any causal patch outside his horizon.[^12] This would be analogous to the impossibility of extracting information from less than half of the Hawking radiation. An example of friendly packaging of information is not hard to come by (see also [@HayPre07]). Imagine forever feeding a black hole with information at the same average rate that it evaporates. Since the entropy of the black hole is bounded it can never accumulate more than $S$ bits of information. Any entering bit will be emitted in a finite time even if the total number of emitted photons is infinite.
We will assume that this is also the case in the hat. Then we can ask whether the Census Taker’s cutoff translates to a multiverse cutoff, hopefully something like the light-cone time cutoff of Sec. \[sec-dual\].
To understand the Census Taker’s cutoff, we start with the metric of open FRW space with a vanishing cosmological constant. We assume the FRW bubble nucleated from some ancestor de Sitter vacuum. $$ds^2 = a(T)^2 ( -dT^2 + d{\cal H}_3^2)$$ where ${\cal H}_3$ is the unit hyperbolic geometry (Euclidean AdS) $$d{\cal H}_3^2 = dR^2 + \sinh^2 {R} \ d\Omega_2^2,$$ and $T$ is conformal time. The spatial hypersurfaces are homogeneous with symmetry $O(3,1)$ which acts on the two-dimensional boundary as special conformal transformations. Matter in the hat fills the noncompact spatial slices uniformly and therefore carries an infinite entropy.
If there is no period of slow-roll inflation in the hat then the density of photons will be about one per comoving volume. We take the Census Taker to be a comoving worldline (see Fig. \[fig-hatcomp\]). The number of photons in the Census Taker’s causal past is $$N_{\gamma} \sim e^{2T_{CT}}.$$ In this formula $T_{CT}$ is the conformal time from which the Census Taker looks back. $N_{\gamma}$ represents the maximum number of photons that the Census Taker can detect by the time $T_{CT}.$
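The exponential scaling can be recovered from the hyperbolic spatial geometry (a sketch, assuming roughly one photon per unit comoving volume): along the past light-cone the comoving radius grows linearly in conformal time, $R\sim T_{CT}$, while the comoving volume of a ball in ${\cal H}_3$ grows exponentially with its radius:

```latex
% Volume of a comoving ball of radius R in the unit hyperbolic space H_3:
\begin{equation}
  V(R) = 4\pi \int_0^R \sinh^2\!R'\, dR'
       = \pi\left(\sinh 2R - 2R\right)
       \sim \frac{\pi}{2}\, e^{2R} \qquad (R \gg 1).
\end{equation}
% With n_gamma = O(1) photons per unit comoving volume and R ~ T_CT,
% this gives N_gamma ~ V(T_CT) ~ e^{2 T_CT}.
```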
The Census Taker’s cutoff is an information cutoff: after time $T_{CT}$ a diligent Census Taker can have gathered about $e^{2T_{CT}}$ bits of information. If the de Sitter entropy of the ancestor is $S_a$, then after a conformal time $T_{CT} \sim \log S_a$ the Census Taker will have accessed an amount of information equal to the entropy of the causal patch of the ancestor. Any information gathered after that must be about causal diamonds in the rest of the multiverse, in the sense that it concerns operators like $A$ that are beyond the horizon.
Over time, the Census Taker receives an unbounded amount of information, larger than the entropy bound on any of the finite causal diamonds beyond the hat. This means that the Census Taker will receive information about each patch history over and over again, redundantly. This is reminiscent of the fact that in our reconstruction of the global picture, every type of causal diamond history occurs over and over again as the new diamonds are inserted in between the old ones. This is obvious because we used infinitely many diamonds to cover the global geometry, and there are only finitely many histories that end in a crunch or on an eternal endpoint.
A particular history of a causal diamond has a larger or smaller maximum area, not a fixed amount of information. But in our reconstruction of the global multiverse, each generation of new causal diamonds (the set of diamonds starting at the green squares at equal time $t$ in Fig. \[fig-patches\]) contains all possible histories (at least for sufficiently late generations, where the number of diamonds is large). Therefore the amount of information in the reconstructed global geometry grows very simply, like $2^t$. This redundant information should show up in the hat organized in the same manner. Thus, it is natural to conjecture that the information cutoff of the Census Taker is dual, by complementarity, to the discretized light-cone time cutoff implicit in our reconstruction of a global spacetime from finite causal diamonds.
We would like to thank T. Banks, B. Freivogel, A. Guth, D. Harlow, P. Hayden, S. Leichenauer, V. Rosenhaus, S. Shenker, D. Stanford, E. Witten, and I. Yang for helpful discussions. This work was supported by the Berkeley Center for Theoretical Physics, by the National Science Foundation (award numbers 0855653 and 0756174), by FQXi grant RFP2-08-06, and by the US Department of Energy under Contract DE-AC02-05CH11231.
[^1]: For reviews, see [@Zur03; @Sch03]. For a pedagogical introduction, see [@PreNotes].
[^2]: We could explicitly include an observer who becomes correlated to the apparatus through interaction with the environment, resulting in an entangled pure state of the form $a |0 \rangle _S\otimes |0 \rangle _A \otimes |0 \rangle _E \otimes |0 \rangle _O + b |1 \rangle _S\otimes |1 \rangle _A \otimes |1 \rangle _E \otimes |1 \rangle _O$. For notational simplicity we will subsume the observer into $A$.
[^3]: This point has long been emphasized by Banks and Fischler [@Ban00; @Fis00b; @Ban01].
[^4]: However, the above discussion has implications for any global geometry in which observers fall out of causal contact at late times, including crunching universes and black hole interiors. Suppose all observers were originally causally connected, i.e., their past light-cones substantially overlap at early times. Then the different classes of decoherent histories that may be experienced by different observers arise from differences in the amount, the identity, and the order of the degrees of freedom that the observer must trace over.
[^5]: A causal patch can be viewed as the upper half of a causal diamond. In practice the difference is negligible but strictly, the global-local duality holds for the patch, not the diamond. Here we use it merely to motivate the construction of the following subsection, which does use diamonds.
[^6]: It is worth noting that the light-cone time cutoff was not constructed with this equivalence in mind. Motivated by an analogy with the UV/IR relation of the AdS/CFT correspondence [@GarVil08], light-cone time was formulated as a measure proposal [@Bou09] before the exact duality with the causal diamond was discovered [@BouYan09].
[^7]: In this aspect our viewpoint differs from [@Nom11].
[^8]: We defer to future work the interesting question of whether further constraints should be imposed on the statistics of this distribution. For example, for rational values of $\bar\Gamma$, the assignment of pure states to diamonds could be made in a periodic fashion, or at random subject only to the above constraint on relative fractions.
[^9]: Note that this construction determines the initial conditions for all but a finite number of diamonds. In a more general setting, it would select initial conditions in the dominant vacuum. We thank Ben Freivogel and I-Sheng Yang for pointing out a closely related observation.
[^10]: Although not in the context of the multiverse, Banks and Fischler have suggested that an interlocking collection of causal diamonds with finite dimensional Hilbert spaces can be assembled into the global structure of de Sitter space [@BanFis01b].
[^11]: The notion of causality itself may become approximate inside the black hole. However, this does not give us licence to consider operators at large spacelike separation inside the black hole. Large black holes contain spatial regions that are at arbitrarily low curvature and are not contained inside any single causal patch. By the equivalence principle, if we were permitted to violate the restriction to causal patches inside a black hole, then we would have to be allowed to violate it in any spacetime.
[^12]: We are thus claiming that things [*happen*]{} in the past light-cone of the Census Taker at finite time. The corresponding irreversible entanglement leading to decoherence must be that between the interior and the exterior of the Census Taker’s past light-cone on a suitable time-slice such as that shown in Fig. \[fig-hatcomp\]. Because the time-slice is an infinite FRW universe, the environment is always infinite. The size of the “system”, on the other hand, grows without bound at late times, allowing for an infinite number of decoherent events to take place in the Census Taker’s past. Thus the postulates of Sec. \[sec-sagredo\] can be satisfied.
---
abstract: 'The mechanism of spin-phonon coupling (SPC) and possible consequences for the properties of high-$T_C$ copper oxides are presented. The results are based on ab-initio LMTO band calculations and a nearly free-electron (NFE) model of the band near $E_F$. Many observed properties are compatible with SPC, such as the relation between doping and $\vec{q}$ for spin excitations and their energy dependence. The main pseudogap is caused by SPC and waves along \[1,0,0\], but it is suggested that secondary waves, generated along \[1,1,0\], contribute to a ’waterfall’ structure. Conditions for optimal $T_C$, and the possibilities for spin enhancement at the surface, are discussed.'
author:
- 'T. Jarlborg'
title: 'Properties of high-$T_c$ copper oxides from band models of spin-phonon coupling.'
---
Introduction.
=============
The normal state of high-$T_c$ copper oxides shows many unusual properties, like pseudogaps, stripe-like charge/spin modulations with particular energy and doping dependencies, Fermi-surface (FS) “arcs” in the diagonal direction, ’kinks’ and ’waterfalls’ (WF) in the band dispersions, manifestations of isotope shifts, phonon softening and so on [@tran]-[@vig]. Band results for long ’1-dimensional’ (1-D) supercells, calculated by the Linear Muffin-Tin Orbital (LMTO) method and the local spin-density approximation (LDA), show large spin-phonon coupling (SPC) within the CuO plane of these systems [@tj1]. This means that an antiferromagnetic (AFM) wave of the correct wave length and the proper phase is stronger when it coexists with the phonon [@tj3]. The LMTO results have been used to parametrize the strength of potential modulations coming from phonon distortions and spin waves of different lengths [@tj5]. These parameters have been used in a nearly free electron (NFE) model in order to visualize the band effects from the potential modulations in 2-D. Many properties are consistent with SPC, as has been shown previously [@tj1]-[@tj7].
Calculations and Results.
=========================
Ab-initio LMTO band calculations based on the local density approximation (LDA) are made for La$_{(2-x)}$Ba$_x$CuO$_4$ (LBCO), with the virtual crystal approximation (VCA) applied to the La sites to account for the doping $x$. Calculations for long supercells, mostly oriented along the CuO bond direction, are used for modeling phonon distortions and spin waves [@tj5]. The calculations show that pseudogaps (a ’dip’ in the density-of-states, DOS) appear at different energies depending on the wave lengths of the phonon/spin waves. This is consistent with a correlation between doping and wave length, and with phonon softening in doped systems [@tj3].
The difficulty with ab-initio calculations is that very large unit cells are needed for realistic 2D-waves. Another shortcoming is that the original Brillouin zone is folded by the use of supercells, which makes interpretation difficult. The band at $E_F$ is free-electron like, with an effective mass near one, and the potential modulation and SPC can be studied within the nearly free-electron model (NFE) [@tj6]. The AFM spin arrangement on near-neighbor (NN) Cu along \[1,0,0\] corresponds to a potential perturbation, $V(\bar{x}) = V_q^t exp(-i\bar{Q} \cdot \bar{x})$ (and correspondingly for $\bar{y}$). A further modulation ($\bar{q}$) leads to 1D-stripes perpendicular to $\bar{x}$ (or “checkerboards” in 2-D along $\bar{x}$ and $\bar{y}$), with a modification: $V(\bar{x}) = V_q^t exp(-i(\bar{Q}-\bar{q}) \cdot \bar{x})$, and the gap moves from the zone boundary to $(\bar{Q}-\bar{q})/2$ [@tj6].
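As an illustration of this gap structure, a minimal one-dimensional two-plane-wave sketch (in units $\hbar^2/2m = 1$, with illustrative numbers rather than the fitted parameters of this work) diagonalizes the $2\times2$ NFE Hamiltonian coupling the plane waves $k$ and $k-(\bar{Q}-\bar{q})$:

```python
import numpy as np

# Two-plane-wave NFE model (illustrative sketch, units hbar^2/2m = 1):
# a potential V exp(-i G x) with G = Q - q couples k and k - G, and
# diagonalizing the 2x2 Hamiltonian opens a gap of 2|V| at k = G/2.
def nfe_bands(k, G, V):
    eps1, eps2 = k**2, (k - G)**2          # unperturbed plane-wave energies
    mean, diff = 0.5 * (eps1 + eps2), 0.5 * (eps1 - eps2)
    root = np.sqrt(diff**2 + V**2)
    return mean - root, mean + root        # lower and upper NFE bands

Q, q, V = np.pi, 0.2 * np.pi, 0.05         # illustrative modulation and amplitude
G = Q - q
lower, upper = nfe_bands(G / 2, G, V)      # evaluate at the shifted zone boundary
print(upper - lower)                       # gap = 2*V
```

At $k = (\bar{Q}-\bar{q})/2$ the two plane waves are degenerate, so the splitting is exactly $2|V|$, which is the statement that the gap follows the modulation vector away from the original zone boundary.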
The NFE model reproduces the qualitative results of the full (1-D) band calculation. In 2-D it leads to a correlation between doping and the amplitude of $V_q^t$, because the gap (at $E_F$) opens along $(k_x,0)$ and $(0,k_y)$, but not along the diagonal [@tj6]. The combined effect is that the dip in the total DOS (at $E_F$) will not appear at the same band filling for a small and a wide gap. The $q$ vs. $x$ behavior for a spherical NFE band with $m^*$ close to 1, and parameters $V^t_q$ (for one type of phonon) from ref. [@tj6], shows a saturation, see Fig. 1. This is quite similar to what is observed [@yam]. The reason is that no checkerboard solutions are found for larger doping, but unequal $q_x/2$ (fixed near 0.11) and $q_y/2$ produce realistic solutions. The DOS at $E_F$, $N$, is lowest within the pseudogap. A gap caused by spin waves disappears at a temperature $T^*$, when thermal excitations can overcome the gap and the spin wave can no longer be supported. Therefore, the spin waves are most important for the pseudogap (even though phonons are important via SPC), and $T^*$ is estimated to be one fourth of the spin part of $V^t_q$ [@tj6]. The opposite $x$-variations of $T^*$ and $N$ (note that $\lambda \propto N V_q^2$) provide an argument for optimal conditions for superconductivity at intermediate $x$ [@tj6]. Moreover, the pseudogap competes with superconductivity at underdoping, since dynamic SPC would be their common cause. However, there is a possibility of raising $N$ and $T_C$ by creating an artificial, static pseudogap through a periodic distribution of dopants or strain. Two parameters, the periodicity and the strength of the perturbing potential, should be adjusted so that the peak in the DOS above or below the static pseudogap coincides with $E_F$.
![The variation of magnetic modulation vector $q/2$ (Circles), density-of-states in the pseudogap (+-signs $N(E_F)/50$ in states per cell/Ry/spin), and $T^*/10000$ (x-signs in K), as function of doping $x$ in a NFE model with parameters from ref. [@tj6]. Two different $q$-vectors along $q_x$ and $q_y$ are needed for doping larger than about 0.12 as indicated by the filled and open circles. []{data-label="fig0"}](nfe55.ps){height="7.0cm" width="8.0cm"}
The degree of SPC is different for different phonons. The total $V_q^t$ with contributions from phonons and spin waves, calculated from LMTO and information from phonon calculations for Nd$_2$CuO$_4$ [@chen], are 17, 18, 23 and 22 mRy at energies centered around 15 (La), 25 (Cu), 50 (plane-O) and 60 meV (apical-O), respectively [@tj7]. The results for $x$=0.16 are shown in Fig. 2 together with experimental data [@vig; @tran1]. The points below 70 meV are for the coupling to the 4 types of phonons. The spectrum is shaped like an hourglass with a “waist” at intermediate energy, with the largest SPC for plane-O. The solutions for energies larger than 70 meV are independent of phonons and the exact $(q,\omega)$ behavior is more uncertain [@tj7]. Less doping implies larger $V^t_q$ and longer waves. All $\vec{q}$ become smaller and the waist becomes narrower, as can be verified in LBCO for $x=1/8$ [@tran1], and recently in lightly doped La$_{1.96}$Sr$_{0.04}$CuO$_4$ [@mat]. However, the spin modulation in the latter case is in the diagonal direction. Heavier O-isotopes will decrease the frequencies of the phonons and the coupled spin waves, and move the waist to lower $E$.
![Filled circles: Calculated $q-\hbar\omega$ relation from the 2D-NFE model and the parameters $V_q^t$ for doping $x=$0.16. The solution without SPC at the largest energy, is less precise. Broken line: Approximate shape of the experimental dispersion as it is read from figure 3c in the work of Vignolle [*et al*]{} [@vig] for La$_{2-x}$Sr$_x$CuO$_4$ at $x=0.16$. Thin semi-broken line: Experimental dispersion read from the data by Tranquada [*et al*]{} [@tran1] on LBCO at lower doping, $x=0.125$. []{data-label="fig1"}](nfeqxvt.ps){height="7.0cm" width="8.0cm"}
Another odd feature is the WF dispersion of the band below $E_F$ in the diagonal direction, seen by ARPES [@chang]. The suggestion here is that this feature comes from a gap below $E_F$ in the diagonal direction. An inspection of the potential for stripe modulations along \[1,0,0\] reveals that the potential becomes modulated also along \[1,1,0\], albeit in a different fashion. The potential is slowly varying, like the absolute value of sine functions with different phases along different rows. No NN-AFM potential shifts are found along \[1,1,0\], and the dominant Cu-d lobes of the wave function for $\vec{k}$ along \[1,1,0\] are oriented along \[1,1,0\] and not along the bond direction. Various arguments for the effective periodicity, partly based on these conditions, indicate that a gap should appear at about 1/3 of the distance between $\Gamma$ and the $M$ point when the doping is near 0.16. The effective $V_q$ should be less than half of the amplitude along \[1,0,0\]. The result is shown in Fig. 3. The $k$-position of the gap and the extreme values of the gap energies ($\sim$ 0.5-1 eV below $E_F$) are not too far from what is seen experimentally [@chang], but again, the quantitative power of the NFE model is limited. It is not clear if the vertical part of the band dispersion can be observed. A vertical line connects the states above and below the gap in Fig. 3, which could be justified for an imperfect gap away from the zone boundary.
![Thin lines: LMTO band structure for LBCO between $M$ and $\Gamma$. Broken line: the FE fit, and the heavy line the NFE solution with a gap. $E_F$ is at zero in undoped LBCO, and at the thin broken line for $x \sim 0.15$. []{data-label="fig2"}](wf.ps){height="7.0cm" width="8.0cm"}
The dynamics is important if SPC mediates superconductivity. However, static, stripe-like features are identified by scanning tunneling microscopy (STM) [@davis]. Impurities and defects near the surface might be important, but also the surface itself could modify the conditions for SPC. The latter hypothesis is investigated in LMTO calculations which simulate the surface through insertion of two layers of empty spheres between the outermost LaO-layers. These calculations consider 3 and 5 layers of undoped La$_2$CuO$_4$, and 3 layers of doped LBCO with and without phonon distortion in a cell of length 4$a_0$ in the CuO bond direction. The SPC remains in the surface layer. The effective doping is in all cases found to increase close to the surface, which has 0.1-0.2 more electrons/Cu than the Cu in the interior, and the magnetic moment is 2-3 times larger in the surface layer. The moments disappear without field, but a calculation for 3 layers of La$_2$CuO$_4$ with a narrower separating layer has stable AFM moments $\pm 0.06 \mu_B$ per Cu within the surface layer, and the local DOS on the Cu at the surface drops near $E_F$. In addition, the apical-O nearest to the surface also acquires a sizable moment. This calculation is simplified, with a probable interaction across the empty layer, but it shows that static AFM surface configurations are very close to stability.
Conclusion
==========
Band calculations show that SPC is important for waves along \[1,0,0\] or \[0,1,0\], with secondary effects in the diagonal direction. Many properties, like pseudogaps, phonon softening, dynamic stripes, the correlation between $\bar{q}$ and $x$, smearing of the non-diagonal part of the FS, and the abrupt disappearance of the spin fluctuations at a certain $T^*$, are possible consequences of SPC within a rather conventional band [@tj5; @tj6]. Different SPC for different phonons leads to an hourglass shape of the $(q,\omega)$-spectrum, with the narrowest part for the modes with the strongest coupling. The much discussed WF-structure in the diagonal band dispersion could be a result of a secondary potential modulation in this direction and a gap below $E_F$. Static potential modulations within the CuO-planes, such as for superstructures, could compensate the pseudogap and enhance $N(E_F)$ and $T_C$. Spin waves become softer through interaction with phonons and near the surface. These LDA results show a tendency for static spin waves at the surface.
[10]{}
J.M. Tranquada, B.J. Sternlieb, J.D. Axe, Y. Nakamura and S. Uchida, Nature [**375**]{}, 561 (1995) and references therein.
A. Damascelli, Z.-X. Shen and Z. Hussain, Rev. Mod. Phys. [**75**]{}, 473, (2003) and references therein.
J. Chang, M. Shi, S. Pailhes, M. M[å]{}nson, T. Claesson, O. Tjernberg, A. Bendounan, L. Patthey, N. Momono, M. Oda, M. Ido, C. Mudry and J. Mesot, cond-mat arXiv:0708.2782 (2007).
M. Matsuda, M. Fujita, S. Wakimoto, J.A. Fernandez-Baca, J.M. Tranquada and K. Yamada, cond-mat arXiv:0801.2254v1 (2008).
T. Fukuda, J. Mizuki, K. Ikeuchi, K. Yamada, A.Q.R. Baron and S. Tsutsui, Phys. Rev. B[**71**]{}, 060501(R), (2005).
G.M. Zhao, H. Keller and K. Conder, J. Phys.: Cond. Mat. [**13**]{}, R569, (2001).
G.-H. Gweon, T. Sasagawa, S.Y. Zhou, J. Graf, H. Takagi, D.-H. Lee and A. Lanzara, Nature [**430**]{}, 187, (2004).
K. Yamada, C.H. Lee, K. Kurahashi, J. Wada, S. Wakimoto, S. Ueki, H. Kimura, Y. Endoh, S. Hosoya, G. Shirane, R. J. Birgeneau, M. Greven, M.A. Kastner and Y.J. Kim, Phys. Rev. B[**57**]{}, 6165, (1998).
J.M. Tranquada, H. Woo, T.G. Perring, H. Goka, G.D. Gu, G. Xu, M. Fujita and K. Yamada, Nature [**429**]{}, 534 (2004).
B. Vignolle, S.M. Hayden, D.F. McMorrow, H.M. Rönnow, B. Lake and T.G. Perring, Nature Physics [**3**]{}, 163, (2007).
T. Jarlborg, Phys. Rev. B[**64**]{}, 060507(R), (2001).
T. Jarlborg, Phys. Rev. B[**68**]{}, 172501 (2003).
T. Jarlborg, Physica C[**454**]{}, 5, (2007).
T. Jarlborg, Phys. Rev. B[**76**]{}, 140504(R), (2007).
T. Jarlborg, cond-mat arXiv:0804.2403, (2008).
H. Chen and J. Callaway, Phys. Rev. B[**46**]{}, 14321, (1992).
Y. Kohsaka, C. Taylor, K. Fujita, A. Schmidt, C. Lupien, T. Hanaguri, M. Azuma, H. Eisaki, H. Takagi, S. Uchida and J.C. Davis, Science [**315**]{}, 1380 (2007).
[On geometry behind Birkhoff theorem\
]{} *Pavol Ševera\
Dept. of Theoretical Physics, Comenius University,\
84215 Bratislava, Slovakia*
Area as the affine parameter
============================
Suppose $N$ is a 2dim spacetime, i.e. a surface with a metric tensor of signature $(1,1)$. It is fairly easy to find the isotropic geodesics of $N$ – we just integrate the isotropic directions. It is perhaps more interesting to find affine parameters for these geodesics: one can use the area of a strip between two infinitesimally close integral curves: $$\epsfxsize=7cm \epsfbox{isogeo.eps}$$
To prove this fact we use the following useful characterization of geodesics in a pseudo-Riemannian manifold $M$ (for the purposes of teaching general relativity, it appears as a convenient definition): [*If we choose coordinates near a curve $\ga$ so that the metric tensor is a constant plus $O(r^2)$, where $r$ is (say) the Euclidean distance to $\ga$, then $\ga$ is a geodesic iff it is a straight line.*]{} Now our claim that area can be used as the affine parameter is clear, since it is surely so for a constant metric tensor.
We shall use our result in the next section to prove Birkhoff theorem, but now, as a digression, we mention some other elementary applications. We can use it conveniently to check isotropic geodesic completeness of 2dim spacetimes. Consider, for example, the Eddington-Finkelstein metric: $$ds^2=-(1-{2m\over r})du^2+2dudr.$$ We prove that the geodesic $\ga$ on the following picture is incomplete to the left; we do not use the exact form of the metric, only the fact that it is invariant w.r.t. horizontal translations and that the geodesics converge on the left (on the picture, $u$ is the horizontal coordinate):
$$\epsfxsize=7cm \epsfbox{efin.eps}$$
Indeed, the green area between $\ga$ and $\bar\ga$ is equal to the red area: to see it, just take the green triangle and translate it to the right. The red area is finite, q.e.d.
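As a quick cross-check (a supplementary computation, not part of the original argument), the area rule can be made explicit for the Eddington-Finkelstein metric. In the coordinates $(u,r)$ the metric determinant is $$\det\begin{pmatrix}-(1-{2m\over r}) & 1\\ 1 & 0\end{pmatrix}=-1,$$ so the area form is simply $du\wedge dr$. The strip between two ingoing null lines $u=\cst$ and $u=\cst+\delta u$, taken between the radii $r_1$ and $r_2$, therefore has area $\delta u\,(r_2-r_1)$, linear in $r$. By our result from §1, $r$ is then an affine parameter along the ingoing rays, recovering the standard fact that $r$ affinely parametrizes radial null geodesics in these coordinates.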
As another example, consider a $2n$-gon with isotropic sides. Widen each of its sides to an infinitesimally narrow strip between two isotropic curves, and compute the expression $$\mu={A_1 A_3\dots A_{2n-1}\over A_2 A_4\dots A_{2n}},$$ where $A_i$ is the area at the $i$’th corner, as on the picture: $$\epsfxsize=7cm \epsfbox{isogon.eps}$$
$\mu$ is clearly independent of the choice of the widening of the sides: if we widen the $i$’th side in a different way, $A_i$ and $A_{i+1}$ get multiplied by the same number, so that $\mu$ doesn’t change. We leave it as an exercise to the reader to prove that $\mu$ is actually the result of the parallel transport along the polygon, so that it is equal to the exponential of $\pm$ the integral of the curvature inside the polygon.
Birkhoff theorem
================
Suppose $M$ is a 4dim spacetime on which $SO(3)$ acts by isometries, so that all the orbits are spheres. Birkhoff theorem states that under some assumption on the Ricci tensor, there is a 1-parameter group of isometries of $M$, commuting with $SO(3)$.
As one easily sees, $M$ is of the form $M=N\times S^2$, where $N$ is a surface, and $$ds^2_M=ds^2_N+r^2ds^2_{S^2},$$ where $r>0$ is a function on $N$ and $ds^2_{M,N,S^2}$ are the metrics on $M$, $N$ and the unit sphere. Let $R$ and $\ric$ be the Riemann and Ricci tensor on $M$, and $\om_N$ the area form on $N$. Finally, let $X_r$ be the Hamiltonian vector field on $N$ generated by $r$ with symplectic form $\om_N$, i.e. $$dr=\om_N(\cdot,X_r).$$
[*Birkhoff theorem: If $\ric(v,v)=0$ for any isotropic $v$ tangent to $N$, then $X_r$ is a Killing vector field on $M$.*]{}
The proof is split into two lemmas:
[*Lemma 1: If $N$ is a 2dim spacetime then a vector field $w$ on $N$ is conformal iff for any isotropic geodesic $\ga$, $\om_N(\dot{\ga},w)$ is constant along $\ga$.*]{}
[*Lemma 2: Under the same assumptions as in Birkhoff theorem, if $\ga$ is any isotropic geodesic in $N$ and $s$ is its affine parameter, then $dr/ds=\cst$ along $\ga$.*]{}
[*Proof of the theorem:*]{} We have to prove that the flow of $X_r$ preserves $r$ and $ds^2_N$. First we show that $X_r$ is conformal on $N$. It follows from the lemmas, since $$\cst=dr/ds=\langle\dot{\ga},dr\rangle=\om_N(\dot{\ga},X_r).$$
It remains to check that $X_r$ preserves $\om_N$ and $r$, but it certainly does, since it is a Hamiltonian vector field q.e.d.
[*Proof of Lemma 1:*]{} This follows immediately from our result in §1. Indeed, $w$ is conformal iff its flow transports isotropic curves to isotropic curves. As we noticed, the area between $\ga$ and an infinitesimally close curve $\bar{\ga}$ is a linear function of $s$ iff $\bar{\ga}$ is isotropic q.e.d.
[*Proof of Lemma 2:*]{} If $\ga$ is an isotropic geodesic in $N$ and $P\in S^2$ then $\ga\times\{P\}$ is a geodesic in $M$. It follows from the $O(2)$-symmetry of isometries of $S^2$ preserving $P$: a geodesic is uniquely determined by its velocity at a point, so that if the velocity is $O(2)$ invariant, the geodesic must be pointwise $O(2)$-invariant, i.e. it lies in $N\times\{P\}$.
If $Q$ is a different point in $S^2$ then $\ga\times\{Q\}$ is also a geodesic; therefore, if we take $Q$ infinitesimally close to $P$ we get that any constant vector $a\in T_PS^2$ satisfies the Jacobi (geodesic deviation) equation $$\ddot{a}+R(a,\dot{\ga})\dot{\ga}=0.$$ On the other hand, the parallel transport along $\ga\times\{P\}$ must be $O(2)$-equivariant, i.e. it acts as a multiple of the identity on $T_PS^2$. It also preserves lengths, so that $a/r$ is parallel, and $$\ddot{a}=(ra/r)\ddot{\;}=\ddot{r}a/r.$$
To prove that $\ddot{r}=0$ it remains to show that if $v$ is an isotropic vector tangent to $N$ and $a$ is a vector tangent to $S^2$, then $R(a,v)v=0$. By definition, $\ric(v,v)$ is the trace of the linear map $A$ defined by $A(w)=R(w,v)v$. From $O(2)$ symmetry we see that $A(a)=\la a$ for some number $\la$ (independent of $a$); on the other hand, $A$ restricted to $TN$ is nilpotent, since $A(v)=0$ and $A(w)$ is orthogonal to $v$ for any $w$, hence it is a multiple of $v$. Therefore $0=\mbox{\it Tr}A=2\la$, i.e $A(a)=0$, q.e.d.
---
abstract: 'We present a detailed analysis of the vibrational spectrum and heat capacity of suspended mesoscopic dielectric plates, for various thickness-to-side ratios at sub-Kelvin temperatures. The vibrational modes of the suspended cavity are accurately obtained from the three-dimensional (3D) elastic equations in the small strain limit and their frequencies assigned to the cavity phonon modes. The calculations demonstrate that the heat capacity of realistic quasi-2D phonon cavities approaches a linear dependence on $T$ at sub-Kelvin temperatures. The behavior is more pronounced for the thinnest cavities, but takes place also for moderately thick structures, with thickness-to-side ratios $\gamma$=0.1 to 0.2. It is also demonstrated that the heat capacity of the suspended phonon cavities depends on the temperature (T) and a characteristic lateral dimension (L) of the sample only through the product $TL$. The present results establish a lower bound for the heat capacity of suspended mesoscopic structures and indicate the emergence of the quantum mechanical regime in the dynamics of bounded phonon cavities.'
author:
- 'A. Gusso$^{1,2}$[^1] and Luis G. C. Rego$^1$'
title: Heat capacity of suspended phonon cavities
---
Introduction
============
Suspended nanostructures have become relevant elements for both basic research and technology. Micro-electromechanical systems (MEMS), such as cantilevers, gears and membranes, already find widespread use in several technological applications [@handbook]. At the same time, current developments in surface nanomachining render possible the controlled fabrication of a large variety of suspended nanostructures [@Cleland; @Ekinci] having, in particular, an extremely weak thermal coupling with the environment. As a consequence, ultrasensitive bolometers [@Yung02] and calorimeters [@Fon05; @Bourgeois05] have been developed with unprecedented sub-attojoule resolution, for operation in the $T \lesssim 5$ Kelvin temperature range, envisaging the possibility of measuring the heat capacity of nano-objects and, eventually, even single molecules [@Roukes99]. In the realm of fundamental research, the operation of nanoelectromechanical systems (NEMS) is finally approaching the quantum regime [@Blencowe; @Knobel03; @Lahaye04]. The construction of suspended solid state quantum logic gates [@Armour02; @Cleland04] is among the applications anticipated for such structures, since the electron-phonon interaction, which is a source of decoherence and dissipation for both quantum dot qubits [@Gorman05; @Hayashi03] and single-electron transistors (SET) [@Weig04], can be controlled in them [@Tobias; @chaos1; @chaos2; @Glavin].
In fact, for most of the cases mentioned above the devices are operated at sub-Kelvin temperatures; the requirement of ultracold temperatures being especially severe for the operation of quantum logic gates. For instance, in recent experimental realizations, quantum dot charge-qubits [@Gorman05; @Hayashi03] and suspended SETs [@Weig04] have been operated at a base temperature of 20 mK. It is therefore reasonable to expect that suspended nanostructures comprising such quantum devices as well as ultrasensitive bolometers and calorimeters will be functioning at temperatures $T \lesssim 1$K.
Despite the interest, there is not yet a comprehensive theory for the electron-phonon interaction in suspended nanostructures [@Fon02; @Qu05]. A central issue of the problem is the difficulty of rigorously describing the low temperature acoustic phonon spectrum in suspended nano-devices, which is fundamental for determining [*(i)*]{} the electron-phonon interaction with its many consequences for the device operation and [*(ii)*]{} the device’s thermal properties, such as its thermal conductivity and heat capacity. At sub-Kelvin temperatures, the formalisms adopted for bulk materials may not produce correct results for the phonon spectrum of suspended nanostructures because the wavelength and mean free path of the dominant phonons can be bigger than the physical dimensions of the structure. Moreover, there are no general analytical solutions for the vibrational modes of bounded suspended plates [@Liew95].
Motivated by these circumstances, this work presents a detailed study of the phonon spectrum and the heat capacity ($C_V$) of suspended rectangular dielectric nanostructures of various thicknesses at sub-Kelvin temperatures. The vibrational modes of the suspended cavity are accurately calculated from the three-dimensional (3D) elastic equations in the small strain limit and the obtained frequencies assigned to the cavity phonon modes. After obtaining a reliable phonon spectrum, with convergence assured for a few thousand cavity modes, the heat capacity of isolated suspended mesoscopic phonon cavities having 3D and quasi-2D character is investigated. For such systems, the calculations demonstrate that the temperature dependence of $C_V$ approaches the linear regime at sub-Kelvin temperatures, the effect being more pronounced for quasi-2D nanostructures. Nonetheless, a simple model of plane waves yields a phonon spectrum in good agreement with the 3D elastic model for the very thick suspended nanostructures. A dimensional analysis of the free vibrational modes also reveals that the heat capacity of the rectangular phonon cavities is scale invariant, that is, $C_V$ depends on the temperature and a characteristic lateral dimension only through their product. The present results indicate that the low temperature heat capacity of quasi-2D suspended nanostructures may have been underestimated and, therefore, set a lower bound for their heat capacity.
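The scale invariance can be illustrated with a toy discrete spectrum (an assumed isotropic sound speed and mode counting, not the elastic-equation modes computed in this work): since every cavity frequency scales as $1/L$, the Bose occupation factors, and hence $C_V$, depend on $T$ and $L$ only through the product $TL$.

```python
import numpy as np

kB, hbar = 1.380649e-23, 1.054571817e-34   # J/K, J*s (CODATA values)

def heat_capacity(omega, T):
    """Bose heat capacity of a discrete mode spectrum:
    C_V = kB * sum_i x_i^2 e^{x_i} / (e^{x_i} - 1)^2,  x_i = hbar*omega_i/(kB*T)."""
    x = hbar * np.asarray(omega) / (kB * T)
    return kB * np.sum(x**2 * np.exp(x) / np.expm1(x)**2)

def modes(L, v=5000.0, nmax=30):
    """Toy 2D spectrum omega = (v*pi/L)*sqrt(nx^2 + ny^2); v is an assumed sound speed."""
    n = np.arange(1, nmax + 1)
    nx, ny = np.meshgrid(n, n)
    return (v * np.pi / L) * np.sqrt(nx**2 + ny**2).ravel()

# Every omega scales as 1/L, so doubling L while halving T leaves every
# occupation factor, and therefore the heat capacity, unchanged.
c1 = heat_capacity(modes(1e-6), 0.10)      # L = 1 um at 100 mK
c2 = heat_capacity(modes(2e-6), 0.05)      # L = 2 um at  50 mK
print(np.isclose(c1, c2))                  # True
```

The same cancellation holds for the exact elastic-mode frequencies, since they too scale inversely with a uniform rescaling of the cavity dimensions.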
Theoretical Formulation
=======================
We consider suspended nanostructures of rectangular geometry, with lateral dimensions defined by $L_x$ and $L_y$, and thickness $L_z$. Such a choice is motivated by the fact that several recent experiments [@Yung02; @Tighe97; @Schwab00], probing thermal and electrical properties of suspended nanostructures, have utilized square or rectangular plates with thickness-to-side ratios ($\gamma = L_z/L_y$) including 0.1 \[\], 0.04 \[\], and 0.015 \[\]. The first structure is considered to be a moderately thick plate, while the last two cases are examples of thin plates. Because the suspended nanostructures are usually made of non-crystalline materials, like poly-silicon and amorphous SiN (Silicon Nitride), in this work the phonon cavities are taken to be homogeneous and isotropic rectangular structures.
Because at sub-Kelvin temperatures the dominant phonon modes are long wavelength acoustic ones, with a mean free path that exceeds the dimensions of the structure [@Yung02; @Tighe97; @Schwab00], we resort to the elasticity theory to obtain the phonon spectrum of the suspended nanostructures. In this limit the phonons correspond to the free vibrational modes of the cavity [@chaos2], as determined by the elastic theory of solids [@Graff]. The continuum elasticity model has been successfully used to describe the properties of propagating phonons in beams [@beams; @Santamore], thin membranes [@Tobias; @slabs], and arrays of nanomechanical resonators [@Photiadis; @Zalalutdinov].
To secure a correct description of the thermal properties of suspended nanostructures of a few $\mu m^2$ in area in the sub-Kelvin temperature regime, at least a few thousand vibrational modes have to be calculated with confidence. A variety of methods for calculating the free vibrations of thick plates have been developed [@Liew95]. Since simplified models such as the Classical Plate Theory (CPT) [@Leissa] are adequate only for the lowest modes of thin plates, a three-dimensional analysis of the free vibrations of the cavity is necessary. In general such methods utilize the Rayleigh-Ritz formalism to determine the displacement field, which is represented as a series of orthogonal polynomials [@Liew; @Zhou]. In this work we follow the procedure developed by Zhou et al. [@Zhou] because of its simplicity and generality, as well as for producing very accurate natural frequencies. For the sake of completeness, the method is summarized next.
For the problem of the free vibrations of isotropic structures in the small strain approximation, the kinetic ($\textsf{T}$) and strain ($\textsf{U}$) elastic energy functionals for the displacement field $\mathbf{u}(\vec{r},t)=\mathbf{U}(\vec{r})e^{i\omega t}$ can be written as $$\begin{aligned}
\textsf{T} &=& \frac{\rho}{2} \int \left[ \left(\frac{\partial u_x}{\partial t}\right)^2 +
\left(\frac{\partial u_y}{\partial t}\right)^2 + \left(\frac{\partial u_z}{\partial t}\right)^2 \right] dv \label{T} \\
\textsf{U} &=& \frac{E}{2(1+\nu)} \int \left(\frac{\nu \Lambda^2_1}{1-2\nu} + \Lambda_2 +
\frac{\Lambda_3}{2} \right) dv \ , \label{V}\end{aligned}$$ where $\rho$ is the mass density, $E$ is Young’s modulus and $\nu$ is Poisson’s ratio. The $\Lambda$ quantities in the strain energy term are $\Lambda_1= \sum_i \varepsilon_{ii}$, $\Lambda_2 = \sum_i \varepsilon_{ii}^2$ and $\Lambda_3 = \sum_{i<j} \varepsilon_{ij}^2$, for $i,j=(x,y,z)$, with the components of the strain $\varepsilon_{ii} =\partial_i u_i$ and $\varepsilon_{ij} =
\partial_j u_i + \partial_i u_j$. It is convenient to normalize the coordinates with respect to the dimensions of the plate, defining the dimensionless variables $\xi=2x/L_x$, $\eta=2y/L_y$ and $\zeta=2z/L_z$ in the interval \[-1,1\]. The time-independent displacement field $\mathbf{U}(\vec{r})$ is then written as a sum of orthogonal Chebyshev polynomials multiplied by boundary functions $F_{\delta}(\xi,\eta)$ $$\begin{aligned}
U_{\mathrm{x}}(\xi,\eta,\zeta) &=& F_\mathrm{x}(\xi,\eta) \sum_{i,j,k} A_{ijk}P_i(\xi)P_j(\eta)P_k(\zeta) \\
U_{\mathrm{y}}(\xi,\eta,\zeta) &=& F_\mathrm{y}(\xi,\eta) \sum_{i,j,k} B_{ijk}P_i(\xi)P_j(\eta)P_k(\zeta) \\
U_{\mathrm{z}}(\xi,\eta,\zeta) &=& F_\mathrm{z}(\xi,\eta) \sum_{i,j,k}
C_{ijk}P_i(\xi)P_j(\eta)P_k(\zeta) \ , \label{displacements}\end{aligned}$$ with the summations beginning from zero. The functions $P_n(\chi)$ are Chebyshev polynomials of the first kind and degree $n$, defined by the relation $$\begin{aligned}
P_n(\chi) = \cos\left[n\arccos(\chi)\right] \ ,\label{Chebyshev}\end{aligned}$$ with $n$ a non-negative integer. The boundary functions $F_\mathrm{x}(\xi,\eta)$, $F_\mathrm{y}(\xi,\eta)$ and $F_\mathrm{z}(\xi,\eta)$ have the general form $F_{\delta}(\xi,\eta) =
f^1_{\delta}(\xi)f^2_{\delta}(\eta)$, with $\delta=\mathrm{x},\mathrm{y},\mathrm{z}$. For our purposes the boundary conditions of interest are FF (free-free), CC (clamped-clamped) and CF/FC, which correspond, respectively, to the pairs of functions $f^1_{\delta}(\xi) = f^2_{\delta}(\eta) = 1$; $f^1_{\delta}(\xi) = 1-\xi^2$, $f^2_{\delta}(\eta) = 1-\eta^2$; and $f^1_{\delta}(\xi) = 1 \pm \xi$, $f^2_{\delta}(\eta) = 1 \pm \eta$.
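The basis entering this expansion is simple to reproduce numerically. Below is a minimal Python sketch (assuming only NumPy) of the Chebyshev polynomials of Eq. (\[Chebyshev\]) and of illustrative one-edge boundary factors for free (F) and clamped (C) edges; the dictionary labels are ours, introduced only for this example.

```python
import numpy as np

def P(n, chi):
    """Chebyshev polynomial of the first kind: P_n(chi) = cos(n * arccos(chi))."""
    return np.cos(n * np.arccos(np.clip(chi, -1.0, 1.0)))

# One-edge boundary factors; a full boundary function is F_delta = f1(xi) * f2(eta).
f_edge = {
    "F": lambda u: np.ones_like(u),  # free-free: no constraint on the edge
    "C": lambda u: 1.0 - u**2,       # clamped-clamped: vanishes at u = -1 and u = 1
}

# Cross-check against the Chebyshev recurrence T_{n+1} = 2*chi*T_n - T_{n-1}
chi = np.linspace(-1.0, 1.0, 101)
for n in range(1, 8):
    assert np.allclose(P(n + 1, chi), 2 * chi * P(n, chi) - P(n - 1, chi), atol=1e-9)
```

The trigonometric form is convenient here because the normalized coordinates $\xi,\eta,\zeta$ are already confined to $[-1,1]$.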
Substituting the series representation for $\mathbf{u}(\vec{r},t)=\mathbf{U}(\vec{r})e^{i\omega t}$ into Eqs. (\[T\]) and (\[V\]), one obtains $\textsf{T}_{max}$ and $\textsf{U}_{max}$, which are the maximum values of $\textsf{T}$ and $\textsf{U}$ during a vibratory cycle. The frequency determinant is formulated by minimizing the functional $\textsf{U}_{max}-\textsf{T}_{max}$ with respect to each of the coefficients $\{A\}$, $\{B\}$ and $\{C\}$, to produce the 3D elastic equations of motion $$\begin{aligned}
\left(\left[K\right] - \Omega^2\left[M\right]\right) \left(\begin{array}{c}
\left\{A\right\} \\
\left\{B\right\} \\
\left\{C\right\} \\
\end{array}\right) = 0 \ , \label{eigen}\end{aligned}$$ where $\Omega = \omega L_x \sqrt{\rho/E}$ is a dimensionless parameter and $\omega$ is the free vibration frequency, to be assigned to the phonons. In Eq. (\[eigen\]), $\left[K\right]$ and $\left[M\right]$ denote the symmetric stiffness matrix and the block diagonal mass matrix, respectively, which can be found in explicit form in Ref. \[\]. Another useful dimensionless parameter associated with the frequency is $\Delta =
\left(\Omega/\lambda\gamma\pi^2\right)\sqrt{12(1-\nu^2)}$, with $\lambda = L_x/L_y$ and $\gamma =
L_z/L_y$, which yields the normalized frequencies for the family of all rectangular plates with the same aspect ratios ($\lambda$ and $\gamma$) and elastic constants ($E$ and $\nu$).
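The two dimensionless parameters are straightforward to evaluate. The sketch below (plain Python; the 1 GHz mode frequency is an arbitrary illustrative value, not a computed eigenfrequency) converts a physical frequency into $\Omega$ and then $\Delta$, using the a-SiC constants listed in Table \[constants\].

```python
import math

def Omega(omega, Lx, rho, E):
    """Dimensionless frequency: Omega = omega * L_x * sqrt(rho / E)."""
    return omega * Lx * math.sqrt(rho / E)

def Delta(Om, lam, gamma, nu):
    """Normalized frequency: Delta = Omega * sqrt(12(1 - nu^2)) / (lam * gamma * pi^2)."""
    return Om * math.sqrt(12.0 * (1.0 - nu**2)) / (lam * gamma * math.pi**2)

# a-SiC: rho = 3.0 g/cm^3 = 3000 kg/m^3, E = 400 GPa, nu = 0.20
rho, E, nu = 3000.0, 400e9, 0.20
lam, gamma, Lx = 1.0, 0.1, 2e-6          # square plate, L = 2 um, gamma = 0.1
omega = 2.0 * math.pi * 1e9              # an illustrative 1 GHz mode
print(Delta(Omega(omega, Lx, rho, E), lam, gamma, nu))
```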
An important aspect of the present analysis is the reliability of the phonon spectrum used in the evaluation of the thermodynamical properties of the nanostructure. For that reason the convergence of the highest frequencies was required to be within 5%. The span of the spectrum can be extended by increasing the number of basis functions $P_n(\chi)$ used in the representation of $\mathbf{U}(\vec{r})$. For the thickest square plate to be considered, with $\gamma = 0.5$, we have used $n_x=n_y=29$ and $n_z=13$ Chebyshev polynomials, yielding approximately 12000 reliable modes. For the thinnest plate, $\gamma = 0.02$, the best results were obtained for $n_x=n_y=51$ and $n_z=4$, which yielded 4000 reliable phonon modes. The number of reliable frequency modes determines the maximum temperature ($T_{\rm max}$) for which the heat capacity can be calculated with confidence.
----------------- ------------------------- --------------------- -------
Material $\rho$(g/cm$^3$) $E$(GPa) $\nu$
GaAs [@aGaAs] 5.1 71 0.32
Si [@aSi] 2.3 170 0.22
SiN [@SiN] 3.1 285 0.20
SiC [@SiC] 3.0 400 0.20
----------------- ------------------------- --------------------- -------
: \[constants\] Physical constants of relevant materials in the amorphous phase.
The dependence of the phonon frequencies on the material parameters is such that higher frequencies are obtained for stiff and light materials. Table \[constants\] contains the values of the mass density, Young’s modulus and Poisson’s ratio for materials of relevance for the fabrication of NEMS. This work investigates nanostructures made of amorphous silicon carbide (a-SiC) because of its high rigidity and widespread use in the fabrication of suspended NEMS.
The calculated spectra exhibit significant dependence on the dimensions of the nanostructure, as illustrated in Figure \[spectra\] for the first 2500 vibrational modes of free standing a-SiC mesoscopic structures. The structures have the same lateral dimensions $L_x=L_y=L=2\ \mu m$ but different thickness-to-side ratios: $\gamma$ = 0.02 (solid), 0.05 (dashed) and 0.1 (dot-dashed). In the lower part of the spectrum the frequencies are higher for the thick plates; however, the behavior is reversed as the mode index $\alpha$ increases. The frequencies are also inversely proportional to the area of the plate. Moreover, a numerical analysis of the eigenfrequencies shows that the vibrational spectrum of the cavities is very well described by the form $\omega = \omega_0 \alpha^{\phi}$ in two limiting cases: for quasi-2D phonon cavities ($\gamma \leq 0.02$) the fit yields $\phi \lesssim 1$, whereas for three-dimensional (thick) phonon cavities ($\gamma > 0.2$) one obtains $0.4 < \phi < 0.5$. Between the two cases, [*i.e.*]{} for moderately thick nanostructures, the frequencies cannot be well described by a single power law.
![Natural vibration frequencies, as a function of the mode index $\alpha$, calculated by the 3D method for a free standing square ($\lambda = 1$) a-SiC nanostructure of sides $L = 2 \mu m$ and $\gamma=L_z/L$ equal to 0.02 (solid), 0.05 (dashed) and 0.1 (dot-dashed).[]{data-label="spectra"}](spectrum.eps){width="8cm"}
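The power-law exponents quoted above can be extracted by an ordinary least-squares fit in log-log space. The sketch below (assuming NumPy) shows the procedure; the synthetic spectrum with $\phi = 0.45$ is our own stand-in, used only to exercise the fit, not a computed a-SiC spectrum.

```python
import numpy as np

def fit_power_law(alpha, omega):
    """Fit omega = omega0 * alpha**phi by linear least squares in log-log space.
    Returns (omega0, phi)."""
    phi, ln_w0 = np.polyfit(np.log(alpha), np.log(omega), 1)
    return np.exp(ln_w0), phi

alpha = np.arange(1, 2501)            # mode indices
omega = 3.0e8 * alpha**0.45           # synthetic 3D-like spectrum (phi = 0.45)
omega0, phi = fit_power_law(alpha, omega)
```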
In the following we examine the heat capacity of the phonon cavities, as predicted by the 3D analysis. For the sake of comparison, we also utilize a basic model to describe the confined phonons, which incorporates simplifying assumptions commonly found in the literature [@Roukes99; @Anghel99]. Its main assumption is that the phonons can be described by plane waves with three independent polarizations: one longitudinal and two transverse. In addition, as the dimensions of the structure become sufficiently small, [*i.e.*]{} comparable to the mean free path of the phonons, the phonons become standing waves satisfying the appropriate boundary conditions. The method is here designated the Bounded Plane Wave Model (BPWM). For the nanostructures under consideration, and because of the very low temperature, it is assumed that the phonons form standing waves in all three directions. In comparison with the 3D analysis, different predictions for the specific heat are expected from the BPWM, owing to its naive representation of the phonon modes in suspended nanostructures. However, despite its simplicity, it will be shown that the BPWM can describe the heat capacity of thick phonon cavities quite well.
According to the BPWM, the phonon spectrum of a freely suspended nanostructure is easily obtained from the wave vectors $$\begin{aligned}
\kappa_{lmn}^2 = \pi^2 \left[ \left(
\frac{l}{L_x} \right)^2 +\left( \frac{m}{L_y} \right)^2 +\left( \frac{n}{L_z} \right)^2 \right]\ ,\end{aligned}$$ with $l$, $m$ and $n$ integers. The frequencies for the longitudinal and transverse modes are given by $\omega^l_{lmn} = v^l \kappa_{lmn}$ and $\omega^t_{lmn} = v^t \kappa_{lmn}$, with the sound velocities of the a-SiC obtained from the elastic constants of the material: $v^l = 12,170$ m/s and $v^t = 7,450$ m/s.
Heat Capacity of Suspended Phonon Cavities
==========================================
Having calculated the displacement modes $\mathbf{U}_\alpha(\vec{r})$ and the associated eigenfrequencies $\omega_\alpha$, corresponding to the free vibrations of the plate, the quantum mechanical phonon modes of the cavity are obtained by the standard quantization procedure [@chaos2]. As a result, we ascribe the energy $\mathcal{U} = \sum_{\alpha} (n_\alpha +1/2) \hbar
\omega_\alpha$ to the phonon system and calculate the quantum mechanical heat capacity of the phonon cavity as $$\begin{aligned}
C_V(T)
= \frac{\partial \cal{U}}{\partial T} = \frac{\partial}{\partial T} \sum_\alpha
\frac{\hbar\omega_\alpha}{\exp{(\hbar \omega_\alpha/k_BT)}-1} \ , \label{CT}\end{aligned}$$ with $n_\alpha$ given by the Planck distribution. Since the harmonic approximation is consistent with the small strain limit assumed in the present derivations, the constant-volume specific heat ($c_v$) must equal the constant-pressure specific heat ($c_p$) [@Ashcroft]. Moreover, for bulk metallic samples it is generally found that the low temperature specific heat varies as $c_v = AT + BT^3$, comprising the electron and phonon contributions, respectively.
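Carrying out the derivative in Eq. (\[CT\]) analytically gives $C_V = k_B \sum_\alpha x_\alpha^2\, e^{x_\alpha}/(e^{x_\alpha}-1)^2$ with $x_\alpha = \hbar\omega_\alpha/k_B T$, which is convenient numerically. The sketch below (NumPy) uses the algebraically identical $\sinh$ form of that summand to avoid floating-point overflow for frozen-out modes.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K

def heat_capacity(omegas, T):
    """C_V of a discrete phonon spectrum, Eq. (CT), with the T-derivative taken
    analytically: C_V = k_B * sum_a (x_a / (2 sinh(x_a / 2)))**2."""
    x = HBAR * np.asarray(omegas, dtype=float) / (KB * T)
    x = np.minimum(x, 700.0)   # frozen-out modes contribute essentially zero anyway
    return KB * np.sum((x / (2.0 * np.sinh(0.5 * x)))**2)
```

In the classical limit $x_\alpha \ll 1$ each mode contributes $k_B$, while modes with $\hbar\omega_\alpha \gg k_B T$ are exponentially suppressed.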
![a) Specific heat ($c_v = C_V/V$) as a function of temperature obtained from the 3D analysis for a free standing a-SiC square cavity of lateral dimensions $L= 2 \ \mu m$ and $\gamma$ = 0.02, 0.05, 0.1, 0.2 and 0.5, in that order from top to bottom. b) The heat capacity ($C_V$) for the quasi-2D $\gamma$ = 0.02 (solid) and fully 3D $\gamma$ = 0.5 (dashed). The gray curves are predictions from the BPWM (refer to the text).[]{data-label="C_2mu"}](SH_2.eps){width="6.5cm"}
Next we show predictions for the heat capacity obtained through the 3D analysis, as well as results obtained with the simplified method. The phonon cavities to be considered are free standing a-SiC square plates ($\lambda = 1$) with lateral dimensions $L = 2\ \mu m$ and different thickness-to-side ratios, namely $\gamma$ = 0.02, 0.05, 0.1, 0.2 and 0.5. $C_V$ is calculated for temperatures $T \leqslant T_{\rm max}$, where $T_{\rm max}$ is the maximum temperature that allows reliable results to be obtained with the available phonon modes. That is, if $T > T_{\rm max}$, additional modes must be included in the calculation of $C_V$, since the occupation of the high energy modes increases. Figure \[C\_2mu\](a) presents the specific heat $c_v = C_V/V$ as a function of temperature for phonon cavities of different thicknesses, as obtained through the 3D analysis. For temperatures $T \lesssim 100$ mK the calculations reveal more than an order of magnitude difference between the quasi-2D ($\gamma \leqslant 0.05$) and the three-dimensional ($\gamma \gtrsim 0.2$) phonon cavities. Figure \[C\_2mu\](b) shows the heat capacity $C_V(T)$ for two limiting cases, represented by the quasi-2D ($\gamma = 0.02$) and fully three-dimensional ($\gamma = 0.5$) suspended nanostructures. Results obtained from the BPWM are also shown by the gray curves. In particular, the BPWM reproduces the 3D results very well for the three-dimensional cavities, but seriously underestimates the heat capacity of the quasi-2D structures.
![$p_C(T)$ for free standing a-SiC square plates with $L= 2\ \mu$m. In the upper panel the predictions of the 3D analysis for different $\gamma$, as indicated by the labels. In the lower panel $p_C$ for the quasi-2D ($\gamma$ = 0.02) and fully 3D ($\gamma$ = 0.5) cases. The gray curves are the results gained from the BPWM.[]{data-label="pC_fig"}](fig3.eps){width="8cm"}
Throughout the analysis we have considered thin as well as thick suspended nanostructures. That raises the question: how is the system’s dimensionality reflected in the behavior of $C_V(T)$? In Ref. \[\] the dependence of the specific heat on the dimensionality of the system was investigated with a model similar to the BPWM. It was shown that the relation $C_V \propto T^d$ should hold for a confined phonon gas in the low temperature limit, with $d$ the system’s dimensionality, supporting the inaccurate notion that $C_V \propto T^2$ for quasi-2D phonon cavities at sub-Kelvin temperatures. Here we perform such an analysis and demonstrate instead that the heat capacity of realistic quasi-2D phonon cavities approaches the linear dependence $C_V \propto T$ in the low temperature limit. For that purpose consider the quantity $$\begin{aligned}
p_C(T) = T \frac{\partial (\ln C_V(T))}{\partial T} \ ,\end{aligned}$$ which provides the temperature dependence of $C_V(T)$. For instance, if the heat capacity is given by $C_V \propto T^{\alpha}$, we have simply $p_C = \alpha$.
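Numerically, $p_C$ is just the slope of $\ln C_V$ versus $\ln T$, which a central finite difference in $\ln T$ captures well. A short sketch (NumPy; the power-law check at the end is our own sanity test, not data from the paper):

```python
import numpy as np

def p_C(C, T, h=1e-4):
    """p_C(T) = T dln(C_V)/dT = dln(C_V)/dln(T), by a central difference in ln T.
    `C` is any callable returning the heat capacity at temperature T."""
    return (np.log(C(T * np.exp(h))) - np.log(C(T * np.exp(-h)))) / (2.0 * h)

# Sanity check: for C_V = a * T**alpha the estimator returns alpha.
for alpha in (0.5, 1.0, 2.0, 3.0):
    assert abs(p_C(lambda T, a=alpha: 7.0 * T**a, 0.1) - alpha) < 1e-8
```

Differencing in $\ln T$ rather than $T$ makes the estimator exact for pure power laws, which is precisely the behavior being probed.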
Figure \[pC\_fig\] presents the calculated values of $p_C(T)$ for free standing square plates with lateral dimensions $L = 2\ \mu m$ and the thickness-to-side ratios previously considered, ranging from the quasi-2D to the fully 3D cases. The upper panel shows that $C_V$ approaches the linear $C_V \propto T$ behavior at temperatures $T \lesssim 0.3\ K$, particularly in the case of the thinnest cavities with $\gamma$ = 0.02 and 0.05. Such a result is expected to hold for strict 2D systems like graphene, although the effect has been predicted also for the specific heat of a 2D array of nanomechanical resonators [@Photiadis]. In fact, the sub-$T^2$ behavior is observed here even in the case of the moderately thick cavities with $\gamma$ = 0.1 and 0.2, at lower temperatures. As the temperature increases $p_C$ tends to 3, indicating that the thickness of the cavity becomes much larger than the dominant phonon wavelength; on the other hand, for vanishingly small temperatures, lower than the fundamental vibrational energies, the phonon cavity behaves as a 0D system. In this case the specific heat decreases exponentially and $p_C$ diverges as $T^{-1}$. The $C_V \propto T$ behavior is commonly associated with quasi-1D systems like single-wall nanotubes (SWNT) at low temperatures [@SWNT]. However, as the temperature decreases below $T \lesssim 0.5$ K a sublinear behavior is observed in such systems [@Lasjaunias]. The effect is ascribed to the overwhelming contribution of the flexural modes to $C_V$, since those modes have $\omega(q) \propto q^2$ and consequently $C_V \propto T^{1/2}$ \[\].
The lower panel of Figure \[pC\_fig\] compares the results obtained from the 3D analysis with those of the simple BPWM. Consistent with the earlier calculations, both methods produce similar results for fully three-dimensional structures, but the BPWM yields the wrong value $p_C \thickapprox 2$ for quasi-2D structures. In the latter case, it has been verified that the simple classical plate theory (CPT) for flexural modes, which reduces the dimensions of the problem from three to two by incorporating some of the plate’s characteristics such as bending moments [@Leissa; @Graff], yields a close estimate for the temperature dependence of $C_V(T)$. The CPT fails, however, as the temperature rises above $T\gtrsim \hbar \pi v_s/(L_z k_B)$, $v_s$ being the sound velocity, because longitudinal and torsional modes begin to contribute significantly to $C_V$. Because of the significant difference between the predictions of the 3D analysis and the BPWM for thin nanostructures, $p_C$ may be a convenient observable to experimentally determine the emergence of coherent quantum mechanical dynamics in mesoscopic phonon cavities.
An additional property of the suspended phonon cavities is the scale invariant character of their heat capacity, described as $C_V = \mathcal{F}(TL)$, where $\mathcal{F}$ represents the functional in Eq. (\[CT\]). Namely, $C_V$ is invariant regarding the product of the temperature ($T$) with a characteristic lateral dimension ($L$) of the structure. For the sake of clarity we consider a square cavity, but the same result can be derived for rectangular cavities with $L_x = \lambda L_y$, or triangular ones. First notice that the phonon frequency can be written as $$\begin{aligned}
\omega_\alpha = \frac{\pi^2 \gamma}{L} \ \Delta_\alpha \sqrt{\frac{E}{12\rho(1-\nu^2 )}} \
.\label{omega}\end{aligned}$$ Thus $\omega_\alpha \propto 1/L$, for plates of a given thickness-to-side ratio $\gamma$. The dimensionless parameter $\Delta_\alpha$ is also a function of $\gamma$ and $\lambda$, therefore independent of the absolute dimensions of the plate. Then, from the definition of the heat capacity, Eq. (\[CT\]), with the derivative and summation operations commuted, it is easily verified that a transformation that leaves $\omega_\alpha/T \propto 1/(LT)$ invariant does not change the heat capacity. Consequently, the results that have been presented for square phonon cavities of lateral dimension $L = 2 \mu m$ can be generalized for congruent cavities of arbitrary size $L'$, with the temperature re-scaled to $T' = (L/L')T$.
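This invariance is easy to verify numerically. The sketch below (NumPy) takes an arbitrary set of normalized frequencies $\Delta_\alpha$ as a stand-in for a real computed spectrum (the values and the prefactor are ours, chosen only for illustration), imposes the $\omega_\alpha \propto 1/L$ scaling of Eq. (\[omega\]), and checks that $C_V$ is unchanged when $L \to L'$ and $T \to (L/L')T$.

```python
import numpy as np

HBAR, KB = 1.054571817e-34, 1.380649e-23

def C_V(omegas, T):
    """Heat capacity of a discrete spectrum (analytic derivative of Eq. (CT))."""
    x = np.minimum(HBAR * np.asarray(omegas) / (KB * T), 700.0)
    return KB * np.sum((x / (2.0 * np.sinh(0.5 * x)))**2)

# omega_a = c * Delta_a / L for congruent plates; c and the Delta_a values are
# arbitrary stand-ins for a real normalized spectrum.
rng = np.random.default_rng(1)
Delta_a = np.sort(rng.uniform(1.0, 100.0, size=500))
spectrum = lambda L: 1.0e4 * Delta_a / L

# C_V depends only on the product T * L:
L, Lscaled = 2e-6, 5e-6
assert np.isclose(C_V(spectrum(L), 0.1), C_V(spectrum(Lscaled), 0.1 * L / Lscaled))
```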
Different types of suspended cavities were also investigated, such as bridges (CCFF) and cantilever-like (CFFF) structures, yielding results in qualitative agreement with those illustrated above, for both the vibrational spectrum and the specific heat. The parameter $p_C$ is observed to tend towards the value 1 for elongated structures. For instance, in the case of $\gamma = 0.1$ and $\lambda = L_x/L_y = 4$ we obtained $p_C \thickapprox 1.5$ at $T = T_{\rm max}$.
The calculations of the heat capacity of phonon cavities presented hitherto have not taken into account the additional degrees of freedom associated with impurities, disorder, surface defects, etc., which would increase $C_V$. In particular, the specific heat of bulk noncrystalline solids exhibits an anomalous linear variation with temperature [@Zeller] for $T <$ 1 K. The present results therefore set a lower bound for the specific heat of such dielectric nanostructures.
Conclusions
===========
We presented a detailed investigation of the vibrational spectrum and the heat capacity of suspended dielectric mesoscopic structures of various thicknesses at sub-Kelvin temperatures. More than 4000 frequency modes of the cavity were accurately obtained from the 3D elastic equations in the small strain regime. It is thereby demonstrated that the low temperature heat capacity of realistic quasi-2D phonon cavities has an approximately linear dependence on $T$, a result that contradicts estimates obtained from simple models. The sub-$T^2$ variation of the heat capacity is observed even for moderately thick mesoscopic structures. The results show the importance of a fully 3D analysis, based on the elastic equations of suspended plates, bridges and cantilevers, for the correct determination of their thermal properties. Finally, the sub-$T^2$ effect evidences the quantum mechanical nature of the phonon cavity dynamics and sets a lower bound for the specific heat. The reported results should be of special interest for suspended nanostructures intended to be part of solid state quantum devices.
Acknowledgments
===============
The authors acknowledge financial support from CNPq/Brasil and funding provided by [*Projeto Universal*]{} - CNPq. We thank W. Figueiredo and M.E.G. da Luz for comments and suggestions.
[10]{}
B. Bhushan (editor), [*Springer Handbook of Nanotechnology*]{} (Springer, Berlin, 2004).
A. N. Cleland, [*Foundations of Nanomechanics*]{} (Springer-Verlag, 2002).
K.L. Ekinci and M.L. Roukes, Rev. Sci. Instrum. [**76**]{}, 061101 (2005).
C. S. Yung, D. R. Schmidt, and A. N. Cleland, Appl. Phys. Lett. [**81**]{}, 31 (2002).
W. Chung Fon, Keith. C. Schwab, John M. Worlock, and Michael L. Roukes, Nano Lett. [**5**]{}, 1968 (2005).
O. Bourgeois, S.E. Skipetrov, F. Ong, and J. Chaussy, Phys. Rev. Lett. [**94**]{}, 057007 (2005).
M. L. Roukes, Physica B [**263-264**]{}, 1 (1999).
R.G. Knobel and A.N. Cleland, Nature [**424**]{}, 291 (2003).
M.D. LaHaye, O. Buu, B. Camarota, K.C. Schwab, Science [**304**]{}, 74 (2004).
M. Blencowe, Phys. Rep. [**395**]{}, 159 (2004).
A. D. Armour, M. P. Blencowe, and K. C. Schwab, Phys. Rev. Lett. [**88**]{}, 148301 (2002).
A.N. Cleland and M.R. Geller, Phys. Rev. Lett. [**93**]{}, 070501 (2004).
T. Hayashi, T. Fujisawa, H.D. Cheong, Y.H. Jeong, and Y. Hirayama, Phys. Rev. Lett. [**91**]{}, 226804 (2003); T. Fujisawa, T. Hayashi and Y. Hirayama, J. Vac. Sci. Tech. B [**22**]{}, 2035 (2004).
J. Gorman, E.G. Emiroglu, D. G. Hasko, and D. A. Williams, Phys. Rev. Lett. [**95**]{}, 090502 (2005).
E.M. Weig [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 046804 (2004).
S. Debald, T. Brandes, and B. Kramer, Phys. Rev. B [**66**]{}, 041301(R) (2002).
L. G. C. Rego, A. Gusso, and M. G. E. da Luz, J. Phys. A: Math. Gen. [**38**]{}, L639 (2005).
A. Gusso, M. G. E. da Luz, and L. G. C. Rego, Phys. Rev. B [**73**]{}, 035436 (2006).
B.A. Glavin, V.I. Pipa, V.V. Mitin, and M.A. Stroscio, Phys. Rev.B [**65**]{}, 205315 (2002).
W. Fon, K.C. Schwab, J.M. Worlock, and M.L.Roukes, Phys. Rev. B [**66**]{}, 045302, (2002); S. Barman and G.P. Srivastava, Phys. Rev. B [**73**]{}, 205308 (2006).
S.X. Qu, A.N. Cleland, and M.R. Geller, Phys. Rev. B [**72**]{}, 224301 (2005).
A comprehensive literature review of the problem can be found in: L.M. Liew, Y. Xiang, and S. Kitipornchai, J. Sound and Vib. [**180(1)**]{}, 163 (1995).
T. S. Tighe, J. M. Worlock, and M. L. Roukes, Appl. Phys. Lett. [**70**]{}, 2687 (1997).
K. Schwab et al., Nature [**404**]{}, 974 (2000).
Karl F. Graff, [*Wave Motion in Elastic Solids*]{}, (Dover, NY, 1975).
L. G. C. Rego and G. Kirczenow, Phys. Rev. Lett. [**81**]{}, 232 (1998).
D.H. Santamore and M.C. Cross, Phys. Rev. B [**66**]{}, 144302 (2002).
T. Kühn, D.V. Anghel, J.P. Pekola, M. Manninen, and Y.M. Galperin, Phys. Rev. B [**70**]{}, 125425 (2004).
D.M. Photiadis, J.A. Bucaro, and X. Liu, Phys. Rev. B [**73**]{}, 165314 (2006).
M.K. Zalalutdinov et al., Appl. Phys. Lett. [**88**]{}, 143504 (2006).
G. F. Elsbernd and A. W. Leissa, [*Developments in Theoretical and Applied Mechanics*]{} [**4**]{}, 19 (1970).
D. Zhou, Y.K. Cheung, F.T.K. Au, and S.H. Lo, Int. J. of Solids Structures [**39**]{}, 6339 (2002).
K.M. Liew, K.C. Hung, and M.K. Lim, Int. J. Solids Structures [**30**]{}, 3357 (1993); J. Appl. Mech. [**62**]{}, 159 (1995).
M. C. Ridgway, C. J. Glover, G. J. Foran, and K. M. Yu, J. Appl. Phys. [**83**]{}, 4610 (1998);\
Ingvar Ebbsjö [*et al.*]{}, J. Appl. Phys. [**87**]{}, 7708 (2000).
W. N. Sharpe Jr., B. Yuan, R. Vaidyanathan, and R. L. Edwards, in Proceedings of the 10th IEEE International Workshop on Microelectromechanical Systems, 424 (1997).
A. Khan, J. Philip, and P. Hess, J. Appl. Phys. [**95**]{}, 1667 (2004).
M. A. El Khakani, M. Chaker, M. E. O’Hern, and W. C. Oliver, J. Appl. Phys. [**82**]{}, 4310 (1997);\
R. F. Wiser, M. Tabib-Azar, M. Mehregany, and C. A. Zorman, J. Microelectromech. Syst. [**14**]{}, 579 (2005).
D. V. Anghel and M. Manninen, Phys. Rev. B [**59**]{}, 9854 (1999).
N.W. Ashcroft and N.D. Mermin, [*Solid State Physics*]{} (Saunders College, Philadelphia, 1976).
W. Yi, L. Lu, Z. Dian-lin, Z.W. Pan, and S.S. Xie, Phys. Rev. B [**59**]{}, R9015 (1999); J. Hone, B. Batlogg, Z. Benes, A.T. Johnson, and J.E. Fischer, Science [**289**]{}, 1730 (2000).
J.C. Lasjaunias, K. Biljakovic, Z. Benes, J.E. Fischer, and P. Monceau, Phys. Rev. B [**65**]{}, 113409 (2002).
B.A. Glavin, Phys. Rev. Lett. [**86**]{}, 4318 (2001).
V.N. Popov, Phys. Rev. B [**66**]{}, 153408 (2002).
R.C. Zeller and R.O. Pohl, Phys. Rev. B [**4**]{}, 2029 (1971).
[^1]: Present address: Depto. de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, CEP 45662-000, Ilhéus-BA
---
abstract: |
We have experimentally studied the electronic $3p\leftarrow 3s$ excitation of Na atoms attached to $^3$He droplets by means of laser-induced fluorescence as well as beam depletion spectroscopy. From the similarities of the spectra (width/shift of absorption lines) with these of Na on $^4$He droplets, we conclude that sodium atoms reside in a “dimple” on the droplet surface and that superfluid-related effects are negligible. The experimental results are supported by Density Functional calculations at zero temperature, which confirm the surface location of Na, K and Rb atoms on $^3$He droplets. In the case of Na, the calculated shift of the excitation spectra for the two isotopes is in good agreement with the experimental data.
PACS 68.10.-m, 68.45.-v, 68.45.Gd
address: |
$^*$Departament ECM, Facultat de Física.\
Universitat de Barcelona, E-08028 Barcelona, Spain.\
$^\dag$INFM (Udr Padova and DEMOCRITOS National Simulation Center, Trieste);\
Dipartimento di Fisica “G. Galilei”, Università di Padova I-35131 Padova, Italy\
$^\ddag$Fakultät für Physik, Universität Bielefeld, D-33615 Bielefeld, Germany\
author:
- |
R. Mayol$^*$, F. Ancilotto$^\dag$, M. Barranco$^*$, O. Bünermann$^\ddag$,\
M. Pi$^*$, and F. Stienkemeier$^\ddag$
title: Alkali Atoms Attached to $^3$He Nanodroplets
---
INTRODUCTION
============
Detection of laser-induced fluorescence (LIF) and beam depletion (BD) signals upon laser excitation provides a sensitive spectroscopic technique to investigate electronic transitions of chromophores attached to $^4$He droplets.[@stienke2] While most atomic and molecular dopants submerge in helium, alkali atoms (and alkaline earth atoms to some extent[@Sti:1997b]) have been found to reside on the surface of $^4$He droplets, as evidenced by the much narrower and less shifted spectra when compared to those found in bulk liquid $^4$He.[@scoles; @ernst1; @stienke3; @Ernst:2001a] This result has been confirmed by Density Functional (DF)[@anci1] and Path Integral Monte Carlo (PIMC)[@nakayama] calculations, which predict surface binding energies of a few Kelvin, in agreement with measurements of detachment energy thresholds using the free atomic emissions.[@KKL] The surface of liquid $^4$He is only slightly perturbed by the presence of the impurity, which produces a “dimple” in the underlying liquid. The study of these states can thus provide useful information on surface properties of He nanodroplets complementary to that supplied by molecular-beam scattering experiments.[@Dal98; @har01] Hence, alkalis on the surface of helium droplets are ideal probes to investigate the liquid–vacuum interface as well as droplet surface excitations. Microscopic calculations of $^3$He droplets are scarce.[@panda; @gua00] The properties of $^3$He droplets doped with inert atoms and molecular impurities have been addressed within Finite Range Density Functional (FRDF) theory,[@gar98] which has proven to be a valuable alternative to Monte Carlo methods, which are notoriously difficult to apply to Fermi systems. Indeed, a quite accurate description of the properties of inhomogeneous liquid $^4$He at zero temperature ($T$) has been obtained within DF theory,[@prica] and a similar approach has been followed for $^3$He (see Ref. and Refs. therein).
RESULTS
=======
The experiments we report have been performed in a helium droplet machine used earlier for LIF and BD studies, which is described elsewhere.[@Sti:1997b] Briefly, helium gas is expanded under supersonic conditions from a cold nozzle, forming a beam of droplets traveling freely under high vacuum conditions. The droplets are doped downstream employing the pick-up technique: in a heated scattering cell, bulk sodium is evaporated in such a way that, on average, a single metal atom is carried by each droplet. Since electronic excitation of alkali-doped helium droplets is eventually followed by desorption of the chromophore, BD spectra can be recorded by a Langmuir-Taylor surface ionization detector.[@Sti:2000b] Phase-sensitive detection with respect to the chopped laser or droplet beam was used. For that reason the BD signal (cf. Fig. \[exp\_spectra\]), i.e. a decrease in intensity, is directly recorded as a positive yield. For these experiments, a new droplet source was built to provide the lower nozzle temperatures necessary to condense $^3$He droplets.
![Beam depletion spectra of Na atoms attached to $^3$He/$^4$He nanodroplets. The vertical lines indicate the positions of the two fine structure components of the Na gas-phase $3p\leftarrow 3s$ transition.[]{data-label="exp_spectra"}](fig1.eps){width="10cm"}
For the spectroscopic measurements presented in the following, we have set the source pressure to 20 bar and the nozzle temperature to 11 K for $^3$He, and to 15 K for $^4$He. These conditions are expected to result in comparable mean cluster sizes of around 5000 atoms per droplet.[@Toe:unpublished; @har01] In Fig. \[exp\_spectra\] the absorption spectrum of Na atoms attached to $^3$He nanodroplets is shown in comparison to that of Na-doped $^4$He droplets.
The spectrum of Na attached to $^3$He nanodroplets is very similar to the spectrum on $^4$He droplets. The asymmetrically broadened line is almost unshifted with respect to the gas-phase absorption. This absence of a shift immediately confirms the surface location, because atoms embedded in bulk superfluid helium are known to exhibit large blue shifts of the order of a few hundred wavenumbers and much broader absorption lines.[@Takahashi:1993] A blue shift is a consequence of the repulsion of the helium environment against the spatially enlarged electronic distribution of the excited state (“bubble effect”). The interaction with the $^3$He droplets appears to be slightly enhanced, as evidenced by the small extra blue shift of the spectrum compared to the $^4$He spectrum. In a simple picture this means that more helium atoms contribute or, in other words, that a more prominent “dimple” interacts with the chromophore. The upper halves of the spectra are almost identical when the $^3$He spectrum is shifted by $7.5\pm 1$ cm$^{-1}$ to lower frequencies.
![Equidensity lines in the $x-z$ plane showing the stable state of an alkali atom (cross) on a He$_{2000}$ droplet. The 9 inner lines correspond to densities $0.9 \rho_0$ to $0.1 \rho_0$, and the 3 outer lines to $10^{-2} \rho_0$, $10^{-3} \rho_0$, and $10^{-4} \rho_0$ ($\rho_0=0.0163$ Å$^{-3}$ for $^3$He, and 0.0218 Å$^{-3}$ for $^4$He). []{data-label="fig2"}](fig2.eps){width="8cm"}
FRDF calculations at $T=0$ confirm the picture emerging from the measurements, i.e. the surface location of Na on $^3$He nanodroplets, with a more pronounced “dimple” than on $^4$He droplets. We have investigated the stable configurations of an alkali atom on both $^3$He and $^4$He clusters of different sizes. The FRDF’s used for $^3$He and $^4$He are described in Refs. and . The large number of $^3$He atoms we are considering allows us to use the extended Thomas-Fermi approximation.[@TF] The presence of the foreign impurity is modeled by a suitable potential obtained by folding the helium density with an alkali-He pair potential. We have used the potentials proposed by Patil[@patil] to describe the impurity-He interactions. Fig. \[fig2\] shows the equilibrium configuration for alkali atoms adsorbed onto He$_{2000}$ clusters. Comparison with the stable state on the $^4$He$_{2000}$ cluster shows that, in agreement with the experimental findings presented before, the “dimple” structure is more pronounced in the case of $^3$He, and that the alkali impurity lies [*inside*]{} the surface region for $^3$He and [*outside*]{} the surface region for $^4$He (the surface region is usually defined as that between the radii at which $\rho=0.1 \rho_0$ and $\rho=0.9 \rho_0$, where $\rho_0$ is the He saturation density[@Dal98; @har01; @TF]). This is due to the lower surface tension of $^3$He as compared to that of $^4$He, which also makes the surface thickness of bulk liquid and droplets larger for $^3$He than for $^4$He.[@Dal98; @har01]
The deformation of the surface upon alkali adsorption is characterized by the “dimple” depth, $\xi$, defined as the difference between the position of the dividing surface at $\rho \sim \rho_0/2$, with and without impurity, respectively.[@anci1] For $^3$He we have found $\xi \sim$4.4, $\sim$4.1, and $\sim$4.3 Å, for Na, K and Rb, respectively. The corresponding values for $^4$He are $\xi \sim$2.3, $\sim$2.3, and $\sim$2.0 Å, respectively.
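The dimple depth so defined is easy to extract numerically from density profiles along the symmetry axis: locate the position where $\rho$ crosses $\rho_0/2$, with and without the impurity, and take the difference. A minimal sketch in Python (the Fermi-function profiles below are purely illustrative stand-ins, not actual FRDF output; all parameter values are assumptions):

```python
import numpy as np

def dividing_surface(z, rho, rho0):
    """Position where the density profile crosses rho0/2 (linear interpolation).
    Assumes rho decreases monotonically from ~rho0 to 0 along z."""
    half = 0.5 * rho0
    idx = np.where(rho < half)[0][0]          # first grid point below rho0/2
    z1, z2 = z[idx - 1], z[idx]
    r1, r2 = rho[idx - 1], rho[idx]
    return z1 + (half - r1) * (z2 - z1) / (r2 - r1)

def dimple_depth(z, rho_free, rho_dimple, rho0):
    """xi = shift of the rho0/2 dividing surface caused by the adsorbed impurity."""
    return dividing_surface(z, rho_free, rho0) - dividing_surface(z, rho_dimple, rho0)

# Illustrative profiles: free surface centered at z = 0, dimpled surface pushed
# 4.4 A inward (the 3He/Na value quoted in the text).
rho0 = 0.0163                                  # 3He saturation density, A^-3
z = np.linspace(-20.0, 20.0, 2001)
w = 1.0                                        # surface width parameter (arbitrary)
rho_free = rho0 / (1.0 + np.exp(z / w))
rho_dimple = rho0 / (1.0 + np.exp((z + 4.4) / w))

print(round(dimple_depth(z, rho_free, rho_dimple, rho0), 2))  # ~4.4
```

The same crossing-point construction applies to any monotonic surface profile, whatever functional form the density functional actually produces.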
![ Solvation energies as a function of the number of atoms in the droplet. []{data-label="fig3"}](fig3.eps){width="8cm"}
Solvation energies defined as $S_{Ak}=E({\rm Ak}@{\rm He}_N)-E({\rm
He}_N)$ have also been calculated and are shown in Fig. \[fig3\]. The value $S_{Na} \sim -12$ K has been obtained within FRDF theory for Na adsorbed on the [*planar*]{} surface of $^4$He (Ref. ), which corresponds to the $N=\infty$ limit. A detailed discussion on solvation energies and AkHe-exciplex formation on helium nanodroplets will be presented elsewhere.
Finally, we have obtained the shift between the $^3$He and $^4$He spectra in Fig. \[exp\_spectra\] within the Franck-Condon approximation, i.e. assuming that the “dimple” shape does not change during the Na excitation. The shift is calculated within the model given in Ref. , evaluating Eq. 6 therein, both for $^3$He and $^4$He. We have used the excited state A $^2\Pi$ and B $^2\Sigma$ potentials of Ref. because their Na-He ground state potential is very similar to the Patil potential we have used to obtain the equilibrium configurations. For $N=2000$, we find that the $^3$He spectrum is blue-shifted with respect to the $^4$He one by 6.4 cm$^{-1}$, in good agreement with the experimental value of $7.5\pm 1$ cm$^{-1}$ extracted from Fig. \[exp\_spectra\].
Our results thus show that alkali adsorption on $^3$He droplets occurs in very much the same way as in the case of $^4$He, i.e., the adatom is located on the surface, though in a slightly more pronounced “dimple”. The similarities in the experimental spectra are certainly remarkable for two apparently very different fluids, one normal and the other superfluid, and clearly indicate that superfluidity does not play any substantial role in the processes described here (we recall that while $^4$He droplets, which are detected at an experimental $T$ of $\sim$ 0.38 K, are superfluid, those containing only $^3$He atoms, even though detected at a lower $T$ of $\sim$ 0.15 K, do not exhibit superfluidity[@grebenev]). This is likely a consequence of the very fast time scale characterizing the Na electronic excitation compared to that required by the He fluid to readjust. The excitation occurs in a “frozen” environment and the only significant difference between $^3$He and $^4$He is due to the different structure of the “dimple”, which accounts for the small shift in their spectra observed in the experiments and found in our calculations as well.
ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============
We thank Flavio Toigo for useful comments. This work has been supported by grants MIUR-COFIN 2001 (Italy), BFM2002-01868 from DGI (Spain), and 2001SGR-00064 from Generalitat of Catalunya as well as the DFG (Germany).
[99]{}
F. Stienkemeier and A.F. Vilesov, J. Chem. Phys. [**115**]{}, 10119 (2001).
F. Stienkemeier, F. Meier, and H. O. Lutz, J. Chem. Phys. [**107**]{}, 10816 (1997); Eur. Phys. J. D [**9**]{}, 313 (1999).
F. Stienkemeier et al., Z. Phys. D [**38**]{}, 253 (1996).
F. Stienkemeier et al., J. Chem. Phys. [**102**]{}, 615 (1995);
C. Callegari et al., J. Phys. Chem. [**102**]{}, 95 (1998).
F. Brühl, R. Trasca, and W. Ernst, J. Chem. Phys. [**115**]{}, 10220 (2001).
F. Ancilotto et al., Z. Phys. B [**98**]{}, 323 (1995).
A. Nakayama and K. Yamashita, J. Chem. Phys. [**114**]{}, 780 (2001).
J. Reho et al., Faraday Discuss. [**108**]{}, 161 (1997).
F. Dalfovo, J. Harms, and J.P. Toennies, Phys. Rev. B [**58**]{}, 3341 (1998).
J. Harms et al., Phys. Rev. B [**63**]{}, 184513 (2001).
V.R. Pandharipande, S.C. Pieper, and R.B. Wiringa, Phys. Rev. B [**34**]{}, 4571 (1986).
R. Guardiola, Phys. Rev. B [**62**]{}, 3416 (2000).
F. Garcias et al., J. Chem. Phys. [**108**]{}, 9102 (1998); [*ibid.*]{} [**115**]{}, 10154 (2001).
F. Dalfovo et al., Phys. Rev. B [**52**]{}, 1193 (1995).
F. Stienkemeier et al., Rev. Sci. Instr. [**71**]{}, 3480 (2000).
J. Harms and J. P. Toennies, unpublished results.
Y. Takahashi et al., Phys. Rev. Lett. [**71**]{}, 1035 (1993).
M. Barranco et al., Phys. Rev. B [**56**]{}, 8997 (1997).
R. Mayol et al., Phys. Rev. Lett. [**87**]{}, 145301 (2001).
S. Stringari and J. Treiner, J. Chem. Phys. [**87**]{}, 5021 (1987).
S.H. Patil, J. Chem. Phys. [**94**]{}, 8089 (1991).
S.I. Kanorsky et al., Phys. Rev. B [**50**]{}, 6296 (1994).
S. Grebenev, J.P. Toennies, and A.F. Vilesov, Science [**279**]{}, 2083 (1998).
It is well known that an environment may decohere a quantum system immersed in it, causing the system to lose its coherence and to evolve from a pure state into a mixed state \[1\]. Based on the quantum dynamic approach to such environmental influences \[2\], which was developed from the Hepp-Coleman (HC) theory of quantum measurement and its clarified physical representation \[3\], the decoherence phenomenon is investigated for neutrino flavor oscillation \[4\] in this paper.
Neutrino oscillation is a subject of much intense theoretical and experimental research. In comparison with the prediction from the standard solar model (SSM), the deficit of solar neutrinos observed on the earth can probably be explained by neutrino oscillations with a (mass)$^2$ difference $\Delta m^2\sim 10^{-6}{\rm eV}^2$ \[4\]. Two possible mechanisms were further presented to enhance the vacuum neutrino oscillations. As a matter enhancement effect of oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) mechanism \[5\] was invoked when the neutrinos pass through the sun adiabatically and non-adiabatically \[6\]. Initiated by Ellis, Hagelin, Nanopoulos and Srednicki (EHNS) in 1983 \[7\], there was another mechanism (now called the EHNS mechanism) to modify neutrino oscillations. Similar to the quantum decoherence in the generic problem of environmental influence \[1-3\], the EHNS mechanism makes a transition of the neutrino from a pure state to a mixed state due to some unknown couplings of neutrinos to the environment surrounding them. Such an environment may be the background field caused by Hawking's quantum gravity effect, like the evaporation of black holes \[8\]. With irreversible elements characterized by phenomenological parameters, the original EHNS mechanism suggested an effective von Neumann equation for the density matrix. Its solutions modify the survival probability of neutrino oscillation. Following this work some significant efforts were made by different authors to preserve linearity, locality, conservation of probability and unitarity \[9\]. Notice that the EHNS parameters were determined to some extent either by analyzing the experimental data of CP violation in the $K^0-\overline{K^0}$ system \[10\] or based on further theoretical considerations, such as string theory with an Einstein-Yang-Mills black hole in four dimensions \[11\].
These investigations reflect the successful aspects of the EHNS mechanism in solving the solar neutrino problem. However, as the phenomenological equation of motion for the density matrix concerns a quantum irreversible process, it apparently violates quantum mechanics, since the Schroedinger equation is reversible with time-reversal symmetry. To be a correct physical theory for the solar neutrino problem, the EHNS mechanism has to be consistent with quantum mechanics. To remove this conflict, in this paper we show that the rationality of the EHNS mechanism is related to the reduction of quantum mechanics for the “universe”, the total system formed by the neutrino system plus the environment. The physical essence of our observation is that the coupling of the neutrino to the environment results in quantum decoherence of the flavor eigenstates of the neutrino, which are coherent superpositions of neutrino mass eigenstates in the absence of external influence. Mathematically, this physical process is described by a reduced density matrix of the neutrino system, obtained by tracing over the variables of the environment. The calculation of the reduced density matrix explicitly leads to a modified formula for the survival probability of neutrino oscillation. It not only introduces extra parameters into neutrino oscillation, but also implies a novel dynamic effect: the oscillating phenomena of the neutrino may exist even without a mass difference in the free neutrino.
Methodologically, the starting point of the present study is the quantum dynamic approach \[2\] to the decoherence problem (or wave function collapse, WFC) in quantum measurement. We recall that, in the traditional theory of quantum measurement, the WFC postulate is only an extra postulate added to ordinary quantum mechanics. Under this postulate, once we measure an observable and obtain a definite value $a_k$, the state of the system must collapse into the corresponding eigenstate $|k\rangle $ from a coherent superposition $|\phi \rangle =\sum_kc_k|k\rangle $. Through the density matrix this process is described by a quantum decoherence process $\rho =|\phi \rangle \langle \phi |\rightarrow \hat{\rho}=\sum_k|c_k|^2|k\rangle \langle k|$ from a pure state to a mixed state. In the quantum dynamic approach, both the measured system and the measuring apparatus obey the Schroedinger equation, and the dynamic evolution governed by their interaction is supposed to result in quantum decoherence for the reduced dynamics of the measured system under certain conditions, such as the macroscopic limit in which the measuring apparatus contains an infinite number of subsystems. Notice that the environment coupling to the neutrino can usually be considered as a bath with an infinite number of degrees of freedom. For quantum decoherence, there is a strong resemblance between the neutrino system plus the environment and the measured system plus the measuring apparatus. So we can study the quantum decoherence effect in neutrino oscillation by making use of the quantum dynamic approach developed in treating quantum measurement. In the following our discussion will focus on the case of two generations, but it can be easily generalized to the case of three generations.
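As a numerical aside, the pure-to-mixed transition written above is simple to reproduce: decoherence leaves the diagonal populations of $\rho$ untouched and suppresses the off-diagonal coherences by a factor $|F|\le 1$. A minimal two-level sketch (a generic illustration, not tied to any particular environment model; the suppression factor is put in by hand):

```python
import numpy as np

def pure_density_matrix(c):
    """rho = |phi><phi| for the superposition |phi> = sum_k c_k |k>."""
    c = np.asarray(c, dtype=complex)
    c = c / np.linalg.norm(c)
    return np.outer(c, c.conj())

def decohere(rho, F):
    """Suppress off-diagonal elements by F (F=1: pure state, F=0: fully mixed)."""
    out = rho.copy()
    mask = ~np.eye(rho.shape[0], dtype=bool)
    out[mask] *= F
    return out

rho = pure_density_matrix([np.cos(0.5), np.sin(0.5)])
purity_before = np.trace(rho @ rho).real             # 1 for a pure state
rho_mixed = decohere(rho, 0.0)                       # complete decoherence
purity_after = np.trace(rho_mixed @ rho_mixed).real  # < 1: a mixed state
print(purity_before, purity_after)
```

The purity $Tr\rho ^2$ drops below 1 as the off-diagonal terms are damped, while the populations (the diagonal) are unchanged, exactly as in the WFC process above.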
The neutrino weak interaction (flavor) eigenstates $\nu _e$ and $\nu _\mu $ do not coincide with the mass eigenstates $\nu _1$ and $\nu _2$ with masses $m_1$ and $m_2$. Through a vacuum mixing angle $\theta $, the neutrino mixture is described as a two dimensional rotation $$\nu _e=\nu _1\cos \theta +\nu _2\sin \theta$$ $$\nu _\mu =-\nu _1\sin \theta +\nu _2\cos \theta \quad$$ Without interaction between the neutrinos and other objects, the vacuum neutrino oscillation results from the free Hamiltonian $$H_n=E_1\bar{\nu}_1\nu _1+E_2\bar{\nu}_2\nu _2 \label{2}$$ As a result, the survival probability that an electron neutrino remains itself is $$p_{\nu _e\rightarrow \nu _e}(t)\simeq 1-\sin ^2(2\theta )\sin ^2[\frac{\delta m^2t}{4E}] \label{3}$$ Here $\delta E=E_2-E_1$, $\delta m^2=m_2^2-m_1^2$, and we have used the fact that the neutrino masses are so small that their energies and momenta are very close. The above well-known result was obtained completely from the coherent superposition Eq.(1) of the pure states $\nu _1$ and $\nu _2$, and it will certainly be modified if mixed states of neutrinos are introduced by fluctuating interactions.
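For orientation, Eq. (3) can be evaluated directly; the sketch below uses natural units ($\hbar =c=1$, with $t\approx L$ for relativistic neutrinos) and purely illustrative parameter values:

```python
import numpy as np

def survival_probability(theta, dm2, t, E):
    """Vacuum survival probability P(nu_e -> nu_e) of Eq. (3):
    1 - sin^2(2 theta) sin^2(dm2 * t / (4 E)). Natural units throughout."""
    return 1.0 - np.sin(2.0 * theta) ** 2 * np.sin(dm2 * t / (4.0 * E)) ** 2

theta = np.pi / 6                  # illustrative mixing angle
dm2, E = 1.0, 1.0                  # illustrative mass-squared difference and energy

print(survival_probability(theta, dm2, 0.0, E))              # 1.0 at t = 0
print(survival_probability(theta, dm2, 2.0 * np.pi * E, E))  # minimum: 1 - sin^2(2 theta)
```

Note that the amplitude of the oscillation is fixed by the mixing angle alone, while its period scales as $E/\delta m^2$ — the energy dependence that the decoherence corrections discussed below do not share.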
Now, we consider the coupling of neutrinos to the environment, which may be a background provided by a certain gravity effect such as Hawking’s evaporation of black holes. In general, without referring to the concrete form of the environment Hamiltonian $H_G(q)$ of dynamic variables $q$, we can roughly write down the interaction Hamiltonian in the weak-coupling limit
$$H_I=[\lambda _1\bar{\nu}_1\nu _1+\lambda _2\bar{\nu}_2\nu _2]q \label{4}$$
which is linear in both the density projections $\bar{\nu}_i\nu _i(i=1,2)$ of the neutrinos and the environment variable $q$. The coupling constants $\lambda _i$ $(i=1,2)$ satisfying $\lambda _1\neq \lambda _2$ imply that the couplings to the environment have different strengths for the different neutrinos $\nu _1$ and $\nu _2$. For instance, in the quantum gravity background, perhaps $\lambda _i\propto m_i^2.$ For the quantum dissipation problem, Caldeira and Leggett \[12\] have pointed out that, up to first order perturbation, any environment weakly coupled to a system may be approximated by a bath of oscillators with linear interaction, under the weak-coupling condition that “each environmental degree of freedom is only weakly perturbed by its interaction with the system”. For both quantum decoherence \[2,14\] as well as quantum dissipation \[12,13\], we have modeled a general environment with linear coupling Hamiltonians similar to Eq.(4) and discussed the universality of such a modeling scheme.
Because the total system, governed by the total Hamiltonian $H=H_n+H_G+H_I$, is closed, quantum mechanics works well to describe its dynamic evolution. Suppose the neutrino is initially in a flavor state $\nu _e$ with a pure density matrix $$\rho _{\nu _e}=\bar{\nu}_e\nu _e=\left(
\begin{array}{cc}
\cos ^2\theta & \cos \theta \sin \theta \\
\cos \theta \sin \theta & \sin ^2\theta
\end{array}
\right) \label{5}$$ and the environment in a general initial state with density matrix $\rho _G$, the initial condition for the total system should be: $$\rho (t=0)=\rho _{\nu _e}\otimes \rho _G \label{6}$$ By considering the quantum non-demolition features of the interaction, i.e., $[H_I,H_n]=0,$ $[H_I,H_G]\neq 0$, the total density matrix can be directly computed as
$$\rho (t)=\cos ^2\theta \cdot \bar{\nu}_1\nu _1\otimes U_1(t)\rho
_GU_1^{\dagger }(t)+$$ $$+\sin ^2\theta \cdot \bar{\nu}_2\nu _2\otimes U_2(t)\rho _GU_2^{\dagger }(t)]
\label{7}$$ $$+\cos \theta \sin \theta \cdot \bar{\nu}_1\nu _2\otimes U_1(t)\rho
_GU_2^{\dagger }(t)\exp (i\delta Et)+H.c.$$ where $$U_i(t)=\exp \{-i[H_G(q)+\lambda _iq]t\}\quad (i=1,2) \label{8}$$ for $i=1,2$ are the effective evolution operators of environment. They show that the back-actions on environment exerted by $\nu _1$ and $\nu _2$ respectively are different for different coupling strengths $\lambda
_i(i=1,2)$. Namely, the environment can distinguish different mass eigenstates of the neutrino.
As our main interest is only in the neutrino oscillation rather than in the motion of the environment, we trace over the variables of the environment to obtain the reduced density matrix of the neutrino system. It is
$$\rho _n(t)=\cos ^2\theta \cdot \bar{\nu}_1\nu _1+\sin ^2\theta \cdot \bar{%
\nu}_2\nu _2$$ $$+F(t)\cos \theta \sin \theta \cdot \bar{\nu}_1\nu _2\exp (i\frac{\delta m^2t%
}{2E})+H.c. \label{9}$$ Here, the decoherence factor $$F(t)=Tr_g[U_2^{\dagger }(t)U_1(t)\rho _G] \label{10}$$ characterizes the extent to which the neutrino system loses its coherence. The reduced density matrix $\rho _n(t)$ determines the survival probability that an electron neutrino remains itself: $$p_{\nu _e\rightarrow \nu _e}(t)=Tr_n[\bar{\nu}_e\nu _e\rho _n(t)]$$ $$=1-\frac 12\sin ^2(2\theta )\{1-\cos [\delta Et+\alpha (t)]|F(t)|\}
\label{11}$$ where the real number $\alpha (t)$ is the phase of $F(t).$ It is easy to prove that $|F(t)|$ is no larger than 1, since $Tr[\rho _G]=1$ and the $U_i(t)$ ($i=1,2$) are unitary. In the above concise formula (11) for neutrino oscillation, the influences of the environment are simply summed up in a decoherence factor $F(t)$, which is time-dependent but energy-independent. If the difference $\lambda _1-\lambda _2$ in the couplings of $\nu _1$ and $\nu _2$ to the environment does not depend on $\delta m$, the difference in their masses, the result (11) directly shows a novel fact: even if the neutrino is massless, or the electron and muon neutrinos have the same mass, it is possible to see oscillatory behavior between the electron and muon neutrinos. Here, the neutrino oscillation is caused by the phase $\alpha (t)$ of the decoherence factor, that is $$p_{\nu _e\rightarrow \nu _e}(t)=1-\frac 12\sin ^2(2\theta )(1-\cos [\alpha (t)]|F(t)|) \label{12}$$ As this temporal phenomenon is independent of the energy of the neutrino flux, it cannot by itself account for the energy-dependent spectrum in the present solar neutrino problem.
To go one step further we consider two extreme cases. When $|F(t)|=1,$ or $F(t)=\exp [i\alpha (t)]$ for real $\alpha (t)$, the system is still in a pure state $$\rho _n(t)=\bar{\phi}(t)\phi (t):$$ $$\phi (t)=\nu _1\exp (iE_1t-i\frac{\alpha (t)}2)\cos \theta +\nu _2\exp (iE_2t+i\frac{\alpha (t)}2)\sin \theta \label{13}$$ In this case the influences of the environment on neutrino oscillation only result in a time-dependent phase shift $\frac{\alpha (t)}2$, i.e., $$p_{\nu _e\rightarrow \nu _e}(t)=1-\sin ^2(2\theta )\sin ^2[\frac{\delta m^2t}{4E}+\frac{\alpha (t)}2] \label{14}$$ As a signature of the environmental influence, this phase shift only affects the spectrum distribution $P(E)=p_{\nu _e\rightarrow \nu _e}(t)$ in the higher energy region. In the opposite extreme case, with $F(t)=0,$ the coherence in the neutrino system is completely lost and the neutrinos transit into a completely mixed state $$\rho _n(t)=\cos ^2\theta \cdot \bar{\nu}_1\nu _1+\sin ^2\theta \cdot \bar{\nu}_2\nu _2 \label{15}$$ with vanishing off-diagonal elements. This leads to a constant transition probability $\frac 12\sin ^2(2\theta )$ from $\nu _e$ to $\nu _\mu $, and the corresponding survival probability that $\nu _e$ remains itself is $1-\frac 12\sin ^2(2\theta ).$ Notice that these results only depend on the vacuum mixing angle $\theta $ and do not reproduce the variation of the neutrino spectrum with energy.
To get an idea of the explicit form of $F(t)$ we illustrate the determination of the decoherence factor in the harmonic oscillator model of the environment \[3,14\]. Let $a_j^{+}$ and $a_j$ be the creation and annihilation operators for the $j$-th harmonic oscillator in the environment. The Hamiltonian of the environment takes the form $H=\sum_{j=1}^N\hbar \omega _ja_j^{+}a_j$ and its interaction with the neutrino can be modeled as a linear coupling: $$H_I=[\lambda _1\bar{\nu}_1\nu _1+\lambda _2\bar{\nu}_2\nu
_2]\sum_{j=1}^N\hbar g_j(a_j^{+}+a_j) \label{16}$$ Let the environment be initially in the vacuum state $|0\rangle _e=|0_1\rangle \otimes |0_2\rangle \otimes \cdots \otimes |0_N\rangle $, where $|0_j\rangle $ is the ground state of the $j$-th single harmonic oscillator. The corresponding decoherence factor $$F(t)\equiv F(N,t)=\prod_{j=1}^N\ \langle 0_j|U_2^{j\dagger }(t)U_1^j(t)|0_j\rangle \equiv \prod_{j=1}^NF_j(t)$$ can be obtained by solving the Schroedinger equations of $U_i^j(t)$ ($i=1,2$) governed by the Hamiltonians of forced harmonic oscillators $H_i^j=\hbar
\omega _ja_j^{+}a_j+\lambda _ig_j(a_j^{+}+a_j).$ The result is $$|F(t)|=\exp [-\sum_{j=1}^N\frac{2g_j^2(\delta \lambda )^2}{\omega _j{}^2}%
\sin ^2(\frac{\omega _jt}2)]$$ $$\alpha (t)=\sum_{j=1}^N\frac{g_j^2\delta \lambda ^2}{\omega _j{}}[t+\frac{%
\sin (\omega _jt)}{\omega _j}] \label{17}$$ where $\delta \lambda =\lambda _2-\lambda _1,\delta \lambda ^2=\lambda
_2^2-\lambda _1^2.$ The decoherence time is decided by the norm part of $F(N,t)$, which is the same as that obtained from the two-level-subsystem model of the environment in the weak-coupling limit \[2\]. Especially, when the environment consists of identical particles, we have $\omega _j=\omega $ and $g_j=g.$ In the van Hove limit of weak coupling, i.e., $g\rightarrow 0,N\rightarrow \infty $, but $g\sqrt{N}\rightarrow G$, a finite constant, a simple result for the decoherence factor is obtained as $|F(t)|=\exp [-(\delta
\lambda )^2\frac{2G^2}{\omega {}^2}\sin ^2(\frac{\omega t}2)]$ and $\alpha
(t)=\delta \lambda ^2[\frac{G^2}{\omega {}^2}t+\frac{\sin (\omega t)}\omega
].$ Then, an explicit formula for the environment-modified neutrino oscillation is obtained as $$p_{\nu _e\rightarrow \nu _e}(t)=1-\frac 12\sin ^2(2\theta )\times$$ $$\{1-\cos [(\frac{\delta m^2}{4E}+\frac{G^2\delta \lambda ^2}{2\omega {}^2}%
)t+\delta \lambda ^2\frac{\sin (\omega t)}{2\omega }]\times$$ $$\exp [-(\delta \lambda )^2\frac{2G^2}{\omega {}^2}\sin ^2(\frac{\omega t}2%
)]\} \label{18}$$ In the low-frequency limit of the environment ($\omega \rightarrow 0$), denoting $\beta =\frac 12(\delta \lambda )^2G^2$, we have an exponential decay $|F(t)|=\exp [-\beta t]$ and $\alpha (t)=\frac \beta {\omega ^2}t$. So $$p_{\nu _e\rightarrow \nu _e}(t)\rightarrow 1-\frac 12\sin ^2(2\theta )\times$$ $$\{1-\cos [(\frac{\delta m^2}{4E}+\frac \beta {\omega ^2})t]\}\times \exp
[-\beta t]. \label{19}$$
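In this low-frequency limit, Eq. (19) is a damped oscillation that saturates at $1-\frac 12\sin ^2(2\theta )$, and it keeps oscillating even for $\delta m^2=0$ through the phase $\beta t/\omega ^2$. A minimal numerical sketch (all parameter values are illustrative, not fits to data):

```python
import numpy as np

def survival_decohered(theta, dm2, E, beta, omega, t):
    """Eq. (19): low-frequency-environment limit, with exponential damping
    |F(t)| = exp(-beta t) and environment-induced phase (beta / omega^2) t."""
    phase = (dm2 / (4.0 * E) + beta / omega ** 2) * t
    return 1.0 - 0.5 * np.sin(2.0 * theta) ** 2 * (1.0 - np.cos(phase) * np.exp(-beta * t))

theta, dm2, E = np.pi / 4, 1.0, 1.0    # maximal mixing, illustrative dm2 and E
beta, omega = 0.1, 1.0                 # illustrative environment parameters

t = np.linspace(0.0, 100.0, 5)
print(survival_decohered(theta, dm2, E, beta, omega, t))
# As t grows the coherence dies out and P approaches 1 - 0.5 sin^2(2 theta).
```

Setting `dm2 = 0` in this sketch still gives a time-varying probability, which is precisely the mass-independent oscillation effect noted above.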
In conclusion, we have considered the quantum decoherence problem in neutrino flavor oscillation based on our dynamic approach to quantum measurement. The phenomenon of a transition of neutrinos from a pure state to a mixed state, similar to the EHNS mechanism, has been understood to some extent within ordinary quantum mechanics. Our study leads to modified neutrino oscillations with two additional time-dependent parameters. For specified environments, they show that the oscillating phenomena of the neutrino still exist even without a mass difference in free neutrinos. However, we have not yet considered the influences of quantum decoherence on the MSW mechanism, nor the quantum dissipation effects due to couplings to the environment of the off-diagonal form $[g\bar{\nu}_1\nu _2+H.c]q$ \[12, 13\]. These effects are perhaps crucial to the final solution of the solar neutrino problem, and the method used in this paper can be generalized to handle them without any difficulty in principle. The main problem at present is that the explicit calculation of the decoherence factor $F(t)$ depends to a certain extent on the details of the environment and its couplings to neutrinos, but to date we do not have complete knowledge of such a complex environment. If the environment is provided by the Hawking evaporation of black holes in a gravity field, there is not yet a universally acknowledged scheme to quantize gravity. Regardless of this problem, the present paper at least suggests a general rule to microscopically analyze the decoherence modification of neutrino oscillation. Once the Hamiltonians of the environment and its coupling to neutrinos, even in the weak-coupling case, are provided, the decoherence factor can be explicitly calculated to fit the data in the solar neutrino problem.
Acknowledgment: The authors wish to thank T.H. Ho, X.Q. Li, Y.L. Wu and C.H. Chang for many helpful discussions. This work is supported in part by the NSF of China.
$^1$electronic address: suncp@itp.ac.cn;
$^2$internet www site: http://www.itp.ac.cn/suncp
W.H.Zurek, Phys.Today, [**44(10)**]{}, 36 (1991).
C. P. Sun, Phys. Rev. A, [**48**]{}, 878 (1993); C. P. Sun, X. X. Yi, and X. J. Liu, Fortschr. Phys., [**43**]{}, 585 (1995); C. P. Sun, H. Zhan and X. F. Liu, Phys. Rev. A, [**57**]{}, in press (1998); C. P. Sun, [*in*]{} [*Quantum Coherence and Decoherence*]{}, ed. by K. Fujikawa and Y. A. Ono, pp. 331-334, (Amsterdam: Elsevier Science Press, 1996); C. P. Sun, Chin. J. Phys., [**32**]{}, 7 (1994); X. J. Liu and C. P. Sun, Phys. Lett. A, [**198**]{}, 371 (1995); C. P. Sun, X. X. Yi, S. R. Zhao, L. Zhang and C. Wang, Quantum Semiclass. Opt. [**9**]{}, 119 (1997).
K. Hepp, Helv. Phys. Acta, [**45**]{}, 237 (1972); J. S. Bell, Helv. Phys. Acta, [**48**]{}, 93 (1975); M. Cini, Nuovo Cimento, [**73B**]{}, 27 (1983); M. Namiki and S. Pascazio, Phys. Rev. A [**44**]{}, 39 (1991); H. Nakazato and S. Pascazio, Phys. Rev. Lett., [**70**]{}, 1 (1993).
S. M. Bilenky and B. Pontecorvo, Phys. Rep. [**41**]{}, 225 (1978).
L. Wolfenstein, Phys. Rev. D [**17**]{}, 2369 (1978); S. P. Mikheyev and A. Yu. Smirnov, Sov. J. Nucl. Phys. [**42**]{}, 913 (1985); Nuovo Cimento [**9C**]{}, 17 (1986). For an excellent review see also T. K. Kuo and J. Pantaleone, Rev. Mod. Phys. [**61**]{}, 937 (1989).
C.P.Sun, Phys.Rev[**.D.38**]{}, 2908(1988); P.I. Krastev and A.Yu. Smirnov, Phys. Lett. [**338B,** ]{}282 (1994) .
J.Ellis, J.S.Hagelin, D.V.Nanopoulos and M.Srednicki, Nucl.Phys.[**B 241,** ]{}381(1984).
S.W.Hawking, Commun. Math.Phys. [**43,** ]{}199(1975); Nature [**248**]{}, 30, (1974).
T.Banks, M.E.Peskin and L.Susskind, Nucl.Phys.[**B 244,** ]{}125 (1984); B. Reznik, Phys. Rev. Lett. [**76,** ]{}119 (1996).
CPLEAR Collaboration and J. Ellis et al. Phys. Lett. [**B 364**]{}, 239(1995); J. Ellis, N. E. Mavromatos and D.V.Nanopoulos, Phys Lett.[**B**]{} [**293**]{}, 142 (1992); P. Huet and E. Peskin, Nucl. Phys.[**B 434,** ]{}3(1995) and the refrences therein.
J. Ellis, N. E. Mavromatos and D. V. Nanopoulos, Phys. Lett. [**B 293**]{}, 37 (1992); J. Ellis, N. E. Mavromatos, E. Winstanley and D. V. Nanopoulos, Mod. Phys. Lett. [**A12**]{}, 243 (1997).
A. O. Caldeira and A. J. Leggett, [*Ann. Phys*]{}. (N.Y.), [**149**]{}, 374 (1983); A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher and W. Zwerger, Rev. Mod. Phys. [**59**]{}, 187 (1987); C. Gardiner, [*Quantum Noise*]{}, (Berlin, Springer, 1991).
L. H. Yu and C. P. Sun, Phys. Rev. A, [**49,**]{} 592 (1994); C. P. Sun and L. H. Yu, Phys. Rev. A, [**51**]{}, 1845 (1995); C. P. Sun, Y. B. Gao, H. F. Dong and S. R. Zhao, Phys. Rev. E. [**57,** ]{} 3900 (1998)
W. G. Unruh, Phys. Rev. A [**51**]{}, 992 (1995).
---
abstract: 'Two-dimensional gas chromatography ([GC$\times$GC]{}) plays a central role in the elucidation of complex samples. The automation of the identification of peak areas is of prime interest to obtain a fast and repeatable analysis of chromatograms. To determine the concentration of compounds or pseudo-compounds, templates of blobs are defined and superimposed on a reference chromatogram. The templates then need to be modified when different chromatograms are recorded. In this study, we present a chromatogram and template alignment method based on peak registration called [BARCHAN]{}. Peaks are identified using a robust mathematical morphology tool. The alignment is performed by a probabilistic estimation of a rigid transformation along the first dimension, and a non-rigid transformation in the second dimension, taking into account noise, outliers and missing peaks in a fully automated way. Resulting aligned chromatograms and masks are presented on two datasets. The proposed algorithm proves to be fast and reliable. It significantly reduces the time to results for [GC$\times$GC]{} analysis.'
author:
- |
Camille Couprie, Laurent Duval, Maxime Moreaud,\
Sophie Hénon, Mélinda Tebib, Vincent Souchon
title: |
[BARCHAN]{}: Blob Alignment for\
Robust CHromatographic ANalysis
---
> Published in Journal of Chromatography A (J. Chrom. A.), 2017, 1484, pages 65–72, Virtual Special Issue RIVA 2016 (40th International Symposium on Capillary Chromatography and 13th GCxGC Symposium)
>
> <http://dx.doi.org/10.1016/j.chroma.2017.01.003>
> **[Keywords]{}: Comprehensive two-dimensional gas chromatography; [GC$\times$GC]{}; Data alignment; Peak registration; Automation; Chemometrics**
Introduction
============
First introduced in 1991 by Phillips *et al.* [@Liu_Z_1991_j-chromatogr-sci_comprehensive_tdgcuoctmi], comprehensive two-dimensional gas chromatography ([GC$\times$GC]{}) has become in the past decade a highly popular and powerful analytical technique for the characterization of many complex samples such as food derivatives, fragrances, essential oils or petrochemical products [@Adahchour_M_2008_j-chrom-a_recent_dactdgc; @Meinert_C_2012_j-angew-chem-int-ed_new_dssctdgc; @Seeley_J_2012_j-chrom-a_recent_afcmgc; @Cortes_H_2009_j-sep-sci_comprehensive_tdgcr]. In the field of the oil industry, [GC$\times$GC]{} gives an unprecedented level of information [@Vendeuvre_C_2005_j-chrom-a_characterization_mdctdgcgcgcpapvsamd] thanks to the use of two complementary separations combining different selectivities. It is very useful in the understanding of catalytic reactions or in the design of refining process units [@Bertoncini_F_2013_book_gas_c2dgcpirs; @Nizio_K_2012_j-chrom-a_comprehensive_msap].
From an instrumental point of view, much progress has been made since the early nineties on both hardware and modulation systems [@Edwards_M_2011_j-anal-bioanal-chem_modulation_ctdgc20yi]. Many modulator configurations are depicted in the literature or nowadays sold by manufacturers. With the use of leak-free column unions, many of these systems have become robust and easy to use, without cryogenic fluids handling, while providing high resolution. Within a series of several consecutive injections, almost no significant shifts in retention times are observed and repeatability of [GC$\times$GC]{} experiments is nowadays a minor problem. However, reproducibility of [GC$\times$GC]{} results for detailed group-type analysis on complex mixtures is still a great challenge due to column aging, trimming or to slight differences in column features. This results in shifts in retention times that can affect the proper quantification of a single compound, a group of isomers or pseudo-compounds. Experimental retention time locking (RTL) procedures have been proposed to counterbalance shifts in retention times but these procedures must be repeated regularly [@Mommers_J_2011_j-chrom-a_retention_tlpctdgc]. On the way to routine analysis for [GC$\times$GC]{}, data treatment has therefore become the preferred option to reduce the time to results [@Vendeuvre_C_2007_j-ogst_comprehensive_tdgcdcpp; @Murray_J_2012_j-chrom-a_qualitative_qactdgc; @Reichenbach_S_2012_j-chrom-a_features_ntcsactdc; @Zeng_Z_2014_j-trac-trends-anal-chem_interpretation_ctdgcduac].
A common way of treating [GC$\times$GC]{} data is to quantify compounds according to their number of carbon atoms and their chemical families by dividing the 2D chromatographic space into contiguous regions that are associated to a group of isomers. This treatment benefits from the group-type structure of the chromatograms and from the roof-tile effect for a set of positional isomers. For example, for a classical diesel fuel, up to or regions (often referred to as blobs) may be defined. Due to the lack of robustness in retention times, this step often requires human input and is highly time-consuming when moving from one instrument to another or when columns are getting degraded. Several hours may be necessary to correctly recalibrate a template of a few hundred blobs on a known sample. This operator-dependent step causes variability in quantitative results which is detrimental to reproducibility. For that reason, 2D-chromatogram alignment methods, consisting in modifying a recently acquired chromatogram to match a reference one, have been quite an active research area.
In this paper, we propose a new algorithm called [BARCHAN]{}[^1] which aims at aligning [GC$\times$GC]{}chromatograms. It relies on a first peak selection step and then considers the alignment of the two point sets as a probability density estimation problem. This algorithm does not require the placement of anchor points by the user.
Material and methods
====================
Datasets and [GC$\times$GC]{}methods
------------------------------------
The straight-run gas-oil sample named GO-SR which is used in this study was provided by IFP Energies nouvelles and was analyzed on different experimental set-ups. Its boiling point distribution ranges from .
Dataset 1 was built by considering two [GC$\times$GC]{}chromatograms obtained on two different experimental set-ups in the same operating conditions with cryogenic modulation. These [GC$\times$GC]{}experiments were carried out with an Agilent 7890A chromatograph (Santa Clara, California, USA) equipped with a split/splitless injector, an LN2 two-stage four-jet cryogenic modulation system from LECO (Saint-Joseph, Michigan, USA) and an FID. The two evaluated column sets were composed of an apolar 1D HP-PONA column (, , , J&W, Folson, USA) and a mid-polar 2D BPX-50 column (, , , SGE, Milton Keynes, United Kingdom) connected together with Siltite microunions from SGE. Experiments were run with a constant flow rate of , a temperature program from () to at , a offset for hot jets and a modulation period. of neat sample were injected with a split ratio.
Dataset 2 includes a reference chromatogram obtained in the previous conditions and a chromatogram of the same sample obtained with a microfluidic modulation system. These [GC$\times$GC]{}data were obtained on an Agilent 7890B chromatograph equipped with a split/splitless injector, a Griffith-type reversed-flow modulation system [@Griffith_J_2012_j-chrom-a_reversed-flow_dfmctdgc] supplied by the Research Institute for Chromatography (Kortrijk, Belgium) and an FID. The modulation system consists of two Agilent CFT plates (a purged three-way and a two-way splitter) connected to an accumulation capillary. Separation was performed on a DB-1 (, , ) 1D column and a DB-17HT (, , , J&W) 2D column. The modulation period was set to whereas the oven programming and injection conditions were similar to those previously described.
Software
--------
[BARCHAN]{}is implemented in C and Matlab. The in-house platform INDIGO runs it through a user-friendly interface, while the proprietary 2DChrom software creates template masks (.idn files) and 2D images from [GC$\times$GC]{}data.
Calculations
------------
The quality of the alignments obtained with [BARCHAN]{}was evaluated in two different ways. First, the correlation coefficient CC [@DeBoer_W_2014_j-chrom-a_two-dimensional_spac] and the Structural Similarity index SSIM [@Wang_Z_2004_j-ieee-tip_image_qaevss] between the reference chromatogram and the other one were computed. They directly compare global image intensities, without feature analysis. Calculation details for CC and SSIM are provided in the supplementary material. These indices were computed on a restricted area of interest defined by the user. A second indicator of alignment quality is the match between the [BARCHAN]{}-adjusted template and a fully manually registered template. In practice, this featural similarity is assessed by comparing the quantitative results obtained on chemical families with the manual template mask and with the [BARCHAN]{}-optimized mask.
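As a minimal illustration of the first, image-based indicator, the following sketch computes the Pearson correlation coefficient CC between two chromatogram images of identical size. Function and variable names are illustrative; the SSIM computation, which combines luminance, contrast, and structure terms, is omitted.

```python
import math

def correlation_coefficient(a, b):
    """Pearson correlation between two images of identical shape,
    flattened to 1D (a global, feature-free similarity index)."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den
```

A perfectly aligned pair gives CC = 1, which is why small CC changes between transformation types still warrant the closer featural inspection described below.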
Theory
======
Related works
-------------
We may distinguish two classes of alignment methods: those performed directly on the full chromatographic signal, and those requiring a prior peak selection step. In the first class, the works of [@VanMispelaar_V_2003_j-chrom-a_quantitative_atcctdgc] and [@Pierce_K_2005_j-anal-chem_comprehensive_tdrtaaecactdsd] look for shifts minimizing a correlation score between signals. In [@Hollingsworth_B_2006_j-chrom-a_comparative_vctdgc], an affine transformation is assumed between the two chromatograms to register. The recent work of de Boer [@DeBoer_W_2014_j-chrom-a_two-dimensional_spac] looks for a warping function, parametrized with splines, that transforms the chromatogram to be registered into a chromatogram aligned with the reference. Low-degree polynomial alignment is proposed in [@Reichenbach_S_2015_j-anal-chem_alignment_ctdgcdscd]. Full image registration [@Zitova_B_2003_j-image-vis-comput_image_rms] is however of limited use for [GC$\times$GC]{}because of the variability in chromatograms: the positions of peaks in the two chromatograms may be similar, but their intensities generally are not. Therefore, the majority of alignment methods choose to first extract peaks in the reference and target chromatograms, so as to register only the informative parts of the chromatographic images.
Thus, among approaches dedicated to chromatogram alignment, the work of [@VanMispelaar_V_2005_j-chrom-a_classification_hscoudsctdgcmt] (focused on quantitative analysis) deduces local peak displacements by correlation computations in slightly shifted blocks surrounding peaks. Variations of peak patterns under different experimental conditions (e.g. temperature) are studied in [@Ni_M_2005_j-chrom-a_peak_pvrctdgca], which exhibits satisfactory results for estimating an affine transformation. Similarly, [@Reichenbach_S_2009_j-chrom-a_smart_tppmctdlc] also models rigid transformations for LC$\times$LC (2D liquid chromatography) template alignment. However, these hypotheses appear too restrictive in a general setting. Therefore [@Zhang_D_2008_j-anal-chem_two-dimensional_cowaagcgcmsd] extended the space of possible deformations by looking for a warping function that transforms signals. Correlation Optimized Warping (COW) is judged effective by [@VanNederkassel_A_2006_j-chrom-a_comparison_taca], which compares three different registration approaches, including target peak alignment (TPA) and semi-parametric time warping (STW), for one specific analysis. However, COW is still not satisfactory when incomplete separation and co-elution problems exist, as pointed out by [@Parastar_H_2012_j-chem-int-lab-syst_comprehensive_tdgcgcgcrtscmubpacosmcr]. Instead, the latter uses bilinear peak alignment in addition to COW to correct for progressive within-run retention time shifts in the second chromatographic dimension. In [@Weusten_J_2012_j-anal-chim-acta_alignment_csgcgcmsfucm], the alignment is performed after embedding the chromatogram surfaces into a three-dimensional cylinder, and the parametrization of the transform employs polynomials.
The DIstance and Spectrum Correlation Optimization (DISCO) alignment method of [@Wang_B_2010_j-anal-chem_disco_dscoatdgctofmsbm], extended in [@Wang_B_2012_incoll_disco2_cpaatdgctofms], uses an elaborate peak selection procedure followed by interpolation to perform the alignment. The approach of [@Kim_S_2011_j-bmc-bioinformatics_smith-waterman_pactdgcms] also performs peak alignment via correlation score minimization using dynamic programming, comparing favorably to DISCO. Finally, the work of [@Gros_J_2012_j-anal-chem_robust_aatdc] assesses different [GC$\times$GC]{}alignment methods against a new one. Their method requires a manual placement of matching peak pairs; the registration is then performed differently on each axis: linear deformations along one dimension, and a neighbor-based interpolation in a Voronoi diagram defined by the alignment anchor points for the other dimension. The linear constraint is relevant because displacements along one dimension are independent of the elution times in the other dimension. The use of user-defined alignment points is robust to large variations between the reference and target chromatograms, at the expense of a time-consuming marker placement.
(Flowchart \[fig:flowchart\]: the reference and target chromatograms are loaded, feature points are extracted in the user-selected areas, the two point sets are registered, and the procedure outputs a registered template mask and/or an aligned chromatogram.)
[BARCHAN]{}methodology
----------------------
A schematic view of the principles of [BARCHAN]{}is depicted in the flowchart of Figure \[fig:flowchart\]. First, the [GC$\times$GC]{}chromatogram of the sample to analyze and the reference 2D image are loaded as image files. Then, the user is provided with a brush to surround, in a user interface, the area of interest on both the reference and the new 2D chromatograms (see Figure \[fig:areas\]). Peaks are extracted in those areas (Section \[sec:hmax\]). Only one centroid per local maximum is retained for the point set registration, in order to reduce computation times and to prevent a bias toward large peaks. The reference point set is then assimilated to the centroids of a Gaussian Mixture Model (GMM), and a weighted noise model is added. Advantage is taken of recent progress in point set registration, using a probabilistic and variational approach [@Myronenko_A_2010_j-ieee-tpami_point_srcpd]. This choice is motivated by the fact that a complex transformation must be modeled while remaining robust to noise and outliers. In this context, GMMs are particularly efficient at reconstructing missing data, which is especially convenient when peaks selected in one point cloud are not included in the other one. Finally, the model parameters are optimized to yield registered results. Two types of results are produced:
- if a template mask for the reference chromatogram exists, the transformation of the template points leading to a registered template mask is computed.
- an aligned chromatogram may also be produced by computing the transformation of a grid defined as the coordinates of every pixel in an image, and interpolating the target image values at the transformed coordinates.
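A toy version of the second output can be sketched as follows, assuming for illustration a purely rigid first-coordinate transform $T(j) = s\,j + t$ and nearest-neighbour sampling; the actual algorithm applies the estimated mixed rigid/non-rigid transform and interpolates the image values.

```python
def align_image(target, s, t):
    """Resample `target` so that out[i][j] = target[i][T(j)], with the
    illustrative rigid x-transform T(j) = s*j + t (nearest neighbour,
    clamped at the image borders)."""
    H, W = len(target), len(target[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            src = int(round(s * j + t))      # transformed x-coordinate
            src = min(max(src, 0), W - 1)    # clamp to valid columns
            out[i][j] = target[i][src]
    return out
```

Sampling the target at the transformed grid (rather than pushing pixels forward) guarantees that every output pixel receives a value, which is the reason the grid-transform formulation is used.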
Details on the calculations for every step are provided in the next paragraphs.
Feature point extraction\[sec:hmax\]
------------------------------------
Despite the good behavior of the employed registration algorithm with respect to noise and outliers, it is desirable to extract the most similar point sets possible. Therefore, a preliminary [GC$\times$GC]{}signal enhancement [@Ning_X_2014_j-chemometr-intell-lab-syst_chromatogram_bedusbeads; @Samanipour_S_2015_j-chrom-a_analyte_qctdgcambcpdmeers] proves useful.
Inherent to the [GC$\times$GC]{}experimental procedure, fragments of the stationary phase are frequently shed by the column, resulting in hyperbolic lines in the chromatogram (column bleeding). Distinguishing them from real peaks is difficult to automate because of possible overlaps with the chromatogram peaks of interest. Therefore, in our treatments, a rough area of interest is delimited by an operator, which takes approximately ten seconds.
Rather than using second or third derivatives of the chromatogram [@Fredriksson_M_2009_j-sep-sci_automatic_pfmlcmsdgsdf], which require setting non-trivial parameters, we employ the approach of [@Bertoncini_F_2013_book_gas_c2dgcpirs p. 97–106] and extract the $h$-maxima of the chromatograms. Simply put, all local maxima having a height greater than a scalar $h$ are extracted. Starting from an input signal $f$ from $\R^d$ to $\R$, the positions of the $h$-maxima may be obtained via a morphological opening by reconstruction, denoted $\gamma^{\mbox{rec}}(f,f-h)$. More specifically, this operation is defined as the supremum of all geodesic dilations of $f-h$ by unit balls under $f$. More details are provided in [@Bertoncini_F_2013_book_gas_c2dgcpirs] and a scheme is displayed in Figure \[fig:hmax\].
![Detection of $h$-maxima: only one peak is detected in this example (dotted line).\[fig:hmax\]](figure02hmaxima){width="40.00000%"}
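A pure-Python 1D sketch of this extraction is given below: the geodesic reconstruction by dilation is iterated to stability, and a local maximum is kept when its height above the reconstruction reaches $h$. This is a didactic re-implementation, not the production code.

```python
def reconstruct_by_dilation(seed, mask):
    """Morphological reconstruction of `seed` under `mask` (1D):
    iterate unit geodesic dilations until stability."""
    rec = list(seed)
    changed = True
    while changed:
        changed = False
        for i in range(len(rec)):
            dilated = max(rec[max(0, i - 1):i + 2])  # unit-ball dilation
            v = min(dilated, mask[i])                # geodesic constraint
            if v > rec[i]:
                rec[i], changed = v, True
    return rec

def h_maxima_positions(f, h):
    """Indices of local maxima of f whose dynamic is at least h."""
    rec = reconstruct_by_dilation([v - h for v in f], f)
    return [i for i in range(1, len(f) - 1)
            if f[i] > f[i - 1] and f[i] > f[i + 1]
            and f[i] - rec[i] >= h]
```

On the example of Figure \[fig:hmax\], a secondary peak whose dynamic is below $h$ is absorbed by the reconstruction and therefore not detected.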
Data alignment model
--------------------
To guarantee results where the first point set is similar to the registered point set, while being robust to noise and outliers, we choose to employ a probabilistic approach. Supposing that the first point set $X$ follows a normal distribution, the Coherent Point Drift method [@Myronenko_A_2010_j-ieee-tpami_point_srcpd] seeks to estimate the probability density that explains the data $X$ as a weighted sum of Gaussians initially centered by the second point set $Y$.
We introduce our notations as follows. The first point set of size $N\times2$, corresponding to the coordinates of $N$ peaks extracted in the target chromatogram, is denoted $X=\{X_1, \ldots, X_N\}$. The second point set $Y=\{Y_1, \ldots, Y_M\}$ of size $M\times2$ corresponds to the peak coordinates in the reference chromatogram and is assimilated to the centroids of a GMM. Each component $X_i$ is a vector composed of two coordinates denoted $X^{(1)}_i$ and $X^{(2)}_i$. The vector $X^{(i)}$ denotes the $i^{\scriptsize{\mbox{th}}}$ row of matrix $X$. Adding a weighted noise model to the GMM probability density function leads to: $$p(X_n)= \frac{w}{N} + \sum_{m=1}^{M} \frac{1-w}{2 M \pi \sigma^2} \exp \left( -\frac{\|X_n-T(Y_m)\|^2}{2\sigma^2}\right)$$ where the first term accounts for uniform noise weighted by the parameter $w$ fixed between 0 and 1, $\sigma$ is a variance parameter to estimate, and $T$ is the point cloud transform to estimate. In this work, motivated by the failure of global rigid transformation attempts on our data, we model two different transforms across the two dimensions. We assume that a rigid displacement occurs along the $y$-axis (the second, very short column), similarly to [@Gros_J_2012_j-anal-chem_robust_aatdc], while non-rigid transformations are allowed along the $x$-axis (the first, normal-length column). The underlying assumption is a relative anisotropy of the data: two pixels separated in the vertical direction are distant by a much smaller time interval than two pixels aligned horizontally. The $x$-axis is thus potentially subject to larger nonlinear distortions. Thus, we model the transformation $T$ of point cloud $Y$ as: $$\begin{aligned}
T(Y^{(1)}) & =sY^{(1)}+t,\\
T(Y^{(2)}) & = Y^{(2)}+G W,
\end{aligned}$$ where $s$ and $t$ are real numbers, respectively a scale and a translation parameter to estimate, and $W$ is a vector of length $M$ of non-rigid displacements to estimate. The matrix $G \in R^{M\times M}$ is a symmetric matrix defined element-wise by:
$$G_{ij} = \exp\left( - \frac{\|Y_i-Y_j\|^2}{2\beta^2} \right),$$
where $\beta$ is a positive scalar. The minimization of the negative log-likelihood leads to the minimization of: $$E_1(\sigma, W, s,t)= - \sum_{n=1}^N \log p(X_n).$$ A regularization of the weights $W$, enforcing the motion to be smooth, is necessary for the non-rigid registration, resulting in the following variational problem: $$\min_{\sigma, W, s,t} E = E_1(\sigma, W, s,t) +\frac{\lambda}{2} \operatorname{\ensuremath{Tr}}(W^{\top} G W),$$ where $\operatorname{\ensuremath{Tr}}$ denotes the trace operator of a matrix. The estimation of the parameters $w$, $\beta$ and $\lambda$ is discussed in generic terms in [@Yuille_A_1989_j-ijcv_mathematical_amct] and [@Myronenko_A_2010_j-ieee-tpami_point_srcpd]. [BARCHAN]{}inherits a similar strategy, within the proposed combined rigid/non-rigid registration procedure. The parameter $w\in [0\,,\,1]$, related to the noise level, is first determined by visual inspection over ten regularly-spaced values. Although it was found to be the most influential parameter, our chromatograms share roughly the same signal-to-noise ratio, so this value is kept constant in all our experiments. For other data types, several figures illustrating registrations with varying amounts of noise and outliers under an appropriate choice of $w$ are presented in [@Myronenko_A_2010_j-ieee-tpami_point_srcpd]. The determination of the other parameters $\beta$ and $\lambda$ is also discussed in [@Yuille_A_1989_j-ijcv_mathematical_amct]. We have set them to $\beta=2$ and $\lambda=2$, the default values in [@Myronenko_A_2010_j-ieee-tpami_point_srcpd]. Slight changes did not noticeably affect the registration results.
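The kernel matrix and the mixed transform can be sketched numerically as follows, keeping the coordinate roles exactly as written in the equations above (scale and translation on the first coordinate, kernel-weighted displacements $GW$ on the second); a squared-exponential kernel in the CPD default form is assumed, and all names are illustrative.

```python
import math

def gaussian_kernel(Y, beta):
    """G[i][j] = exp(-||Yi - Yj||^2 / (2 beta^2)) on 2D points."""
    M = len(Y)
    return [[math.exp(-((Y[i][0] - Y[j][0]) ** 2 +
                        (Y[i][1] - Y[j][1]) ** 2) / (2 * beta ** 2))
             for j in range(M)] for i in range(M)]

def transform(Y, s, t, W, G):
    """Scale/translation on the first coordinate, Gaussian-kernel
    non-rigid displacement (G @ W) on the second."""
    M = len(Y)
    return [(s * Y[m][0] + t,
             Y[m][1] + sum(G[m][k] * W[k] for k in range(M)))
            for m in range(M)]
```

With $W = 0$ the transform reduces to the rigid part alone, which is the configuration penalized least by the $\frac{\lambda}{2}\operatorname{Tr}(W^\top G W)$ regularizer.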
Optimization
------------
We employ the Expectation-Maximization (EM) algorithm [@Dempster_A_1977_j-r-stat-soc-b-stat-methodol_maximum_lidema] that alternates between:
- the E step: the probability $P$ of correspondence is computed for every pair of points.
- the M step: the parameters $\sigma, s, t,$ and $W$ are estimated. To that end, we compute the partial derivatives of $E$ with respect to $\sigma, s, t,$ and $W$ and set them to zero, leading to an estimate of each parameter. Details are provided in the supplementary material, as well as the final algorithm itself.
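To make the alternation concrete, here is a toy 1D version in which the only transform parameter is a translation $t$. The mixture-plus-uniform density follows the form given earlier, but this is a simplified illustration rather than the full [BARCHAN]{}parameter set.

```python
import math

def em_translation(X, Y, w=0.1, iters=30):
    """Estimate a 1D translation t such that X ~ Y + t, with a
    GMM-plus-uniform-noise model (toy version of the EM alternation)."""
    N, M = len(X), len(Y)
    t, sigma2 = 0.0, 1.0
    for _ in range(iters):
        # E step: posterior P[m][n] that X[n] was generated by centroid m
        P = [[0.0] * N for _ in range(M)]
        for n in range(N):
            gs = [math.exp(-(X[n] - (Y[m] + t)) ** 2 / (2 * sigma2))
                  / math.sqrt(2 * math.pi * sigma2) * (1 - w) / M
                  for m in range(M)]
            denom = w / N + sum(gs)
            for m in range(M):
                P[m][n] = gs[m] / denom
        # M step: re-estimate t and sigma2 from the responsibilities
        Np = sum(map(sum, P))
        t = sum(P[m][n] * (X[n] - Y[m])
                for m in range(M) for n in range(N)) / Np
        sigma2 = max(sum(P[m][n] * (X[n] - Y[m] - t) ** 2
                         for m in range(M) for n in range(N)) / Np, 1e-8)
    return t
```

As the variance shrinks across iterations, the responsibilities concentrate on the correct peak pairings, which is the mechanism that makes the method tolerant of unmatched peaks.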
Results and discussion
======================
The areas of interest for both datasets 1 and 2 were defined so that every compound present in the sample is taken into account while limiting the number of peaks due to column bleeding (Figure \[fig:areas\]). The detected peaks appear as small blue dots on both chromatograms, whereas the selected areas are colored in green and delimited by a purple line. Peaks were extracted with the height parameter $h$ from Section \[sec:hmax\] set to 120 and 60 for datasets 1 and 2, respectively.
![image](figure03areasinterest){width="95.00000%"}

(Figure \[fig:areas\]: areas of interest on the reference and new chromatograms; bleeding regions are excluded from the selection, and the bands corresponding to the saturates and to the $^1\!A$, $^2\!A$ and $^3\!A$ aromatic families are indicated.)
Three types of transformations were evaluated: rigid transformations on both the $x$- and the $y$-axis, non-rigid transformations on both axes and [BARCHAN]{}transformation (non-rigid transformation on $x$-axis, rigid on $y$-axis). They are compared with the [Curfit2D]{}algorithm [@DeBoer_W_2014_j-chrom-a_two-dimensional_spac]. Significant changes in scores, especially for the CC index, suggest a better alignment of two chromatograms for dataset 1 with [BARCHAN]{}. However, small variations in these global indices demonstrate the need for a closer inspection of the results.
                No registr.   Rigid [@Myronenko_A_2010_j-ieee-tpami_point_srcpd]   Non-rigid [@Myronenko_A_2010_j-ieee-tpami_point_srcpd]   [Curfit2D]{}[@DeBoer_W_2014_j-chrom-a_two-dimensional_spac]   [BARCHAN]{}
  ----------- ------ ------------- ---------------------------------------------------- -------------------------------------------------------- ------------------------------------------------------------- -------------
  Dataset 1   CC
              SSIM
  Dataset 2   CC
              SSIM
  ---------------------------------------------------------- -------------------------------------------------
  ![image](figure04anotransformation){width="50.00000%"}     ![image](figure04brigid){width="50.00000%"}
  No transformation (centered cloud points)                  Rigid
  ![image](figure04cnonrigid){width="50.00000%"}             ![image](figure04dbarchan){width="50.00000%"}
  Non-rigid                                                  [BARCHAN]{}
  ---------------------------------------------------------- -------------------------------------------------
Figure \[fig:point-set\] shows the optimization results for the three tested transformations on dataset 1 as scatterplots [@Anscombe_F_1973_j-american-statistician_graphs_sa]. Blue circles correspond to peaks extracted from the reference chromatogram, whereas red crosses represent the extracted and transformed peaks of the new [GC$\times$GC]{}chromatogram. These plots show that a fully rigid transformation (Figure \[fig:point-set\], top right) does not allow a good match between the reference chromatogram and the new one. A better agreement is obtained with the [BARCHAN]{}algorithm and with the fully non-rigid transformation. However, in specific areas of the 2D chromatogram where the number of extracted peaks differs greatly between the reference and the new image (see red boxes at the bottom of Figure \[fig:point-set\]), the [BARCHAN]{}algorithm outperforms the fully non-rigid approach. The advantage of [BARCHAN]{}over the fully non-rigid approach is also visible in the transformation of template masks (see supplementary material). Whereas the [BARCHAN]{}algorithm leads to a coherent transformation of the template mask, including for blobs in the upper right part of the chromatogram which are extrapolated, the fully non-rigid deformation is not relevant there.
To illustrate the changes modeled by [BARCHAN]{}on the chromatograms, the reference and the new chromatograms from dataset 1 are displayed in Figure \[fig:defo\], as well as the resulting aligned chromatogram.
-------------------------------------------------------- -------------------------------------------------------------- ------------------------------------------------------------
![image](figure05abarchandeformnew){width="31.00000%"} ![image](figure05bbarchandeformreference){width="31.00000%"} ![image](figure05cbarchandeformbarchan){width="31.00000%"}
New. Reference. [BARCHAN]{}transformed.
-------------------------------------------------------- -------------------------------------------------------------- ------------------------------------------------------------
The featural efficiency of the chromatogram alignment was evaluated from a more informative, quantitative point of view on dataset 1. Three different ways of integrating the newly acquired chromatogram with 2DChrom were tested: 1) a hundred-percent manual adjustment (MA) procedure, during which the user moves every point of the reference template mask to make it match the new data using only simple local or global translation tools; 2) the sole application of the [BARCHAN]{}alignment algorithm on the raw data; and 3) the combination of [BARCHAN]{}with light manual editing. The modified mask, after transformation with [BARCHAN]{}following the flowchart in Figure \[fig:flowchart\], is displayed in Figure \[fig:reference\] for both datasets 1 and 2, together with the reference template mask on the reference analysis. Concerning dataset 1, it is clearly visible that the new analysis differs from the reference analysis despite the use of the same chromatographic conditions: the new data are slightly shifted to the left, and 2D retention times are higher, mainly due to lower elution temperatures from the 1D column. Realignment of the template mask nevertheless looks satisfactory, with an overall good match between the readjusted mask and the analysis. The same conclusions can be drawn for dataset 2, even though the changes between the reference chromatogram and the new one are large, as these data were not obtained with the same type of modulation system. This tends to show that the algorithm is robust and able to handle large deviations between reference and new data.
------------------------------------------------------------------------- --------------------------------------------------------------------- ---------------------------------------------------------------------
![image](figure06areferenceboxzoom){width="33.00000%"} ![image](figure06bdata1boxzoom){width="33.00000%"} ![image](figure06cdata2boxzoom){width="33.00000%"}
![image](figure06dreferencezoom1){width="33.00000%" height="20.00000%"} ![image](figure06edata1zoom1){width="33.00000%" height="20.00000%"} ![image](figure06fdata2zoom1){width="33.00000%" height="20.00000%"}
![image](figure06greferencezoom2){width="33.00000%" height="14.00000%"} ![image](figure06hdata1zoom2){width="33.00000%" height="14.00000%"} ![image](figure06idata2zoom2){width="33.00000%" height="14.00000%"}
Reference. Dataset 1 ($h = 120$). Dataset 2 ($h = 60$).
------------------------------------------------------------------------- --------------------------------------------------------------------- ---------------------------------------------------------------------
Results for the quantification of chemical families are reported in Table \[tab:2\]. These are compared with reference data previously obtained on GO-SR sample during an intra-laboratory reproducibility study on two different chromatographs with two different users so as to take into account both instrumental and user variability.
----------------- ----------- --------- ------------- -------------------
Chemical family Reference 100% MA [BARCHAN]{} [BARCHAN]{}$+$ MA
n-
i-
Analysis time
----------------- ----------- --------- ------------- -------------------
The [BARCHAN]{}transformation leads to coherent quantitative results for every chemical family except for normal and iso-paraffins ($n$- and $i$- respectively) and, to a lesser extent, naphthenes. The quantification of $n$-paraffins is underestimated while that of iso-paraffins is overestimated, because of slight misalignments of the template mask, as depicted at the bottom of Figure \[fig:reference\] and in Figure \[fig:misalignment\]. Indeed, some blobs identified in the reference mask as $n$-paraffins or naphthenes are only a few modulation periods wide, as they correspond to single compounds. Small deviations in the alignment procedure impact the accurate quantification of these blobs. An additional manual fitting is therefore required to satisfactorily correct the transformed integration mask for these specific compounds. It consists in manually moving the points of these small blobs to make them match the measured individual peaks. Movements are generally smaller than one or two pixels in the first dimension and minor in the second. This overall procedure typically applies to 20 to 40 blobs for a classical gas-oil template mask and requires a few minutes.
![\[fig:misalignment\] Misalignment for thin blobs on $n$-paraffins: the annotations mark the $i$-C10 to $i$-C13 blobs and the circled $n$-C10 to $n$-C13 peaks.](figure07misalignment "fig:"){width="49.00000%"}
Concerning the data analysis time required to correctly apply a sophisticated template mask to a new chromatogram, the complexity of the GO-SR sample and of its mask implies several hours of work for an experienced user with a non-automated procedure. With an anchor-point-based approach, at the very least, similar points in both chromatograms would need to be defined, resulting in a processing time approaching one hour. In contrast, the processing time for dataset 1 was about two minutes, including the peak selection step. Nevertheless, depending on the complexity of the samples, their range of differences, and the quality of the chromatographic acquisition, the resulting masks may still require light post-processing modifications. In this case, we verified that defining typically five anchor points in an interactive registration post-processing step was enough to obtain a result as good as a fully manually operated one. The time saving therefore remains significant compared to manual procedures.
Conclusion
==========
We present in this paper a 2D-chromatogram and template alignment algorithm named [BARCHAN]{}. It is based on three key ingredients: 1) a peak selection step performed on both the reference and the target 2D chromatograms; 2) two different types of transforms: a non-rigid one on the first chromatographic dimension and a rigid one on the second; 3) the use of the probabilistic Coherent Point Drift motion estimation strategy, which is robust to noise and outliers. The resulting procedure is an order of magnitude faster than competing user-interactive alignment algorithms, with an accuracy as good as manual registration while guaranteeing a better reproducibility. This fast procedure is of great interest when changing [GC$\times$GC]{}configurations or when translating [GC$\times$GC$-$MS]{}template masks to other [GC$\times$GC]{}analyses ([GC$\times$GC$-$FID]{}analyses, for example). Finally, the feature point selection may benefit from the Bayesian peak tracking recently proposed in [@Barcaru_A_2016_j-anal-chim-acta_bayesian_ptnpamgcgcc].
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank Dr de Boer for his help with [Curfit2D]{}.
[41]{} \[1\][\#1]{} \[1\][`#1`]{} urlstyle \[1\][doi:\#1]{} \[1\][doi:]{}\[2\][\#2]{}
, , , () () , .
, , , , () () .
, , , () () , .
, , () , .
, , , , , () () , .
, , , , , , , () () , .
, , (Eds.), , , .
, , , , () , .
, , , , () () , .
, , , , , , , () () , .
, , , , , , () () , .
, , () , .
, , , , , () , .
, , , , , , () , .
, , , , , , () , .
, , , () , .
, , , , , () () , .
, , , , , , () () , .
, , , , , () () , .
, , , , , () () , .
, , , , , , , , , () () , .
, , , () () , .
, , , , , , () () , .
, , , , , , () () , .
, , , , , () () , .
, , , , , () () , .
, , , , , () () , .
, , , , () , .
, , , , , () () , .
, , , , , , , , () () , .
, , , , , , in: , , , (Eds.), , vol. of **, , , , .
, , , , , () () , .
, , , , , , () , .
, , , () () , .
, , , , () , .
, , , , , , () , .
, , , , , () () , .
, , , () () , .
, , , , () () .
, , () .
, , , , () , .
[^1]: The name is inspired by wind-produced crescent-shaped sand dunes (*barkhan* or *barchan*) reminiscent of 2D chromatogram shapes.
|
---
abstract: 'We present a method to derive implicit solvent models of electrolyte solutions from all-atom descriptions; providing analytical expressions of the thermodynamic and structural properties of the ions consistent with the underlying explicit solvent representation. Effective potentials between ions in solution are calculated to perform perturbation theory calculations, in order to derive the best possible description in terms of charged hard spheres. Applying this method to NaCl solutions yields excellent agreement with the all-atom model, provided ion association is taken into account.'
author:
- 'John Jairo $^{1,2,3}$'
- 'Jean-François $^{1,2,3}$'
- 'Mathieu Salanne$^{1,2}$'
- 'Olivier Bernard$^{1,2}$'
- 'Marie Jardat$^{1,2}$'
- 'Pierre Turq$^{1,2}$'
title: 'Models of electrolyte solutions from molecular descriptions: The example of NaCl solutions'
---
Since the pioneering works of Debye, Hückel, and Onsager, electrolyte solutions have been commonly described by continuous solvent models, for which the McMillan-Mayer theory [@McMillan45] provides a rigorous statistical-mechanical foundation. Within that level of description, simple phenomenological models such as the primitive model (PM), for which the ions are assimilated to charged hard spheres [@Barthel], can lead to explicit formulas for the thermodynamic and structural properties (e.g., with the help of the mean spherical approximation (MSA) [@Blum1] or the binding MSA (BIMSA) [@Blum95]). These models are the most practical to use [@Dufreche05], since they allow for a direct link between the experimental measurements and the microscopic parameters of the system. Nevertheless, they ignore the molecular structure of the solvent. Consequently, they cannot properly account for the complex specific effects of the ions, which appear in numerous biological, chemical, and physical interfacial phenomena [@Jungwirth06; @Kunz04], without further developments.
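For reference, the screening length that underlies the Debye-Hückel picture can be computed directly from tabulated constants. The sketch below assumes a 1:1 electrolyte in a continuous solvent with the relative permittivity of water (78.5) at 298.15 K; the function name is illustrative.

```python
import math

def debye_length(ionic_strength_mol_per_l, eps_r=78.5, T=298.15):
    """Debye screening length (m) for a 1:1 electrolyte in a continuous
    solvent: lambda_D = sqrt(eps0*eps_r*kB*T / (2*NA*e^2*I))."""
    e = 1.602176634e-19        # elementary charge (C)
    kB = 1.380649e-23          # Boltzmann constant (J/K)
    NA = 6.02214076e23         # Avogadro number (1/mol)
    eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
    I = ionic_strength_mol_per_l * 1000.0   # convert mol/l to mol/m^3
    return math.sqrt(eps0 * eps_r * kB * T / (2 * NA * e ** 2 * I))
```

At 0.1 mol/l this gives roughly 0.96 nm, comparable to the ion sizes themselves, which is one reason continuous-solvent models need refinement at such concentrations.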
An alternative procedure consists in carrying out molecular simulations, in which both the solvent and the solute are treated explicitly. After a rigorous averaging over the solvent configurations, a coarse-grained description of the ions, which still includes the effect of the solvent structure, can be obtained [@Hess06bis; @Kalcher09; @Gavryushov06; @Lyubartsev95]. However, these methods are purely numerical: they do not provide any analytical expression for the thermodynamic quantities and are therefore restricted to simple geometries [@Horinek07; @Lund08] (bulk solutions or planar interfaces). The description of complex systems, such as porous or electrochemical materials, is still based on continuous solvent models [@VanDamme09].
In this letter we present a method aimed at bridging the gap between the analytical and numerical approaches. It is based on the application of liquid perturbation theory (LPT) [@Hansen] to effective ion-ion potentials extracted from molecular dynamics (MD) results. Different approximations of the PM are employed for the case of NaCl electrolyte solutions: a two-component model (MSA2), which takes only free ions into account, and two different three-component models (MSA3 and BIMSA3), which include a third species (the contact ion pair). As we proceed to show, LPT allows us to select the best simple model, one that accurately accounts for the thermodynamics and the physical chemistry of the system.
The first stage consists in calculating the McMillan-Mayer effective ion-ion interaction potentials $V_{ij}^{\text{eff}}(r)$, by inverting the radial distribution functions (RDF) $g_{ij}(r)$ obtained by MD. The simulations were carried out on a box of 2000 water molecules and 48 NaCl pairs using the same interaction potentials as in reference [@Rasaiah01]. This setup corresponds to a concentration of [$0.64\,\text{mol\,l}^{-1}$]{}. NPT ensemble sampling at standard pressure and temperature was enforced, with a time step of 1 fs and a pressure bath coupling constant of 1 ps. An equilibration run of 0.25 ns was followed by a production run of 0.6 ns for five different initial configurations. The averages of the resulting RDF were then used for the potential inversion via the HNC closure [@Hansen]. These effective potentials are assumed to be concentration independent and will be used for simulations at all concentrations.
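As a concrete sketch of this inversion step (not the authors' actual code; grids, tolerances, and the test potential are illustrative), the HNC closure gives $\beta V^{\text{eff}} = h - c - \ln g$ with $h = g - 1$, and the direct correlation function $c(r)$ follows from the Ornstein-Zernike relation in Fourier space:

```python
import numpy as np

def radial_ft(r, f, k):
    """3D radial Fourier transform: f_hat(k) = 4*pi/k * int r f(r) sin(kr) dr."""
    return np.array([4*np.pi/kk*np.trapz(r*f*np.sin(kk*r), r) for kk in k])

def radial_ift(k, fk, r):
    """Inverse transform: f(r) = 1/(2*pi^2*r) * int k f_hat(k) sin(kr) dk."""
    return np.array([np.trapz(k*fk*np.sin(k*rr), k)/(2*np.pi**2*rr) for rr in r])

def hnc_invert(r, g, rho, kmax=40.0, nk=800):
    """Recover beta*V_eff(r) from a tabulated RDF g(r) at number density rho:
        HNC closure:       beta*V_eff = h - c - ln g,   h = g - 1
        Ornstein-Zernike:  c_hat(k) = h_hat(k) / (1 + rho*h_hat(k))
    """
    k = np.linspace(0.05, kmax, nk)
    h = g - 1.0
    hk = radial_ft(r, h, k)
    ck = hk/(1.0 + rho*hk)
    c = radial_ift(k, ck, r)
    safe_g = np.where(g > 1e-10, g, 1.0)
    return np.where(g > 1e-10, h - c - np.log(safe_g), np.inf)
```

In the dilute limit $c \approx h$, so the procedure reduces to $\beta V^{\text{eff}} = -\ln g$, which provides a convenient consistency check.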
Subtracting the long-range Coulombic potential $V^{\text{LR}}_{ij}(r)$ (which depends on the dielectric constant of the solvent) from $V^{\text{eff}}_{ij}(r)$, we obtain the short-range contribution $V^{\text{SR}}_{ij}(r)$ to the effective potentials. These are given in Fig. \[fig:1\] (species 1 and 2 refer to Na$^+$ and Cl$^-$ free ions, respectively). All the short-range potentials exhibit oscillations corresponding to the solvent layering between the ions, but this effect is particularly important for the cation-anion interaction: a considerable potential barrier ($\gtrsim 2k_{\text{B}}T$) separates the first two attractive wells. To serve as a reference, Monte Carlo (MC) simulations were performed with these effective potentials; a comparison between MD and MC RDF is also provided in Fig. \[fig:1\]. The excellent agreement between both sets of RDF validates the HNC inversion procedure [@Lyubartsev02], and allows us to compute all ion thermodynamic properties through implicit solvent MC simulations.
![\[fig:1\]Effective McMillan-Mayer short-range pair potentials extracted from explicit solvent simulations using the HNC closure. (a) Cation anion, (b) cation cation, (c) anion anion, (d) cation anion RDF obtained from explicit solvent MD and implicit solvent MC simulations.](figure1)
The second stage of our coarse-graining procedure consists in applying LPT, in order to deduce the best analytical model of electrolyte solutions which reproduces this molecular description. The principle of LPT is to describe the properties of a given system in terms of those of a well known reference system, with the difference between them treated as a perturbation in the reference potential. Assuming pairwise additive potentials, $V_{ij} = V_{ij}^{(0)} + \varDelta V_{ij}$, a first-order truncated expression for the free energy density of the system $\beta f_v$ is obtained, $$\label{eqn:w1}
\beta f_v \leq \beta f_v^{(0)} + \frac{1}{2}\beta\sum_{i,j}\rho_i\rho_j\int\text{d}{{\ensuremath{\bm{r}}}}\, g_{ij}^{(0)}(r) \varDelta V_{ij}(r)$$ which depends only on the free-energy density $f_v^{(0)}$ and RDF $g^{(0)}$ of the reference fluid, with $\beta = (k_{\mathrm{B}}T)^{-1}$ and $\rho_i$ the concentration of species $i$. The Gibbs-Bogoliubov inequality [@Hansen] ensures that the right-hand side of Eq. (\[eqn:w1\]) is a rigorous upper bound. Once a reference system has been chosen, the expression on the right-hand side of Eq. (\[eqn:w1\]) must be minimized with respect to the parameters defining the reference. This procedure yields the best first-order approximation to the free energy of the system under consideration.
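The minimization itself can be illustrated with a deliberately simplified one-component toy (hard-sphere reference with the Carnahan-Starling free energy, a step-function stand-in for $g^{(0)}$, and a Lennard-Jones-like perturbation in units of $k_{\mathrm{B}}T$; all numbers are illustrative, and this is not the charged MSA reference used below):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def betaF_bound(sigma, rho, beta_dV):
    """First-order (Gibbs-Bogoliubov) upper bound on beta*F_ex/N for a
    hard-sphere reference of diameter sigma, using the Carnahan-Starling
    free energy and the crude approximation g_HS(r) ~ Theta(r - sigma)."""
    eta = np.pi/6.0*rho*sigma**3                 # packing fraction
    if eta >= 0.5:                               # keep the reference fluid physical
        return np.inf
    f_ref = eta*(4.0 - 3.0*eta)/(1.0 - eta)**2   # Carnahan-Starling
    r = np.linspace(sigma, sigma + 10.0, 4000)
    f_pert = 0.5*rho*np.trapz(4.0*np.pi*r**2*beta_dV(r), r)
    return f_ref + f_pert

# Lennard-Jones-like perturbing potential in units of kT (illustrative)
beta_dV = lambda r: 4.0*(r**-12 - r**-6)
rho = 0.4
res = minimize_scalar(lambda s: betaF_bound(s, rho, beta_dV),
                      bounds=(0.8, 1.2), method='bounded')
sigma_opt = res.x   # best hard-sphere diameter, in LJ units
```

The soft repulsive core pushes the optimal diameter up while the reference free energy pushes it down, so the bound has an interior minimum, which is exactly the trade-off exploited in the minimization described above.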
For a system of charged particles in solution, the natural reference is the PM, defined in terms of the charge and diameter ($\sigma_i$) of each species. In this case, the perturbing potentials are just the short-range effective potentials computed above ($\Delta V_{ij} = V^{\text{SR}}_{ij}$). We use the MSA [@Blum1] solution to the PM, since it provides analytical expressions for both the free energy and the RDF. The perturbation term is evaluated using an exponential approximation to the RDF obtained within the MSA, $g(r)=\exp{\left[g_{\rm MSA}(r)-1\right]}$, which removes any unphysical negative regions and improves the comparison with HNC calculations.
![\[fig:2\](Color online) (a) Osmotic coefficient $\Phi$ in the McMillan-Mayer frame of reference. (diamond) MC simulations, (dot dashed) MSA2, (dot) Debye Hückel Limiting law (DHLL), (cross) experiments (Ref. [@Lobo] with the McMillan-Mayer to Lewis Randall conversion). (b) Minimization diameters. (dot dashed) MSA2 and (diamond) MSA-fit.](figure2)
We first used LPT for a two-component system (Na$^+$ and Cl$^-$ free ions) within the MSA (model MSA2), for concentrations ranging from 0.1 to [$2.0\,\text{mol\,l}^{-1}$]{}. The minimization leads to almost constant diameters on the whole range of concentration: $\sigma_1=3.67$ Å and $\sigma_2=4.78$ Å. As shown in Fig. \[fig:2\], these parameters yield osmotic coefficients close to MC calculations only at very low concentration, i.e., $c\leq {\ensuremath{0.1\,\text{mol\,l}^{-1}}}$ (experimental values are given for indicative purposes only, since a *perfect* model will exactly match the MC results). For molar solutions, the LPT results differ considerably from MC calculations. This discrepancy can easily be understood by comparing the diameters found within the MSA2 calculation with the effective potentials given in Fig. \[fig:1\]. The anion/cation contact distance obtained within the MSA2 calculation is $4.2$ Å, which lies in the region of the second minimum of the effective potential and corresponds to the situation where there is a single layer of water molecules between the ions. The first minimum of the potential, which corresponds to the contact ion pair (CIP), is thus completely ignored by the MSA2 calculation. If the MSA diameters are instead fitted directly to reproduce the MC osmotic pressure, much smaller values are obtained. These MSA-fit hydrated diameters, which are compared to the MSA2 diameters in the bottom part of Fig. \[fig:2\], are averages of the CIP and the solvent-separated ion pair.
![\[fig:3\] Effective pair potentials derived for MSA3 and BIMSA3. (a) Cation anion (dashed line: without taking the pair into account), (b) pair cation, (c) pair anion, and (d) pair pair. The internal potential of the pair $\beta{\ensuremath{\widetilde{V}_{\text{int}}}}(r)$ is set equal to $\beta V^{\text{eff}}_{ij}(r)$ for distances less than 4 Å.](figure3)
To overcome this difficulty, we have explicitly introduced the CIP in our model (species 3). Straightforward calculations, based on a characteristic-function formalism, allow us to define an equivalent model in which the free ions and the CIP are explicitly taken into account [@Ciccotti84; @Dufreche03bis]. We apply this formalism by defining a pair as an anion and a cation at a distance less than 4 Å, which corresponds to the position of the effective potential maximum. The interaction between free, like charges in this new system remains unchanged, and the cation-anion interactions are easily approximated by extrapolating the original potential at the barrier separating pairs from free ions (as shown in Fig. \[fig:3\]). We assume that the interaction potential is averaged over the rotational degrees of freedom of the CIP and thus pairwise additive. Hereafter, the quantities referring to such a three-component model are written with a tilde symbol. The short-range potentials involving the pair can be derived, in the infinite dilution limit, from an average of the contributing ion interactions. In Fourier space,
\[eqn:pot\] $$\begin{aligned}
{\ensuremath{\widetilde{V}_{3i}}}^{\text{SR}}({{\ensuremath{\bm{k}}}}) &= {\ensuremath{\widetilde{w}_{}}}({{\ensuremath{\bm{k}}}}/2)\bigl[{\ensuremath{V}_{1i}}^{\text{SR}} + {\ensuremath{V}_{2i}}^{\text{SR}}\bigr]({{\ensuremath{\bm{k}}}}),\quad i=1,2 \\
{\ensuremath{\widetilde{V}_{33}}}^{\text{SR}}({{\ensuremath{\bm{k}}}}) &= {\ensuremath{\widetilde{w}_{}}}({{\ensuremath{\bm{k}}}}/2)^2\bigl[{\ensuremath{V}_{11}}^{\text{SR}} + {\ensuremath{V}_{22}}^{\text{SR}} + 2{\ensuremath{V}_{12}}^{\text{SR}}\bigr]({{\ensuremath{\bm{k}}}})
\intertext{where ${\ensuremath{\widetilde{w}_{}}}({{\ensuremath{\bm{r}}}})$ is the pair probability distribution}
{\ensuremath{\widetilde{w}_{}}}({{\ensuremath{\bm{r}}}}) &= {K}_{0}^{-1}e^{-\beta {\ensuremath{\widetilde{V}_{\text{int}}}}(r)}\end{aligned}$$
${\ensuremath{\widetilde{V}_{\text{int}}}}(r)$ is the internal part of the pair potential (see Fig. \[fig:3\]), and $K_0$ is the association constant, defined as $${K}_0 = \int_0^\infty\text{d}r\, 4\pi r^2 e^{-\beta {\ensuremath{\widetilde{V}_{\text{int}}}}(r)} = 0.43~\text{L}\,\text{mol}^{-1}. \label{eqn:k0}$$
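Numerically, Eq. (\[eqn:k0\]) is a one-dimensional quadrature over the internal pair potential. A sketch, using a square-well $\widetilde{V}_{\text{int}}$ chosen so the result can be checked analytically (not the actual NaCl potential):

```python
import numpy as np

ANGSTROM3_TO_L_PER_MOL = 6.02214076e23*1e-27   # A^3 per particle -> L/mol

def association_constant(r, beta_Vint):
    """K0 = int 4*pi*r^2 exp(-beta*Vint(r)) dr  (Eq. k0), in A^3.
    The pair definition (r < 4 A in the text) enters through the range of
    the r grid, or equivalently through beta_Vint diverging beyond it."""
    return np.trapz(4.0*np.pi*r**2*np.exp(-beta_Vint), r)

# illustrative square well: beta*Vint = -2 for r < 4 A (infinite outside)
r = np.linspace(0.0, 4.0, 4001)
K0_A3 = association_constant(r, np.full_like(r, -2.0))
K0_L_mol = K0_A3*ANGSTROM3_TO_L_PER_MOL
```

For the square well, $K_0 = (4\pi/3)R^3 e^{\beta\varepsilon}$, which provides a direct check on the quadrature.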
The excess free-energy density of the original system $\beta{\ensuremath{f}_{v}}^{\text{ex}}$ is that of the three component mixture $\beta{\ensuremath{\widetilde{f}_{v}}}^{\text{ex}}$ plus a correction term $$\beta{\ensuremath{f}_{v}}^{\text{ex}} = \beta{\ensuremath{\widetilde{f}_{v}}}^{\text{ex}} - {\ensuremath{\widetilde{\rho}_{3}}}\ln{K}_0 \label{eqn:free},$$ which is due to the change in standard chemical potential between the two-component and three-component models. It should be noted that the fraction of pairs is now an additional parameter in the minimization scheme, which serves to ensure chemical equilibrium. Within this representation, the pair can be modeled as a hard sphere (MSA3) or as a dumbbell-like CIP (BIMSA3) [@Blum95]. Since we have no additional information, we consider only symmetric dumbbells. Furthermore, since analytic expressions for the RDF within the BIMSA are not known, we approximate the dumbbell as a hard sphere when computing the perturbation term (this is not necessary for the reference term, since an expression for the free energy is available). Let ${\ensuremath{\widetilde{\sigma}_{c}}}$ be the diameter of the cation (anion) within the dumbbell; the diameter of the hard sphere representing this dumbbell is then taken to be ${\ensuremath{\widetilde{\sigma}_{3}}}=\frac{4\sqrt{2}}{\pi}{\ensuremath{\widetilde{\sigma}_{c}}}$[^1].
Using these two reference systems, the three-component MSA3 and BIMSA3, we obtain results in much better agreement with the MC simulations, as shown in Fig. \[fig:4\]. The diameters obtained for species 1, 2, and 3 are 3.65, 4.79, and 5.76 Å for MSA3 and 3.69, 4.75, and 6.19 Å for BIMSA3. The free ion diameters are similar for MSA2, MSA3, and BIMSA3. The pair diameter is smaller when modeled as a hard sphere (MSA3) than when modeled as a dumbbell (BIMSA3). At high concentration (about [$1\,\text{mol\,l}^{-1}$]{}), the MSA3 overestimates the free energy, because the excluded volume repulsion becomes too important for the pairs to be represented as hard spheres. The BIMSA3 model is the closest to the MC simulation results. It is worth noting that even at the lowest concentration considered, the fraction of pairs (shown in the inset of Fig. \[fig:4\]), although less than 5%, has a non-negligible effect on the thermodynamics of the system.
![\[fig:4\](Color online) Excess free-energy density $\beta f^{\text{ex}}_v$ as a function of the square root of the concentration $\sqrt{c}$. (diamond) MC simulations, (dot dashed) MSA2, (dashed) MSA3, (solid) BIMSA3, (dot) DHLL, and (cross) experiments. The inset gives the fraction of pairs (MSA3, BIMSA3) as a function of $\sqrt{c}$.](figure4)
![\[fig:5\](Color online) RDF obtained from MC simulations (diamond), BIMSA3 (solid line), and MSA-fit (dot dashed) at two concentrations.](figure5)
This procedure also provides an accurate description of the structure over the whole range of concentrations. A development similar to the one that leads to Eq. (\[eqn:pot\]) derives the average unpaired RDF from the corresponding paired quantities: $$\begin{aligned}
\rho_i\rho_j{\ensuremath{g}_{ij}}({{\ensuremath{\bm{k}}}}) &= {\ensuremath{\widetilde{\rho}_{3}}}{\ensuremath{\widetilde{w}_{}}}({{\ensuremath{\bm{k}}}})\left(1-\delta_{ij}\right) +{\ensuremath{\widetilde{\rho}_{i}}}{\ensuremath{\widetilde{\rho}_{j}}}{\ensuremath{\widetilde{g}_{ij}}}({{\ensuremath{\bm{k}}}})\notag\\
&+ {\ensuremath{\widetilde{\rho}_{3}}}{\ensuremath{\widetilde{w}_{}}}({{\ensuremath{\bm{k}}}}/2)\bigl[{\ensuremath{\widetilde{\rho}_{i}}}{\ensuremath{\widetilde{g}_{3i}}} + {\ensuremath{\widetilde{\rho}_{j}}}{\ensuremath{\widetilde{g}_{3j}}}\bigr]({{\ensuremath{\bm{k}}}})\\
&+ {\ensuremath{\widetilde{\rho}_{3}}}^{\,2}\left[{\ensuremath{\widetilde{w}_{}}}({{\ensuremath{\bm{k}}}}/2)\right]^2{\ensuremath{\widetilde{g}_{33}}}({{\ensuremath{\bm{k}}}})\notag\end{aligned}$$ The RDF obtained within BIMSA3 are compared with the MC and MSA-fit results in Fig. \[fig:5\]. Our BIMSA3 model accounts for the strong molecular peak of the CIP and provides the correct distances of minimal approach; whereas the naive MSA-fit procedure ignores the former and gives poor estimates for the latter. At larger separations, the BIMSA3 results do not reproduce the oscillations observed in the MC simulations, but the corresponding energy oscillations in the effective potentials are less than $k_{\mathrm{B}}T$. In addition, the perturbation term of the BIMSA3 appears to be negligible compared to the reference term for concentrations less than [$1\,\text{mol\,l}^{-1}$]{}. The perturbation can then be omitted to obtain a fully analytical theory, determined by the hard sphere diameters and the pair fraction given by LPT; with the free energy and the RDF given in terms of the BIMSA and MSA solutions, as described above. While the procedure we have followed uses two different approximations for the reference and perturbation terms (MSA vs BIMSA), these are known to be accurate for the systems under consideration and do not appear to be inconsistent with each other.
To conclude, we have combined MD simulations with LPT to construct simple models of electrolyte solutions which account for the molecular nature of the solvent. The final result is fully analytical and it yields the thermodynamic and structural properties of the solution, in agreement with the original molecular description. The methodology can in principle be adapted to any molecular description of the system (for example, MD simulations with interaction potentials that account for polarization effects, or Car-Parrinello MD simulations), as long as the ion-ion RDF are known. It can also be generalized to study interfaces. The method therefore appears to be a promising approach toward the description of the specific effects of ions, especially for complex systems whose modeling requires an analytic solution.
The authors are particularly grateful to Werner Kunz for fruitful discussions.
[^1]: The average contact distance between a symmetric dumbbell and an infinite plane at $\beta=0$.
---
author:
- 'S. Graser'
- 'P. J. Hirschfeld'
- 'T. Kopp'
- 'R. Gutser'
- 'B. M. Andersen'
- 'J. Mannhart'
title: 'What limits supercurrents in high temperature superconductors? A microscopic model of cuprate grain boundaries'
---
To explain the exponential dependence of the critical current $J_c$ on the misorientation angle $\alpha$, Chaudhari and collaborators [@Chaudhari] introduced several effects, particular to high-temperature superconductor (HTS) grain boundaries (GB), which can influence the critical current. First, a variation with angle can arise from the relative orientation of the $d$-wave order parameters pinned to the crystal lattices on either side of the boundary. This scenario was investigated in detail by Sigrist and Rice [@SigristRice]. However, such a model cannot explain the exponential suppression of the critical current over the full range of misorientation angles. Secondly, dislocation cores, whose density grows with increasing angle, can suppress the total current. A model assuming insulating dislocation cores which nucleate antiferromagnetic regions and destroy superconducting order was studied by Gurevich and Pashitskii [@Gurevich]. However, this model fails for grain boundary angles beyond approximately 10$^\circ$, when the cores start to overlap. Finally, variations of the stoichiometry in the grain boundary region, such as in the oxygen concentration, may affect the scattering of carriers and consequently the critical current. Stolbov and collaborators [@Stolbov] as well as Pennycook and collaborators [@Pennycook] have examined the bond length distribution near the grain boundary and calculated the change in the density of states at the Fermi level, or the change in the Cu valence, respectively. In the latter work the authors used the reduced valences to define an effective barrier near the boundary whose width grows linearly with misorientation angle.
A critical examination of the existing models shows that the difficulty of the longstanding HTS “grain boundary problem" arises from the multiple length scales involved: atomic scale reconstruction of the interface, the electrostatic screening length, the antiferromagnetic correlation length, and the coherence length of the superconductor. Thus it seems likely that only a multiscale approach to the problem can succeed.
Our goal in this paper is to simulate, in the most realistic way possible, the nature of the actual grain boundary in a cuprate HTS system, in order to characterize the multiple scales which cause the exponential suppression of the angle dependent critical current. To achieve this goal we proceed in a stepwise fashion, first simulating the atomic structure of realistic YBCO grain boundaries and assuring ourselves that our simulations are robust and duplicate the systematics of actual grain boundaries. Subsequently we construct an effective disordered tight-binding model, including $d$-wave pairing, whose parameters depend on the structures of the simulated grain boundaries in a well-defined way. Thus for any angle it is possible to calculate the critical current; then, for a given pairing amplitude (reasonably well known from experiments on bulk systems), the form of $J_c(\alpha)$ and its absolute magnitude are calculated.
![[**Schematic of an HTS symmetric grain boundary.**]{} The misorientation angle $\alpha$ and the orientation of the $d$-wave order parameters are indicated.[]{data-label="schematic"}](Fig1){width="70.00000%"}
We simulate YBCO grain boundaries by a molecular dynamics (MD) procedure which has been shown to reproduce the correct structure and lattice parameters of the bulk YBa$_2$Cu$_3$O$_{7-\delta}$ crystal [@Zhang], and adapt techniques which were successfully applied to twist grain boundaries in monocomponent solids [@Phillpot]. We only sketch the procedure here, and postpone its details to the supplementary information and a longer publication. The method uses an energy functional within the canonical ensemble $$H=\frac{1}{2}\sum_{i=1}^N \sum_{\alpha=1}^3 m_i \dot{r}_{i,\alpha}^2
+ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1,j\neq i}^{N}U(r_{ij}),
\label{MD}$$ where $m_i$ is the mass of the ion and $U(r_{ij})$ is the effective potential between ions, taken to be of the form $U(r_{ij})=\Phi(r_{ij})+V(r_{ij})$. Here $V(r)$ is the screened Coulomb interaction $ V(r)=\pm e^{-\kappa
r}{Z^{2}e^{2}}/({4\pi\epsilon_{0}}{r})$ with screening length $\kappa^{-1}$, and $\Phi(r)$ is a short range Buckingham potential $\Phi(r)=A\exp(-r/\rho)-C/r^{6}$. We take the parameters $A$, $\rho$ and $C$ from Ref. , and the initial lattice constants are $a=3.82$ Å, $b=3.89$ Å and $c=11.68$ Å. To construct a grain boundary with well defined misorientation angle, we must fix the ion positions on both sides far from the boundary, and ensure that we have a periodic lattice structure along the grain boundary and also along the $c$-axis direction. Therefore we apply periodic boundary conditions in the molecular dynamics procedure in the direction parallel to the GB and also in the $c$-axis direction. In the direction perpendicular to the GB only atomic positions with a distance smaller than six lattice constants from the GB are reconstructed. This method restricts our consideration to grain boundary angles that allow a commensurate structure parallel to the GB, e.g. angles $\alpha=2 \cdot \arctan[{N_{1}a}/({N_{2}b})],~ N_{1},N_{2}\in\mathbf{N}$ [@gbangles].
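The pair energy $U(r_{ij})$ entering Eq. (\[MD\]) can be written down directly; the parameter values below are illustrative placeholders, not the fitted Buckingham parameters used in the simulations:

```python
import numpy as np

E2_OVER_4PIEPS0 = 14.3996   # e^2/(4*pi*eps0) in eV*Angstrom

def pair_potential(r, Zi, Zj, A, rho_b, C, kappa):
    """U(r) = Phi(r) + V(r) for one ion pair, in eV (r in Angstrom):
        Phi(r) = A*exp(-r/rho_b) - C/r**6                (Buckingham)
        V(r)   = Zi*Zj*e^2/(4*pi*eps0)*exp(-kappa*r)/r   (screened Coulomb)
    The +/- sign of the Coulomb term is carried by the formal charges Zi, Zj."""
    phi = A*np.exp(-r/rho_b) - C/r**6
    v = Zi*Zj*E2_OVER_4PIEPS0*np.exp(-kappa*r)/r
    return phi + v

# a cation-anion pair with illustrative parameters
r = np.linspace(0.6, 8.0, 2000)
u = pair_potential(r, Zi=2.0, Zj=-2.0, A=1000.0, rho_b=0.3, C=0.0, kappa=0.2)
```

An unlike-charged pair then shows the generic shape required for a stable reconstruction: hard repulsion at short range and a bound minimum near the equilibrium bond length.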
![[**Top view of a calculated (410) grain boundary.**]{} Here only the Y and the CuO$_2$ layers are shown. The dots show the position of Y ions (magenta), Cu ions (yellow), and O ions (red). Structural units are indicated by solid black lines. For this particular angle we find a sequence of the form A(A/2)B\#(A/2)B, in agreement with the experimental results given in Table 1 of Ref. (for notation see this reference).[]{data-label="410"}](Fig2){width="80.00000%"}
An important step to be taken before starting the reconstruction of the GB is its initial preparation. Since we use a fixed number of ions at the GB within the molecular dynamics algorithm, we have to initialize it with the correct number of ions. If we start with two perfect but rotated crystals on both sides of the interface, in which all ions in the half space behind the imaginary boundary line are cut away, we find that several ions are unfavorably close to each other. We then have to use a set of selection rules to replace two such ions by a single one at the grain boundary. These selection rules have to be carefully chosen for each type of ion, since they determine how many ions of each type are present in the grain boundary region; the rules are detailed in the supplementary information. Ultimately they should be confirmed by a grand canonical MD procedure. However, such a procedure is technically still not feasible for a multicomponent system as complex as the YBCO GB. While the selection rules are [*ad hoc*]{} in nature, we emphasize that they are independent of misorientation angle, and reproduce very well the experimental TEM structures [@Pennycook].
With the initial conditions established, the MD equations of motion associated with Eq. \[MD\] are iterated until all atoms are in equilibrium. As an example, we show in Fig. \[410\] the reconstructed positions of Y, Cu, and O ions for a (410) boundary. We emphasize that the MD simulation is performed for [*all*]{} the ions in the YBCO full 3D unit cells of two misoriented crystals except for a narrow “frame" consisting of ions that are fixed to preserve the crystalline order far from the grain boundary. The sequence of typical structural units identified in the experiments [@Pennycook] is also indicated and we find excellent agreement.
We next proceed to construct an effective tight-binding model which is restricted to the Cu sites, the positions of which were determined through the algorithm described above. We calculate hopping matrix elements $t_{ij}$ of charge carriers (holes) up to next nearest neighbor positions of Cu ions. The Slater-Koster method is used to calculate the direction-dependent orbital overlaps of Cu-$3d$ and O-$2p$ orbitals [@SlaterKoster; @Harrison]. The effective hopping between Cu positions is a sum of direct orbital overlaps and hopping via intermediate O sites, where the latter is calculated in 2nd order perturbation theory. For the homogeneous lattice, these parameters agree reasonably well with the numbers typically used in the literature for YBCO. Exact values and details of the procedure are given in the supplementary information. Results for a (410) grain boundary are shown in Fig. \[tightbinding 410\]. Note that the largest hopping probabilities across the grain boundary are associated with the threefold coordinated Cu ions which are close to dislocation cores. The inhomogeneities introduced through the distribution of hopping probabilities along the boundary induce scattering processes of the charge carriers and consequently contribute to an “effective barrier" at the grain boundary. We note that the angular dependence of the critical current $J_c(\alpha)$ is not directly related to the changes in averaged hopping parameters observed for different misalignment angles. In fact, we found that the variation of the hopping probabilities with boundary angle cannot by itself account for the exponential dependence of $J_c(\alpha)$ over the whole range of misalignment angles.
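A stripped-down version of this effective-hopping construction, ignoring the Slater-Koster angular factors and assuming a Harrison-style distance scaling $t_{pd} \propto d^{-7/2}$ ($t_0$ and the charge-transfer energy $\Delta_{pd}$ are illustrative choices, not the values given in the supplementary information):

```python
import numpy as np

def t_pd(d, t0=1.3, d0=1.9, alpha=3.5):
    """Cu(3d)-O(2p) hopping with Harrison-style scaling t0*(d0/d)**alpha
    (t0 in eV at the bulk Cu-O distance d0, distances in Angstrom)."""
    return t0*(d0/d)**alpha

def t_eff(cu_i, cu_j, oxygens, delta_pd=3.0):
    """Effective Cu-Cu hopping via intermediate O sites in 2nd-order
    perturbation theory: t_ij = sum_O t_pd(d_iO)*t_pd(d_jO)/delta_pd.
    Angular (Slater-Koster) factors are omitted in this sketch."""
    cu_i, cu_j = np.asarray(cu_i, float), np.asarray(cu_j, float)
    t = 0.0
    for o in oxygens:
        o = np.asarray(o, float)
        t += t_pd(np.linalg.norm(cu_i - o))*t_pd(np.linalg.norm(cu_j - o))/delta_pd
    return t
```

Stretched or broken Cu-O-Cu paths at the boundary then translate directly into suppressed $t_{ij}$, consistent with the trend described above.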
![[**Tight-binding model for the CuO$_2$ plane.**]{} Hopping values between the Cu ions calculated from the interatomic matrix elements for a (410) grain boundary (a) and for a (710) grain boundary (b). The line thickness shows the hopping amplitudes and the color and size of the copper sites illustrate the on-site potential.[]{data-label="tightbinding 410"}](Fig3){width="95.00000%"}
The structural imperfection at the grain boundary will necessarily lead to charge inhomogeneities that contribute, in much the same way as the reduced hopping probabilities, to the effective barrier that blocks the superconducting current over the grain boundary. We include these charge inhomogeneities in the calculations as on-site potentials on the Cu sites in the effective Hamiltonian. To accomplish this we utilize the method of valence bond sums. The basic idea is to calculate the bond valence of a cation by $$V_i=\sum_j\exp\left(\frac{r_{0}-r_{ij}}{B}\right)$$ where $j$ runs over all neighboring anions, in our case the neighboring negatively charged oxygen ions. The parameter $B=0.37$ Å is a universal constant in the bond valence theory, while $r_0$ is different for all cation-anion pairs and also depends on the formal integer oxidation state of the cation (the values are listed in Ref. ). Strong deviations from the formal valence reveal strain or even an incorrect structure. This procedure is straightforward in the case of the Y$^{3+}$ and Ba$^{2+}$ ions, while it is slightly more complicated for the Cu ions, because they have more than one formal integer oxidation state [@Brown; @Chmaissem]. We show in Fig. \[Cu\_charges\] the distribution of charges obtained at the (410) grain boundary. We also calculate the oxidation state of the oxygen ions by similar methods. Here, charge neutrality is ensured because the already determined cation valences are used.
![[**Charging of the CuO$_4$ squares.**]{} a) Charge distribution on copper (yellow/green) and oxygen (red) sites at a 410 boundary. The diameter of the circles is a measure for positive (copper) or negative (oxygen) charge, as determined by the bond valence analysis. The color of the copper sites represents the charging of the corresponding CuO$_4$ squares as described by Eq. \[CuO4charge\], with green circles referring to a positive charge compared to the bulk charge (see color scale). The transparent red circles represent the oxygen contributions to the CuO$_4$ charge. b) Plot of average charge on squares vs. distance from grain boundary (red points), and fit by a Lorentzian (blue line).[]{data-label="Cu_charges"}](Fig4a "fig:"){width="0.63\columnwidth"} ![[**Charging of the CuO$_4$ squares.**]{} a) Charge distribution on copper (yellow/green) and oxygen (red) sites at a 410 boundary. The diameter of the circles is a measure for positive (copper) or negative (oxygen) charge, as determined by the bond valence analysis. The color of the copper sites represents the charging of the corresponding CuO$_4$ squares as described by Eq. \[CuO4charge\], with green circles referring to a positive charge compared to the bulk charge (see color scale). The transparent red circles represent the oxygen contributions to the CuO$_4$ charge. b) Plot of average charge on squares vs. distance from grain boundary (red points), and fit by a Lorentzian (blue line).[]{data-label="Cu_charges"}](Fig4b "fig:"){width="0.36\columnwidth"}
In the next step we account for the effect of broken Cu-O bonds at the grain boundary that give rise to strong changes of the electronic configuration of the Cu atoms as well as of the electronic screening of charges, as shown in first principle calculations [@Stolbov]. Unfortunately there is no straightforward way to include these changes in the electronic configuration into a purely Cu-based tight-binding Hamiltonian. On the other hand we know that the additional holes doped into the CuO$_2$ planes form Zhang-Rice singlets residing on a CuO$_4$ square rather than on a single Cu site [@ZhangRice], and are therefore affected not only by the charge of the Cu ion but also by the charge of the surrounding oxygens. Modelling this situation, we use a phenomenological potential to sum the Cu and the O charges to obtain an effective charge of the CuO$_{4}$ square. This effective charge is taken as $$Q_{\textrm{CuO}_4}(i)=Q_{\textrm{Cu}}(i)+A\sum_jQ_0(j)e^{-r_{ij}^2/\lambda^2},
\label{CuO4charge}$$ where $A$ and $\lambda$ are two constants chosen to yield a neutral Cu site if 4 oxygen atoms are close to the average Cu-O distance. Correspondingly, the energy cost of a Cu site that has only 3 close oxygen neighbors instead of all 4 neighbors, is strongly enhanced. Thus, the broken Cu-O bonds induce strong charge inhomogeneities in the “void” regions of the grain boundary, mainly described by pentagonal structural units, while Cu sites belonging to “bridge” regions with mostly quadrangular structural units have charge values close to their bulk values.
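Both ingredients, the bond-valence sum and the effective CuO$_4$ charge of Eq. (\[CuO4charge\]), reduce to a few lines; the values of $r_0$, $A$, and $\lambda$ below are illustrative placeholders rather than fitted constants:

```python
import numpy as np

B_BV = 0.37   # universal bond-valence constant (Angstrom)

def bond_valence(r_neighbors, r0):
    """V_i = sum_j exp((r0 - r_ij)/B) over the neighboring anions."""
    return float(np.sum(np.exp((r0 - np.asarray(r_neighbors))/B_BV)))

def cuo4_charge(q_cu, q_ox, r_ox, A, lam):
    """Effective CuO4 charge, Eq. (CuO4charge):
    Q = Q_Cu + A*sum_j Q_O(j)*exp(-r_j**2/lam**2)."""
    q_ox, r_ox = np.asarray(q_ox, float), np.asarray(r_ox, float)
    return q_cu + A*float(np.sum(q_ox*np.exp(-r_ox**2/lam**2)))

# choose A so a bulk CuO4 square (4 oxygens at d = 1.9 A) is neutral
d, lam = 1.9, 2.0
A = 2.0/(8.0*np.exp(-d**2/lam**2))
q_bulk = cuo4_charge(2.0, [-2.0]*4, [d]*4, A, lam)    # ~ 0
q_3fold = cuo4_charge(2.0, [-2.0]*3, [d]*3, A, lam)   # positive: one broken bond
```

With $A$ normalized so that a bulk CuO$_4$ square is neutral, a threefold-coordinated Cu carries a surplus charge of $+0.5$ in this toy normalization, the kind of charging that appears in the "void" regions of the boundary.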
Finally we have to translate the charge on the Cu sites (or better, CuO$_4$ squares) into effective on-site lattice potentials. The values of screening lengths in the cuprates, and particularly near grain boundaries, are not precisely known, but near optimal doping they are of order of a lattice spacing or less. We adopt the simplest approach and assume a 3D Yukawa-type screening with phenomenological length parameter $\ell$ of this order, and find a potential integrated over a unit cell $V_0$ of $$\frac{V_0}{\bar{q}}\approx \frac{4\pi\, a_0 \ell}{a^2}~\mathrm{Ryd}\approx 10~\mathrm{eV},$$ where $\bar{q}$ is the charge in units of the elementary charge, $a_{0}$ is the Bohr radius, $\ell$ is taken to be $2$ Å, and $a=4$ Å. Thus we find that a surplus charge of a single elementary charge integrated over a unit cell produces an effective potential of around $10$ eV. In the following we will use an effective potential of either 6 or 10 eV, reflecting the uncertainty in this parameter. The value of the effective potential will affect the scale of the final critical current.
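The order-of-magnitude estimate can be checked numerically: a surplus surface charge density $\bar{q}e/a^2$, screened with length $\ell$, is integrated over the plane using $\int e^{-r/\ell}/r\, \mathrm{d}^2r = 2\pi\ell$ (the constants are standard; the smeared-charge geometry is the simplification described in the text):

```python
import numpy as np

RYD_EV = 13.605693   # Rydberg in eV
A0 = 0.529177        # Bohr radius in Angstrom

ell, a, qbar = 2.0, 4.0, 1.0    # screening length, lattice constant (A), charge (e)

# closed form: V0 = 4*pi*a0*ell/a^2 * qbar * Ryd
v0_closed = 4.0*np.pi*A0*ell/a**2*qbar*RYD_EV

# direct check: V0 = (e^2/4*pi*eps0)*(qbar/a^2) * int e^{-r/ell}/r d^2r,
# with e^2/(4*pi*eps0) = 2*Ryd*a0 and d^2r = 2*pi*r dr
r = np.linspace(0.0, 60.0, 60001)
area_integral = 2.0*np.pi*np.trapz(np.exp(-r/ell), r)   # -> 2*pi*ell
v0_direct = 2.0*RYD_EV*A0*qbar/a**2*area_integral
```

Both routes give roughly 11 eV for $\ell = 2$ Å and $a = 4$ Å, reproducing the "around 10 eV" scale quoted above.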
To calculate the order parameter profile and the current across the grain boundary we solve the Bogoliubov-de Gennes mean field equations of inhomogeneous superconductivity self-consistently. The Hamiltonian is $$\hat{H}=\sum_i\epsilon_i\hat{n}_{i\sigma}-\sum_{ij\sigma}t_{ij}
\hat{c}_{i\sigma}^{\dagger}\hat{c}_{j\sigma}+\sum_{ij}\left(\Delta_{ij}
\hat{c}_{i\uparrow}^{\dagger}\hat{c}_{j\downarrow}^{\dagger}+\mathrm{h.c.}\right),
\label{BdGHamiltonian}$$ where the effective hopping parameters $t_{ij}$ are determined for a given grain boundary by the procedure described above and the onsite energies $\epsilon_i=u_i-\mu$ are a sum of the effective charge potentials $u_i$ and the chemical potential $\mu$. Performing a Bogoliubov transformation, we find equations for the particle and hole amplitudes $u_n$ and $v_n$ $$\sum_j\left(\begin{array}{cc}
H_{ij} & \Delta_{ij}\\
{\Delta}_{ij}^* &
-H_{ij}\end{array}\right)\left(\begin{array}{c}
u_n(j)\\
v_n(j)\end{array}\right)=E_n\left(\begin{array}{c}
u_n(i)\\
v_n(i)\end{array}\right)$$ with $$H_{ij}=\epsilon_i\delta_{ij}-t_{ij}.$$ The self-consistency equation for the $d$-wave order parameter is then $$\Delta_{ij}=\frac{V_{ij}}{2 N_{\textrm{sc}}}\sum_{k_y}
\sum_n \left[ u_n(r_i)v_n^*(r_j) f(-E_n) -v_n^*(r_i)u_n(r_j)f(E_n) \right],$$ where we use $N_{\textrm{sc}}$ supercells in the direction parallel to the GB and $k_y$ is the corresponding Bloch wave vector. We adjust the chemical potential $\mu$ to ensure a fixed carrier density in the superconducting leads corresponding to 15$\%$ hole doping. The definition of the $d$-wave pair potential $V_{ij}$ in the vicinity of the grain boundary is not straightforward since the bonds connecting a Cu to its neighbors are not exactly oriented perpendicular to each other. We use a model that ties the strength of the pairing on a given bond to the size of the hopping on the bond, as well as to the charge difference across it, as detailed in the supplementary information. The final results for the critical current are not sensitive to the exact model employed.
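The structure of this self-consistency loop can be illustrated with a deliberately simplified sketch: a one-dimensional open chain with an *on-site* $s$-wave gap, standing in for the bond-centred $d$-wave order parameter and supercell $k_y$ sum of our actual calculation; all parameter values below are illustrative only.

```python
import numpy as np

# Simplified self-consistent BdG loop: 1D open chain, on-site s-wave
# pairing. N, t, V, mu are illustrative, not parameters of our model.
N, t, V, mu = 24, 1.0, 2.0, 0.0
H0 = -t * (np.eye(N, k=1) + np.eye(N, k=-1)) - mu * np.eye(N)

delta = np.full(N, 0.5)                      # initial guess for the gap
for _ in range(200):
    HBdG = np.block([[H0, np.diag(delta)],   # particle-hole structure
                     [np.diag(delta), -H0]])
    E, W = np.linalg.eigh(HBdG)
    u, v = W[:N, :], W[N:, :]
    pos = E > 0                              # T = 0: keep only E_n > 0
    delta_new = V * np.sum(u[:, pos] * v[:, pos], axis=1)
    if np.max(np.abs(delta_new - delta)) < 1e-8:
        break
    delta = delta_new
print(delta[N // 2])   # converged gap in the middle of the chain
```

The full calculation replaces the on-site $\Delta_i$ by bond variables $\Delta_{ij}$ and adds the sum over $k_y$ and the $N_{\textrm{sc}}$ supercells, but the iterate-diagonalize-update structure is the same.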
![[**Supercurrent distribution.**]{} The current pattern in the vicinity of a (410) (a) and a (710) GB (b). The arrows only display the direction of the current, the red lines denote current flowing from left to right, blue lines denote current from right to left. The line thickness shows the current strength, while the point size and color of the Cu sites correspond to the on-site potential.[]{data-label="current"}](Fig5){width="95.00000%"}
With these preliminaries, the current itself can finally be calculated by imposing a phase gradient across the sample (see, e.g. Ref. for details) from the eigenfunctions $u_n,v_n$ and eigenvalues $E_n$ of the BdG equations for the grain boundary, $$\frac{j(r_i,r_j)}{e/\hbar} = -\frac{2it_{ij}}{N_\textrm{sc}}
\sum_{k_y} \sum_n \left[u_n^*(r_i)u_n(r_j)
f(E_n)+v_n(r_i)v_n^* (r_j)f(-E_n) -\mathrm{h.c.} \right].$$ The critical current $J_c$ is defined as the maximum value of the current as a function of the phase. In the tunnelling limit, when the barrier is large, this relationship is sinusoidal, so the maximum current occurs at a phase of $\pi/2$. However, for very low-angle grain boundaries we observe deviations from the tunnelling limit, i.e., higher-transparency junctions with non-sinusoidal current-phase characteristics, although for the parameters studied here these deviations are rather small.
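The two regimes can be illustrated with a toy current-phase relation, a sinusoidal tunnelling term plus a small second harmonic of the kind that appears for more transparent junctions; the coefficients $J_1$, $J_2$ are illustrative, not computed values.

```python
import numpy as np

# Toy current-phase relation J(phi) = J1 sin(phi) + J2 sin(2 phi).
phi = np.linspace(0, 2 * np.pi, 2001)

def jc(J1, J2):
    """Return (critical current, phase at which it occurs)."""
    J = J1 * np.sin(phi) + J2 * np.sin(2 * phi)
    k = np.argmax(J)
    return J[k], phi[k]

print(jc(1.0, 0.0))    # pure tunnelling limit: maximum at phase pi/2
print(jc(1.0, -0.3))   # higher transparency: maximum shifts past pi/2
```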
It is instructive to examine the spatial pattern of supercurrent flow across a grain boundary, which is far from simple, as illustrated in Fig. \[current\]. Along many bonds, even away from the boundary, the current flows backwards or runs in closed loops around the squares. The flow appears to be dominated by large contributions between the regions which resemble classical dislocation cores. In most of our simulated grain boundaries we do not observe true $\pi$-junction behavior, characterized by an overall negative critical current. To derive the total current density across a 2D cross section parallel to the grain boundary at $x=0$, we sum up all contributions $j(r_i,r_j)$ for which $x_i>0$ and $x_j<0$, where $x$ is the coordinate perpendicular to the interface and $x=0$ marks the boundary plane, and normalize by the period length of the grain boundary structure $p=a/\sin\alpha$.
![[**Angle dependence of the critical current.**]{} The critical current $J_c$ as a function of the grain boundary angle $\alpha$ for screening lengths $\ell = 1.2$ Å (a) and 2 Å (b). Here the red points denote experimental results for YBCO junctions taken from Ref. , the light blue crosses show theoretical results for differently reconstructed GBs, the dark blue crosses show averaged theoretical values, and the light red triangles show “hypothetical” $s$-wave results. The dashed red and blue lines are exponential fits to the experimental and theoretical data, respectively.[]{data-label="jc"}](Fig6){width="95.00000%"}
This calculation is in principle capable of providing the absolute value of the critical current. To accomplish this, we have to normalize the current per grain boundary length by the height of the crystal unit cell $c$ and multiply it by the number of CuO$_2$ planes per unit cell $N_\textrm{UC}$; for the YBCO compound under consideration, $c=11.7$ Å and $N_\textrm{UC}=2$: $$j_{c}(x_{0})=\frac{N_\textrm{UC}}{pc}\sum_{i<j,x_i<0<x_j}j(r_i,r_j).$$ To account for the difference of the calculated gap magnitude from its experimental value we multiply the current by a factor of $\Delta_\textrm{exp}/\Delta_0$, where $\Delta_\textrm{exp}$ is the experimentally measured order parameter and $\Delta_0$ is its self-consistently determined bulk value. We have checked that at low temperatures $T \ll T_c$ an approximately linear scaling of the critical current as a function of the order parameter holds true for all grain boundary angles.
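The unit bookkeeping of this normalization can be sketched as follows; the summed bond current is a placeholder value, not a simulation result.

```python
import math

# Converting a summed bond current per GB period into a 2D current density.
N_UC  = 2            # CuO2 planes per unit cell (YBCO)
c     = 11.7e-10     # c-axis height of the unit cell, m
a     = 3.82e-10     # in-plane lattice constant, m
alpha = math.radians(10.0)            # misorientation angle (example)
p     = a / math.sin(alpha)           # period length of the GB structure
I_sum = 1.0e-7       # hypothetical summed bond current crossing x = 0, A

j_c = N_UC * I_sum / (p * c)          # critical current density, A/m^2
print(j_c)
```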
In Fig. \[jc\], the critical current is plotted as a function of misorientation angle for the set of grain boundary junctions from (710) to (520) which we have simulated. All model parameters are fixed for the different junctions, except for a range of values affecting the initial conditions of the grain boundary reconstruction, which result in slightly different structures with the same misorientation angle $\alpha$. Intriguingly, the variability of our simulated junctions is quite similar to the variability of actual physical samples, as plotted in Fig. \[jc\]. For two different choices of the screening length $\ell$, we see that the dependence on misorientation angle is exponential. Since in our picture this parameter directly affects the strength of the barrier, it is natural that it should also affect the exponential decay constant, as the figure shows. The value of $\ell$ which gives the correct slope of the log plot yields a critical current that exceeds the experimental value by an order of magnitude. We speculate that the effect of strong correlations (see Ref. and references therein), not yet included in this theory, may account for this discrepancy, given that a suppression by an order of magnitude was already demonstrated for (110) junctions[@Andersen_AF_SC]. In addition, we show results for “hypothetical” $s$-wave junctions using the same model parameters as for the $d$-wave junctions: we simply replaced the bond-centered pair potential by an on-site pair potential, resulting in an isotropic $s$-wave state. Although its critical current is one order of magnitude larger, the $s$-wave junction still shows a similar exponential dependence on the grain boundary angle. We emphasize that this model does not reflect the situation in a “real” $s$-wave superconductor such as niobium or lead, which do not show an exponential angle dependence of the grain boundary current, since our model is based on the microscopic structure of a CuO$_2$ plane.
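The decay constant of such an exponential law, $J_c(\alpha)\sim J_0\, e^{-\alpha/\alpha_0}$, can be extracted by a straight-line fit in log space; the data below are synthetic, for illustration only, not our simulation results.

```python
import numpy as np

# Synthetic J_c(alpha) with decay constant 5 degrees and mild scatter.
alpha = np.array([8.0, 12.0, 16.0, 22.0, 27.0])           # degrees
jc = 1e7 * np.exp(-alpha / 5.0) * np.array([1.1, 0.9, 1.05, 0.95, 1.0])

# Linear fit of log(J_c) vs alpha recovers the decay constant.
slope, intercept = np.polyfit(alpha, np.log(jc), 1)
print(-1.0 / slope)   # recovered decay constant, close to 5 degrees
```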
Our multiscale analysis of the grain boundary problem of HTS suggests that the primary cause of the exponential dependence on misorientation angle is the charging of the interface near defects which resemble classical dislocation cores[@Gurevich], leading to a porous barrier in which weak links are distributed in a characteristic way that depends on the global properties of the interface at a given angle (structure of defects, density of dislocations, etc.). The $d$-wave order parameter symmetry and the nature of the atomic wave functions at the boundary, which modulate the hopping amplitudes, do not appear to be essential for the functional form of the angle dependence, although they cannot be neglected in a quantitative analysis. As such, we predict that this type of behavior may be observed in other classes of complex superconducting materials; very recently, a report of similar tendencies in ferropnictide grain boundary junctions has appeared[@Larbalestier]. It will be interesting to use this new perspective on the longstanding problem to try to understand how Ca doping of the grain boundaries is able to increase the critical current by large amounts[@Hammerl], and to explore other chemical and structural methods of accomplishing the same goal.
This work was supported by DOE grant DE-FG02-05ER46236 (PJH), and by the DFG through SFB 484 (SG, TK, RG, and JM) and a research scholarship (SG). We are grateful to Y. Barash for important early contributions to the project and we acknowledge fruitful discussions with A. Gurevich and F. Loder. PH would also like to thank the Kavli Institute for Theoretical Physics for support under NSF-PHY05-51164 during the writing of this manuscript. The authors acknowledge the University of Florida High-Performance Computing Center for providing computational resources and support that have contributed to the research results reported in this paper.
[00]{}
Hilgenkamp, H. & Mannhart, J. Grain boundaries in high-$T_c$ superconductors. [*Rev. Mod. Phys.*]{} [**74**]{}, 485 (2002).
Dimos, D., Chaudhari, P., Mannhart, J. & LeGoues, F. K. Orientation dependence of grain-boundary critical currents in YBa$_2$Cu$_3$O$_{7-\delta}$ bicrystals. [*Phys. Rev. Lett.*]{} [**61**]{}, 219 (1988).
Chaudhari, P., Dimos, D. & Mannhart, J. Critical Currents in Single-Crystal and Bicrystal Films. in [*Earlier and Recent Aspects of Superconductivity*]{} (eds Bednorz, J. G. & Müller, K. A.) 201-207 (Springer-Verlag, 1990).
Sigrist, M. & Rice, T. M. Paramagnetic Effect in High $T_c$ Superconductors - A Hint for $d$-Wave Superconductivity. [*J. Phys. Soc. Jpn.*]{} [**61**]{}, 4283 (1992); [*J. Low Temp Phys.*]{} [**95**]{}, 389 (1994).
Gurevich, A., & Pashitskii, E. A. Current transport through low-angle grain boundaries in high-temperature superconductors. [*Phys. Rev. B.*]{} [**57**]{}, 13878 (1998).
Stolbov, S. V., Mironova, M. K. & Salama, K. Microscopic origins of the grain boundary effect on the critical current in superconducting copper oxides. [*Supercond. Sci. Technol.*]{} [**12**]{}, 1071 (1999).
Pennycook, S. J. [*et al.*]{} The Relationship Between Grain Boundary Structure and Current Transport in High-Tc Superconductors. in [*Studies of High Temperature Superconductors: Microstructures and Related Studies of High Temperature Superconductors-II, Vol. 30*]{} (ed Narlikar, A. V.) Ch. 6 (Nova Science Publishers, 2000).
Andersen, B. M., Barash, Yu. S., Graser, S. & Hirschfeld, P. J. Josephson effects in $d$-wave superconductor junctions with antiferromagnetic interlayers. [*Phys. Rev. B*]{} [**77**]{}, 054501 (2008).
Zhang, X. & Catlow, C. R. A. Molecular dynamics study of oxygen diffusion in YBa$_{2}$Cu$_{3}$O$_{6.91}$. [*Phys. Rev. B*]{} [**46**]{}, 457 (1992).
Phillpot, S. R. & Rickman, J. M. Simulated quenching to the zero-temperature limit of the grand-canonical ensemble. [*J. Chem. Phys.*]{} [**97**]{}, 2651 (1992).
In the following we will call a GB with $N_{1}=1$, $N_{2}=4$, and therefore a GB angle of $\alpha=2 \cdot 0.24074\,\mathrm{rad}=27.58^\circ$, a symmetric (410) GB.
Slater, J. C. & Koster, G. F. Simplified LCAO Method for the Periodic Potential Problem. [*Phys. Rev.*]{} [**94**]{}, 1498 (1954).
Harrison, W. A. Electronic structure and the properties of solids. (Dover Publications, 1989).
Liu, P. & Wang, Y. Theoretical study on the structure of Cu(110)-p2$\times$1-O reconstruction. [*J. Phys.: Condens. Matter*]{} [**12**]{}, 3955 (2000).
Baetzold, R. C. Atomistic Simulation of ionic and electronic defects in YBa$_{2}$Cu$_{3}$O$_{7}$. [*Phys. Rev. B*]{} [**38**]{}, 11304 (1988).
Brown, I. D. A Determination of the Oxidation States and Internal Stress in Ba$_{2}$YCu$_{3}$O$_{x}$, $x$=6-7 Using Bond Valences. [*J. of Solid State Chem.*]{} [**82**]{}, 122 (1989).
Chmaissem, O., Eckstein, Y. & Kuper, C. G. The Structure and a Bond-Valence-Sum Study of the 1-2-3 Superconductors (Ca$_{x}$La$_{1-x}$)(Ba$_{1.75-x}$La$_{0.25+x}$)Cu$_{3}$O$_{y}$ and YBa$_{2}$Cu$_{3}$O$_{y}$. [*Phys. Rev. B*]{} [**63**]{}, 174510 (2001).
Zhang, F. C. & Rice, T. M. Effective Hamiltonian for the superconducting Cu oxides. [*Phys. Rev. B*]{} [**37**]{}, 3759 (1988).
Andersen, B. M., Bobkova, I., Barash, Yu. S. & Hirschfeld, P. J. $0-\pi$ transitions in Josephson junctions with antiferromagnetic interlayers. [*Phys. Rev. Lett.*]{} [**96**]{}, 117005 (2006).
Freericks, J. K. Transport in Multilayered Nanostructures. The Dynamical Mean-Field Theory Approach. (Imperial College Press, 2006).
Lee, S. [*et al.*]{} Weak-link behavior of grain boundaries in superconducting Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ bicrystals. [*not published*]{}, available at arXiv:0907.3741.
Hammerl, G. [*et al.*]{} Possible solution of the grain-boundary problem for applications of high-$T_c$ superconductors. [*Appl. Phys. Lett.*]{} [**81**]{}, 3209 (2002).
Schwingenschlögl, U. & Schuster, C. Quantitative calculations of charge-carrier densities in the depletion layers at YBa$_2$Cu$_3$O$_{7-\delta}$ interfaces. [*Phys. Rev. B*]{} [**79**]{}, 092505 (2009).
Alloul, H., Bobroff, J., Gabay, M., Hirschfeld, P. J. Defects in correlated metals and superconductors. [*Rev. Mod. Phys.*]{} [**81**]{}, 45 (2009).
Grain boundary reconstruction using molecular dynamics techniques
=================================================================
The first step in our multiscale approach to determining the critical current across a realistic tilt grain boundary (GB) is the modelling of the microscopically disordered region in the vicinity of the seam of the two rotated half crystals. Since the disorder introduced by the mismatch of the two rotated lattices extends far into the leads on both sides of the GB plane, simulations have to include a large number of atoms, and it is very difficult to perform them with high-precision [*ab initio*]{} methods such as density functional theory (DFT) [@Schuster]. Here we have employed a molecular dynamics technique to simulate the reconstruction of the ionic positions in the vicinity of the GB, starting from an initial setup containing a realistic stoichiometry of each of the different ions. In the following we present the details of the calculational scheme.
The initial setup of the grain boundary
---------------------------------------
The simplest way of modelling a tilt grain boundary is to stitch together two perfect crystals that are rotated relative to one another around an axis in the plane of the GB perpendicular to a lattice plane; atoms which cross into the region of space initially occupied by the other crystal are then eliminated (see Fig. \[GBsetup\] a). If we examine the structures constructed in this way, we find on the one hand that some ions are left unphysically close to each other, while on the other hand large “void” regions may remain. Since a molecular dynamics scheme employing an energy functional in the canonical ensemble does not allow for the creation and annihilation of ions in the GB region, these faults in the setup would persist during the reconstruction process. To improve the initial structures we develop a set of “selection rules”: (i) We introduce an overlap of the two crystals, extending them into a region behind the virtual GB plane, to prevent the creation of “void” regions. (ii) We replace two ions that are too close to each other by a single one at the GB. Due to the different ionic radii, multiplicities and internal positions, these “selection rules” have to be adjusted for every ion in the unit cell in order to reproduce the correct stoichiometry found in TEM experiments [@Pennycook]. In Table \[TabSelRules\] we show the maximum overlap of ionic positions behind the GB plane as well as the minimum distance allowed between two ions before they are replaced by a single ion at the GB. These values have proven to reproduce the experimentally observed GB structures for all misorientation angles examined within this work. An example of such a constructed GB is shown in Fig. \[GBsetup\] b).
                             Y$^{3+}$         Ba$^{2+}$                 Cu$^{2+}$ (CuO$_2$)       Cu$^{2+}$ (CuO)
  -------------------------- ---------------- ------------------------- ------------------------- -----------------
  ov \[$a$\]                 0.15             0.2                       0.15                      0.15
  $d_\mathrm{min}$ \[$a$\]   0.4              0.4                       0.42                      0.4

                             O$^{2-}$ (BaO)   O$^{2-}$ (CuO$_2$: $a$)   O$^{2-}$ (CuO$_2$: $b$)   O$^{-}$ (CuO)
  -------------------------- ---------------- ------------------------- ------------------------- -----------------
  ov \[$a$\]                 0.08             0.08                      0.09                      0.08
  $d_\mathrm{min}$ \[$a$\]   0.3              0.35                      0.3                       0.3
: The maximum overlap behind the GB plane and the minimum distance allowed between two ionic positions itemised by the different ion types. The distances are given in units of the lattice constant $a = 3.82$ Å. \[TabSelRules\]
![The different steps in the reconstruction of a symmetric (410) GB: (a) Two half crystals rotated and cut behind the virtual GB plane (dashed line), (b) Initial setup of the GB using the “selection rules” outlined in the text, (c) Reconstructed GB using molecular dynamics, (d) Identification of basic structural units[@Pennycook]. The blue line in (a) visualizes the classification of the ($mn0$) GB with $m=4$ and $n=1$. []{data-label="GBsetup"}](Fig_410.eps){width="70.00000%"}
The molecular dynamics procedure
--------------------------------
The GB structures determined by applying the “selection rules” described in detail in the previous section are now used to initialize the molecular dynamics process. For monocomponent solids, a zero-temperature quenching method has been successfully applied to the reconstruction of high-angle twist GBs [@Phillpot]. Since this method uses an energy functional within the grand-canonical ensemble, it allows, besides the movement of the atomic positions, also the creation and annihilation of atoms. For multicomponent systems like the complicated perovskite-type structures of the high-$T_c$ superconductors this method is not readily applicable. Therefore we choose a different approach, using an energy functional in the canonical ensemble with a fixed number of ions. Here we can write the Lagrangian as $$\mathcal{L} = \frac{1}{2}\sum_{i=1}^M \sum_{\alpha=1}^3 m_i \dot{r}_{i\alpha}^2
-\frac{1}{2}\sum_{i=1}^M \sum_{j=1,j\neq i}^M U(r_{ij}),$$ and the Euler-Lagrange equations follow as the equations of motion for the ions $$m_i \ddot{r}_{i,\alpha} = -\frac{1}{2}\sum_{j=1,j\neq i}^M
\frac{\partial U(r_{ij})}{\partial r_{i,\alpha}}.$$ One of the main tasks is now the correct choice of model potentials to ensure that the crystal structure of YBa$_2$Cu$_3$O$_7$ (YBCO) is correctly reproduced for a homogeneous sample. Here we use Born model potentials with long range Coulomb interactions and short range terms of the Buckingham form $$U(r_{ij})=\Phi(r_{ij})+V(r_{ij}).$$ For the Coulomb interaction we can write the potential as $$V(r)=\pm e^{-\kappa r}\frac{1}{4\pi\epsilon_{0}}\frac{Z^{2}e^{2}}{r},$$ where we have introduced a Yukawa-type cut-off with $\kappa=\frac{1}{3.4}$ Å$^{-1}$ to avoid the necessity to balance the long range Coulomb potentials of the different ionic charges by the introduction of a Madelung constant. For the short range Buckingham terms $$\Phi(r)=A\exp(-r/\rho)-C/r^6$$ we take the parameters $A$, $\rho$ and $C$ from molecular dynamics studies by Zhang and Catlow [@Zhang] leading to a stable YBCO lattice with reasonable internal coordinates of each atom (see Table \[TabBuckParam\]). In addition we use the lattice constants $a=3.82$ Å, $b=3.89$ Å, and $c=11.68$ Å when setting up the initial GB structure and we fix them in the leads far away from the GB. The CuO chains in the CuO layer are directed parallel to the $b$-axis direction (compare Fig. \[crystal\]).
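As an illustration of the resulting pair potential, the sketch below evaluates the screened Coulomb plus Buckingham interaction for an O$^{2-}$-Cu$^{2+}$ pair with the parameters of Table \[TabBuckParam\] ($A=3799.3$ eV, $\rho=0.243$ Å, $C=0$); the sign of the Coulomb term is handled via the product of the two ionic charges.

```python
import math

KE2   = 14.39964      # e^2/(4 pi eps0) in eV * Angstrom
kappa = 1.0 / 3.4     # Yukawa cut-off, 1/Angstrom

def pair_potential(r, Z1, Z2, A, rho, C):
    """Screened Coulomb V(r) plus Buckingham Phi(r), in eV (r in Angstrom)."""
    coulomb = Z1 * Z2 * KE2 * math.exp(-kappa * r) / r
    buckingham = A * math.exp(-r / rho) - C / r**6
    return coulomb + buckingham

# O(2-)-Cu(2+) pair at the planar bond distance: net attraction.
print(pair_potential(1.95, -2.0, 2.0, 3799.3, 0.243, 0.0))
# Same pair pushed far inside the bond length: Buckingham repulsion wins.
print(pair_potential(0.80, -2.0, 2.0, 3799.3, 0.243, 0.0))
```

The crossover from attraction at the equilibrium bond length to strong short-range repulsion is what stabilizes the ionic positions during the quench.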
[**A**]{} $A$ (eV) $\rho$ (Å) $C$ (eV Å$^6$)
--------------------- ------------ ------------ ---------------- --
O$^{2-}$-O$^{2-}$ $22764.3$ $0.149$ $25.0$
O$^{2-}$-O$^{-}$ $22764.3$ $0.149$ $25.0$
O$^{2-}$-Cu$^{2+}$ $3799.3$ $0.243$ $0$
O$^{2-}$-Ba$^{2+}$ $3115.5$ $0.33583$ $0$
O$^{2-}$-Y$^{3+}$ $20717.5$ $0.24203$ $0$
O$^{-}$-O$^{-}$ $22764.3$ $0.149$ $25.0$
O$^{-}$-Cu$^{2+}$ $1861.6$ $0.25263$ $0$
O$^{-}$-Ba$^{2+}$ $29906.5$ $0.27238$ $0$
Cu$^{2+}$-Ba$^{2+}$ $168128.6$ $0.22873$ $0$
Ba$^{2+}$-Ba$^{2+}$ $2663.7$ $0.25580$ $0$
: (A) The parameters used to model the short range potentials of the Buckingham form [@Zhang]. (B) The bond lengths found within the molecular dynamics ($d_{\textrm{MD}}$) compared to the experimental values ($d_{\textrm{exp}}$) [@Zhang]. The ions are labelled according to Fig. \[crystal\]. \[TabBuckParam\]
[**B**]{} $d_{\textrm{MD}}$ (Å) $d_{\textrm{exp}}$ (Å)
------------ ----------------------- ------------------------ --
Cu(1)-O(1) $1.955$ $1.94$
Cu(1)-O(4) $1.783$ $1.847$
Cu(2)-O(2) $1.951$ $1.925$
Cu(2)-O(3) $1.98$ $1.957$
Cu(2)-O(4) $2.367$ $2.299$
Ba-O(1) $3.058$ $2.964$
Ba-O(2) $2.837$ $2.944$
Ba-O(3) $2.797$ $2.883$
Ba-O(4) $2.759$ $2.740$
Y-O(2) $2.372$ $2.407$
Y-O(3) $2.351$ $2.381$
: (A) The parameters used to model the short range potentials of the Buckingham form [@Zhang]. (B) The bond lengths found within the molecular dynamics ($d_{\textrm{MD}}$) compared to the experimental values ($d_{\textrm{exp}}$) [@Zhang]. The ions are labelled according to Fig. \[crystal\]. \[TabBuckParam\]
![The crystal structure found within the molecular dynamics procedure calculated for a single unit cell with fixed lattice parameters $a=3.82$ Å, $b=3.89$ Å, and $c=11.68$ Å.[]{data-label="crystal"}](Fig_cry.eps){width="40.00000%"}
To construct a GB with well defined misalignment angle we have to fix the atomic positions on both sides of the interface. In addition we apply periodic boundary conditions in the molecular dynamics procedure in the directions parallel to the GB plane. In the direction perpendicular to the GB only atoms with a distance from the GB plane smaller than 6 lattice constants are reconstructed.
Since we are only interested in deriving stable equilibrium positions for all atoms in the GB region, and do not try to simulate the temperature-dependent dynamics of the system, we completely remove the kinetic energy at the end of every iteration step. With this method the ions relax to their equilibrium positions along paths given by classical forces. With this procedure we are likely to end up with an ionic distribution that corresponds to a local minimum of the potential energy rather than the true ground state of the system. By randomly changing the initial setup of the GB before starting the reconstruction, we are thus able to find different GB structures corresponding to the same misalignment angle $\alpha$. This reflects the experimental situation, where one also observes different patterns of ionic arrangements along a macroscopic grain boundary with fixed misalignment angle. For all GB angles under consideration (except the (710) GB) we have reconstructed and analyzed two differently reconstructed grain boundary structures. An example of a reconstructed (410) GB is shown in Fig. \[GBsetup\] c). Finally, we can identify the characteristic structural units as classified in Ref. . Here we distinguish between structural units of the bulk material and structural units that are formed due to the lattice mismatch at the GB. The first group consists of (deformed) rectangular and triangular units, which can be seen as fragments of a full rectangular unit, while the latter group consists of large pentagonal units bordered by either Cu or Y ions, which introduce strong deformations and can be identified as the centres of classical dislocation cores (see Fig. \[GBsetup\] d).
The effective tight-binding model Hamiltonian
=============================================
Slater-Koster method for the calculation of hopping matrix elements
-------------------------------------------------------------------
In the following we will derive a tight-binding Hamiltonian for the CuO$_2$ planes with charge carriers located in the $d_{x^{2}-y^{2}}$ orbitals of the copper atoms. The kinetic energy associated with the hopping of charge carriers from one Cu site to one of its neighboring sites can be calculated from the orbital overlaps of two Cu-$d$ orbitals. Besides the direct overlap between two Cu-$d$ orbitals, which is small due to the small spatial extension of the Cu-$3d$ orbitals, we will also include the indirect hopping “bridged” by an O-$p$ orbital, which can be calculated in second-order perturbation theory. Here we have to add up all possible second-order processes involving the O-$p_x$ and O-$p_y$ orbitals of all intermediate oxygen atoms. In the vicinity of the grain boundary, the directional dependences of the orbital overlaps become important, and we calculate the interatomic hopping elements from the Slater and Koster table of displacement-dependent interatomic matrix elements [@SlaterKoster; @Harrison], which depend on the direction cosines $l$, $m$ and $n$ of the vector pointing from one atom to the next, $\vec{r}=(l\vec{e}_{x}+m\vec{e}_{y}+n\vec{e}_{z})d$. In addition, we calculate the effective potentials $V_{pd\sigma}$ and $V_{pd\pi}$ for the $\sigma$- or $\pi$-bonds between the O-$p$ and the Cu-$d$ orbitals, as well as the potentials $V_{dd\sigma}$ and $V_{dd\delta}$ for the $\sigma$- or $\delta$-bonds between two Cu-$d$ orbitals, using the effective parameters provided in Ref. . In the following we outline the calculational scheme used within this work by deriving the effective hopping parameter between two neighboring Cu ions in a bulk configuration of a flat CuO$_2$ plane with an average Cu-O distance of $$d_{\textrm{Cu-O}} = 1.95 \textrm{\AA} = 3.685 a_0.$$ As a first step we calculate the hopping between the Cu-$d_{x^2-y^2}$ orbital and the O-$p_x$ orbital that are connected by the vector $\vec{r}=d\vec{e}_x$, and therefore $l=1$ and $m=n=0$.
The angular dependence is introduced as $$E_{x,x^2-y^2}=\frac{1}{2} 3^{1/2} l(l^2-m^2) V_{pd\sigma}
+l(1-l^2+m^2) V_{pd\pi} = \frac{1}{2} 3^{1/2} V_{pd\sigma}.$$ In the next step we have to calculate the distance-dependent potential of the $\sigma$-bond $$V_{pd\sigma}=\eta_{pd\sigma} \frac{\hbar^2 r_d^{3/2}}{md^{7/2}}
= - 2.95 \cdot 7.62 \textrm{ eV \AA$^2$}
\frac{(0.67\textrm{\AA})^{3/2}}{(1.95\textrm{\AA})^{7/2}} = -1.19061\textrm{ eV},$$ where we have used $\frac{\hbar^2}{m}=7.62$ eV Å$^2$ and the characteristic length $r_{d}=0.67$ Å of the Cu-$d$ orbital has been taken from Ref. . Now we can calculate the interatomic matrix element as $$E_{x,x^2-y^2}=\frac{1}{2} 3^{1/2} V_{pd\sigma} = -1.0311\textrm{ eV}.$$ The corresponding hopping parameter between the Cu-$d$ and the O-$p_y$ orbital vanishes due to a basic symmetry argument. The directional part of the direct overlap between two Cu-$d$ orbitals on nearest neighbor Cu sites ($d_{\textrm{Cu-Cu}}=3.9$ Å) can be calculated as $$E_{x^2-y^2,x^2-y^2}=\frac{3}{4} V_{dd\sigma} + \frac{1}{4} V_{dd\delta} = \frac{3}{4} V_{dd\sigma}.$$ Again we need in addition the distance-dependent potential of the $\sigma$-bond of two Cu-$d$ orbitals: $$V_{dd\sigma} = \eta_{dd\sigma} \frac{\hbar^2 r_d^3}{md^5}
= -16.2 \cdot 7.62 \textrm{ eV} \textrm{\AA}^2 \cdot
\frac{(0.67 \textrm{\AA})^3}{(3.9 \textrm{\AA})^5}=-0.04115\textrm{ eV},$$ and the total energy associated with the direct overlap of two Cu-$d$ orbitals can finally be calculated as $$E_{x^2-y^2,x^2-y^2}=\frac{3}{4} V_{dd\sigma}=-0.03086\textrm{ eV}.$$ Now we can compare this energy to the kinetic energy describing the superexchange between two Cu sites via the O-$p$ orbitals $$E_{\textrm{Cu-Cu}}= \frac{E_{x,x^2-y^2} \cdot E_{x,x^2-y^2}}{\epsilon_d-\epsilon_p}
+ \frac{E_{y,x^2-y^2}\cdot E_{y,x^2-y^2}}{\epsilon_d-\epsilon_p}
= \frac{1.06317}{-3.5} \textrm{ eV} + 0\textrm{ eV}=-0.304\textrm{ eV}.$$ Due to a strong renormalization of the site energies in the cuprates, the effective charge transfer gap $\Delta=\epsilon_p-\epsilon_d$ is larger than one would expect from the difference of the bare site energies of the Cu-$3d$ and the O-$2p$ orbitals [@Alloul]. Here we have chosen the charge transfer gap to be $\Delta=3.5$ eV, a value consistent with the range found in numerical studies. The full matrix element between two Cu-$d$ orbitals is then the sum of the direct overlap and the second-order term involving the intermediate O-$p$ orbitals.
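The numerical steps above are simple enough to verify directly; all inputs are the values quoted in the text.

```python
# Reproducing the bulk hopping estimate step by step.
HBAR2_OVER_M = 7.62    # hbar^2/m in eV * Angstrom^2
r_d    = 0.67          # characteristic length of the Cu-d orbital, Angstrom
d_CuO  = 1.95          # Cu-O distance, Angstrom
d_CuCu = 3.9           # Cu-Cu distance, Angstrom

V_pds = -2.95 * HBAR2_OVER_M * r_d**1.5 / d_CuO**3.5   # sigma p-d potential
E_x   = 0.5 * 3**0.5 * V_pds                           # Cu-d / O-px element
V_dds = -16.2 * HBAR2_OVER_M * r_d**3 / d_CuCu**5      # sigma d-d potential
E_dd  = 0.75 * V_dds                                   # direct Cu-Cu overlap
E_superex = E_x**2 / (-3.5)                            # 2nd order via O-p, eV

print(V_pds, E_x, E_dd, E_superex)
```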
![The averaged hopping as a function of the distance to the GB plane for different GBs.[]{data-label="hopping"}](Hopping_Profile.eps){width="\textwidth"}
![The suppression of the hopping as a function of the misalignment angle $\alpha$ (red points) and a linear fit (blue dashed line). The hopping suppression is defined as the integral over a Gaussian fit of the hopping profiles shown in Fig. \[hopping\].[]{data-label="hopping1"}](Hopping_suppression_angle.eps){width="50.00000%"}
In Fig. \[hopping\] we show the averaged hopping as a function of the distance to the grain boundary plane for different GBs. If we fit the suppressed hopping values in the vicinity of the grain boundary by a Gaussian form and integrate over the “effective barrier” derived in this way, we find only a linear variation with misalignment angle (see Fig. \[hopping1\]).
The bond valence analysis
-------------------------
The structural imperfection at the grain boundary will necessarily lead to charge inhomogeneities that contribute, in a similar way as the reduced hopping, to the effective barrier that blocks the superconducting current over the GB. We can include these charge inhomogeneities in our calculations by “translating” them into on-site potentials on the Cu sites. It is evident that we also have to include the charges on the O sites in our considerations, although the O sites themselves have already been integrated out in the effective one-band tight-binding model. We will start our considerations by assigning every atom of the perfect crystal a formal integer ionic charge: Y$^{3+}$, Ba$^{2+}$, O$^{2-}$. The requirement of charge neutrality leaves us with 7 positive charges to be distributed over the 3 Cu atoms: thus we will have two Cu$^{2+}$ and one Cu$^{3+}$ ion per unit cell. For a crystal with bonds that are neither strictly covalent nor strictly ionic it is convenient to introduce a fractional valence for each ion, determined with respect to its ionic environment in the unit cell of the crystal. Here we calculate the bond valence of a cation by $$V_i = \sum_j \exp\left(\frac{r_0 - r_{ij}}{B} \right),$$ where the sum is over all neighboring anions, in our case the neighboring negatively charged O ions, and $r_{ij}$ is the cation-anion distance. Here we take $B=0.37$ Å following Ref. , while $r_0$ is different for all cation-anion pairs and can also depend on the formal integer oxidation state of the ion. The basic idea is that the bond valence sum should agree with the assumed integer oxidation state of the ion; strong deviations indicate strain in the crystal or even an incorrect structure. This is a very clear concept for the case of the Y$^{3+}$ and Ba$^{2+}$ ions. For the copper atoms, where we can have more than one formal integer oxidation state, the situation is slightly more complicated. Following Refs. 
we define $\xi_i^{(3)}$ as the fraction of Cu ions at site $i$ that are in a Cu$^{3+}$-type oxidation state while the remaining $(1-\xi_i^{(3)})$ Cu ions are in a Cu$^{2+}$-type oxidation state. With this the average oxidation state of the Cu ion at site $i$ is $$\bar{V}_i = 3\xi_i^{(3)} + 2 (1-\xi_i^{(3)})=2 + \xi_i^{(3)}.$$ On the other hand, this should be equal to the sum of a fraction of bond valences $V_i^{(3+)}$ of ions characterized by $r_0(\textrm{Cu}^{3+})=1.73$ Å and a fraction of bond valences $V_i^{(2+)}$ of ions characterized by $r_0(\textrm{Cu}^{2+})=1.679$ Å: $$\bar{V}_i = \xi_i^{(3)}V_i^{(3+)} + (1-\xi_i^{(3)})V_i^{(2+)}.$$ Solving this set of equations for $\xi_i^{(3)}$ allows us to determine the fraction of Cu ions with the valence $3+$: $$\xi_i^{(3)}=\frac{V_i^{(2+)} - 2}{V_i^{(2+)}-V_i^{(3+)} + 1}.$$ In a similar way we can proceed assuming that we have a Cu ion that could be in a 1+ or a 2+ oxidation state. Here we will use $r_0(\textrm{Cu}^{2+})=1.679$ Å and $r_0(\textrm{Cu}^{1+})=1.6$ Å and we find $$\xi_i^{(1)}=\frac{V_i^{(2+)}-2}{V_i^{(2+)}-V_i^{(1+)}-1}.$$ In the case that Cu$^{2+}$ and Cu$^{3+}$ are the most probable oxidation states we will find that $\xi_i^{(3)}$ is positive and $\xi_i^{(1)}$ is negative while in the case that Cu$^{2+}$ and Cu$^{+}$ are the most probable oxidation states we will find that $\xi_i^{(1)}$ is positive and $\xi_i^{(3)}$ is negative. For the calculation of the final oxidation state of a particular Cu atom one has to use the correct, positive $\xi_i$.
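A minimal worked example of this bookkeeping, using the $B$ and $r_0$ values from the text; the neighbor distances below are illustrative, not taken from a reconstructed GB.

```python
import math

B = 0.37  # Angstrom, following the text

def bond_valence(r_list, r0):
    """Bond valence sum V_i = sum_j exp((r0 - r_ij)/B)."""
    return sum(math.exp((r0 - r) / B) for r in r_list)

# A Cu with four planar O neighbours at 1.95 A and an apical O at 2.30 A
# (illustrative geometry):
neighbours = [1.95, 1.95, 1.95, 1.95, 2.30]
V2 = bond_valence(neighbours, r0=1.679)   # Cu(2+) parametrisation
V3 = bond_valence(neighbours, r0=1.73)    # Cu(3+) parametrisation

xi3 = (V2 - 2.0) / (V2 - V3 + 1.0)        # fraction of Cu(3+)
print(V2, V3, 2.0 + xi3)                  # average oxidation state of the Cu
```

For this geometry the Cu comes out slightly above $2+$, i.e. a small admixture of the Cu$^{3+}$ state, exactly the kind of fractional valence the scheme is designed to capture.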
In a last step we determine the fractional valence of the O ions by summing up all charge contributions from the neighboring cations. Here we assume that the Ba and the Y ions are in their formal integer oxidations state whereas we assign every Cu ion a fractional oxidation state determined by $\xi_i^{(1/3)}$. This method ensures that we end up with a charge neutral crystal.
In the bulk system we obtain fractional valences close to the formal integer oxidation states. In the vicinity of the GB, however, we find deviations from the bulk values due to missing or displaced neighboring ions.
The definition of the superconducting pairing interaction
---------------------------------------------------------
To model the known momentum space structure of the superconducting order parameter in the weak coupling description of the Bogoliubov-de Gennes theory, one usually defines an interaction $V_{ij}$ on the bonds connecting two nearest neighbor Cu sites. Unfortunately, there is no obvious way to define an analogous pairing interaction in a strongly disordered region of the crystal, as e.g. in the vicinity of the GB, since the exact microscopic origin of this interaction is not known. Assuming that a missing or a broken Cu-O bond destroys the underlying pairing mechanism, we develop a method of tying the superconducting pairing interaction between two Cu sites to the hopping matrix elements connecting the two sites as well as to the charge imbalance between them. Hence we define the pairing interaction on a given bond as the product of a dimensionless constant $V_0$, which is adjusted to reproduce the correct modulus of the gap in the bulk, and the hopping parameter $t_{ij}$. In addition we impose an exponential suppression of the pairing strength with increasing charge imbalance: $$V_{ij} = V_0 t_{ij} e^{-|Q_i-Q_j|/e}.$$ To avoid long range contributions to the pairing we use a threshold of $0.2$ eV for $|t_{ij}|$, thus restricting the pairing interaction to the bonds between nearest neighbor Cu sites in the bulk. We emphasize that although we tried to model the pairing interaction in a realistic way, the precise way in which the pairing interaction is defined in the disordered region does not qualitatively change the results.
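As a minimal numerical sketch of this definition (Python; the values of $t_{ij}$, the site charges, and $V_0$ below are illustrative assumptions, while the 0.2 eV threshold is the one stated above):

```python
import math

T_MIN = 0.2  # eV; hopping threshold restricting pairing to nearest neighbors

def pairing_interaction(t_ij, Q_i, Q_j, V0):
    """V_ij = V0 * t_ij * exp(-|Q_i - Q_j| / e); charges in units of e."""
    if abs(t_ij) < T_MIN:
        return 0.0
    return V0 * t_ij * math.exp(-abs(Q_i - Q_j))

# Bulk bond: equal site charges, no suppression (numbers illustrative).
V_bulk = pairing_interaction(t_ij=0.4, Q_i=2.0, Q_j=2.0, V0=1.0)
# Same bond near the GB with a charge imbalance of 0.5 e: the pairing
# is exponentially suppressed.
V_gb = pairing_interaction(t_ij=0.4, Q_i=2.0, Q_j=2.5, V0=1.0)
# A longer-range hopping below the threshold contributes no pairing.
V_far = pairing_interaction(t_ij=0.05, Q_i=2.0, Q_j=2.0, V0=1.0)
```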
The calculation of the critical current
=======================================
![The modulus (a,b) and the phase (c,d) of the self-consistently calculated order parameter as a function of the distance to the GB plane for a 410 and for a 710 GB.[]{data-label="orderparam"}](Gap_Functions.eps){width="70.00000%"}
The superconducting current over a weak link, as well as over a metallic or an insulating barrier, is accompanied by a drastic change of the phase of the superconducting order parameter. For a true tunnel junction, in which the two superconducting regions are completely decoupled, the current-phase relation is known to be sinusoidal, and the maximum current is found for a phase jump of $\pi/2$. For a weak link, as provided e.g. by a geometrical constriction or a disordered region like a grain boundary, the phase of the superconducting order parameter has to change continuously between its two bulk values in the leads on both sides of the weak link. Here one expects deviations from the sinusoidal current-phase relation and a smooth transition of the phase over a length scale given by the dimensions of the weak link. Besides the change in the phase of the order parameter in the presence of an applied current, a weak link is also characterized by a suppression of the modulus of the order parameter, either due to geometric restrictions or due to strong disorder. An additional suppression of the order parameter on the length scale of the coherence length can occur due to the formation of Andreev bound states present at suitably oriented interfaces of $d$-wave superconductors. Here we have calculated the order parameter self-consistently from Eq. 8 in the main text and found a suppression of its modulus in the vicinity of the grain boundary whose width and depth depend only weakly on the misalignment angle (compare Fig. \[orderparam\] a,b).
![(a) The current profile as a function of the distance $x$ to the GB plane for a 710 GB. The phase of the order parameter has been fixed at different distances $d/2$ from the GB. (b) The dependence of the critical current as a function of the distance $d$ between the two points at which the phase of the order parameter has been fixed.[]{data-label="current1"}](CC_SizeDep.eps){width="90.00000%"}
For a numerical determination of the critical current — the maximum current that can be applied to a system without destroying its superconducting properties — it is convenient to enforce a certain phase difference of the order parameter between two points of the system separated by a distance $d$, e.g. in our example on both sides of the grain boundary region. Here we calculate the superconducting current over the grain boundary modelled by the Hamiltonian in Eq. 5 of the main text using Eqs. 9 and 10 therein and fixing the phase of the order parameter in the leads in every iteration step. The critical current for our system can then be found as the current maximum under a variation of the phase difference. If a strong perturbation limits the superconducting current — in the so-called tunneling limit — the main drop of the superconducting phase will appear in the small region of the perturbation and the change of the phase in the superconducting leads is negligible (see Fig. \[orderparam\] c). However, if the perturbation is weak, the change of the phase in the superconducting leads becomes more important (see Fig. \[orderparam\] d) and the current will depend on the distance $d$ between the two points, at which the phase of the order parameter is fixed. Since we are interested in the calculation of the critical current, we have to decrease the distance, thus increasing the phase gradient, until we reach the maximum current. To calculate the current over the low angle grain boundaries, we have determined the maximum current by extrapolating it to the value expected for $d=0$ (see Fig. \[current1\]).
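The maximization over the enforced phase difference can be illustrated with toy current-phase relations (Python sketch; the analytic CPRs below merely stand in for the self-consistent Bogoliubov-de Gennes result, so the numbers are purely illustrative):

```python
import math

def critical_current(cpr, n_phi=4001):
    """Critical current as the maximum of a current-phase relation,
    scanned over enforced phase differences in [0, 2*pi)."""
    return max(cpr(2.0 * math.pi * k / n_phi) for k in range(n_phi))

# Tunneling limit: sinusoidal CPR with maximum at a phase jump of pi/2.
Ic_tunnel = critical_current(math.sin)

# Weak link: an illustrative skewed CPR with a second harmonic; the
# maximum exceeds the first-harmonic amplitude and shifts away from pi/2.
Ic_weak = critical_current(lambda phi: math.sin(phi) - 0.3 * math.sin(2.0 * phi))
```

In the actual calculation the CPR is not analytic: each phase point requires a self-consistent iteration with the lead phases held fixed, and the $d\to 0$ extrapolation described above.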
---
author:
- 'Sergiu I. Vacaru'
title: 'Wormholes and Off–Diagonal Solutions in f(R,T), Einstein and Finsler Gravity Theories'
---
Nonholonomic Deformations in Modified Gravity Theories {#s2}
======================================================
We study gravity theories formulated on a spacetime manifold $\mathbf{V}$, $\dim \mathbf{V}= n \geq 4$ (for Finsler models, on the tangent bundle $T\mathbf{V}$), endowed with a metric structure, $\mathbf{g}$, and a compatible linear connection structure, $\mathbf{D}$, with $\mathbf{Dg}=0$; see details in Refs. [@vadm1; @vfinsl1; @vfinsl2; @vfinsl3]. Our goal is to prove that there exist local frame and canonical connection structures for which the gravitational field equations in $f(R,T)$–modified gravity, MG (see the reviews [@odints1; @odints2; @odints3; @stv]), can be integrated in generic off–diagonal form, with metrics depending on all spacetime coordinates. We provide explicit examples in which generalized solutions in MG can be equivalently modelled as effective Einstein spaces, and determine deformations of wormhole spacetimes in general relativity (GR).
[**1.1 Geometric Preliminaries**]{}: We consider a conventional horizontal (h) and vertical (v) splitting of the tangent space $T\mathbf{V}$, defined by a non–integrable (equivalently, nonholonomic, or anholonomic) distribution $\mathbf{N}:\ T\mathbf{V}=h\mathbf{V}\oplus v\mathbf{V}$ (for Finsler theories, $\mathbf{N}:\ TT\mathbf{V}=hT\mathbf{V}\oplus vT\mathbf{V}$). Locally, such an h–v–splitting is determined by a set of coefficients $\mathbf{N}=\{N_{i}^{a}(x,y)\}$ and coordinates parameterized as $u=(x,y)$, $u^{\mu }=(x^{i},y^{a}),$ where the h–indices (v–indices) run over $i,j,...=1,2,...,n$ ($a,b,...=n+1,...,n+n$). There are N–adapted frames $\mathbf{e}_{\nu }=(\mathbf{e}_{i}, e_{a})$, $\mathbf{e}^{\mu}=(e^{i},\mathbf{e}^{a}),$ $$\mathbf{e}_{i} = \partial /\partial x^{i}-\ N_{i}^{a}(u)\partial /\partial
y^{a},\ e_{a}=\partial _{a}=\partial /\partial y^{a},\ %\label{nader} \\
e^{i} = dx^{i},\ \mathbf{e}^{a}=dy^{a}+\ N_{i}^{a}(u)dx^{i}, \label{nadif}$$which satisfy the conditions $\lbrack \mathbf{e}_{\alpha },\mathbf{e}_{\beta }]=\mathbf{e}_{\alpha }%
\mathbf{e}_{\beta }-\mathbf{e}_{\beta }\mathbf{e}_{\alpha }=W_{\alpha \beta
}^{\gamma }\mathbf{e}_{\gamma }$, with anholonomy coefficients $W_{ia}^{b}=\partial _{a}N_{i}^{b},W_{ji}^{a}=\Omega _{ij}^{a}=\mathbf{e}%
_{j}\left( N_{i}^{a}\right) -\mathbf{e}_{i}(N_{j}^{a})$.
On a nonholonomic manifold $(\mathbf{V,N})$, and/or nonholonomic bundle $(T\mathbf{V,N})$, we can represent any data $(\mathbf{g,D})$ in N–adapted form (preserving under parallel transport a chosen h-v–splitting) parameterized as: 1) a *distinguished metric, d–metric,* $$\mathbf{g}=g_{\alpha }(u)\mathbf{e}^{\alpha }\otimes \mathbf{e}^{\beta
}=g_{i}(x)dx^{i}\otimes dx^{i}+g_{a}(x,y)\mathbf{e}^{a}\otimes \mathbf{e}%
^{a}. \label{dm1}$$ and 2) a *distinguished connection, d–connection,* $\mathbf{D}=(hD,vD)$.
Any d–connection is characterized by d–torsion, nonmetricity, and d–curvature structures: $\mathcal{T}(\mathbf{X,Y}) :=\mathbf{D}_{\mathbf{X}}\mathbf{Y}-\mathbf{D}_{%
\mathbf{Y}}\mathbf{X}-[\mathbf{X,Y}],$ $\mathcal{Q}(\mathbf{X}):=\mathbf{D}_{%
\mathbf{X}}\mathbf{g,}$ $\mathcal{R}(\mathbf{X,Y}) :=\mathbf{D}_{\mathbf{X}}\mathbf{D}_{\mathbf{Y}}-%
\mathbf{D}_{\mathbf{Y}}\mathbf{D}_{\mathbf{X}}-\mathbf{D}_{\mathbf{[X,Y]}}$, where $\mathbf{X,Y}\in T\mathbf{V}$ (or $\in TT\mathbf{V}$, in Finsler like theories).
There are two “preferred” linear connections which can be defined for the same data $(\mathbf{g,N})$: 1) the *canonical d–connection* $%
\widehat{\mathbf{D}}$ uniquely determined by the conditions that it is metric compatible, $\widehat{\mathbf{D}}\mathbf{g=0,}$ and with zero h–torsion, $h\widehat{%
\mathcal{T}}=\{\widehat{T}_{\ jk}^{i}\}=0,$ and zero v–torsion, $v\widehat{\mathcal{T}}=\{\widehat{T}_{\ bc}^{a}\}=0$; 2) the Levi–Civita (LC) connection, $\nabla$, when $\mathcal{T}=0 $ and $\mathcal{Q}=0$, if $\mathbf{D}\rightarrow \nabla$. Such linear connections are related via a canonical distortion relation $\widehat{\mathbf{D}}=\nabla +\widehat{\mathbf{Z}}$. We can work equivalently on $\mathbf{V}$ and $T\mathbf{V}$ using both linear connections. For any data $(\mathbf{g,N,}\widehat{\mathbf{D}})$, we can define and compute in standard form, respectively, the Riemann, $\widehat{\mathcal{R}}=\mathbf{\{}%
\widehat{\mathbf{R}}_{\ \beta \gamma \delta }^{\alpha }\},$ and the Ricci, $\widehat{\mathcal{R}}ic=\{\widehat{\mathbf{R}}_{\alpha \beta }:=\widehat{\mathbf{R}}_{\ \alpha \beta \gamma }^{\gamma }\}$ d–tensors; for $\widehat{R}:=\mathbf{g}^{\alpha \beta }\widehat{\mathbf{R}}_{\alpha \beta
}$, we can introduce $\widehat{\mathbf{E}}_{\alpha \beta }:= \widehat{\mathbf{R}}_{\alpha
\beta }-\frac{1}{2}\mathbf{g}_{\alpha \beta }\ \widehat{R}$.
[**1.2 Nonholonomically Modified Gravity**]{}: We study theories with action$$S=\frac{1}{16\pi }\int \delta u^{n+n}\sqrt{|\mathbf{g}_{\alpha \beta }|}[f(%
\widehat{R},T)+\ ^{m}L], \label{act}$$ generalizing the so–called modified $f(R,T)$ gravity [@odints1; @odints2; @odints3] to the case of d–connection $\widehat{\mathbf{D}}$, which can be considered for (pseudo) Riemannian spaces (as an “auxiliary” one) [@vadm1], for Hořava–Lifshits type modifications [@vfinsl2; @vhlquant] and on (non) commutative Finsler spaces [@vfinsl1; @vfinsl3; @stv]. In (\[act\]), $T$ is the trace of the stress–energy momentum tensor constructed for the matter fields Lagrangian $\ ^{m}L$. It is possible to elaborate a N–adapted variational formalism for a large class of models with perfect fluid matter with $\ ^{m}L=-p$, for pressure $p$, and assuming that $f(\widehat{R},T)=\ ^{1}f(\widehat{R})+\ ^{2}f(T)$, where $\ ^{1}F(\widehat{R}):=\partial \ ^{1}f(\widehat{R})/\partial
\widehat{R}$ and $\ ^{2}F(T):=\partial \ ^{2}f(T)/\partial T$. We obtain a model of MG with effective Einstein equations, $\widehat{\mathbf{E}}_{\alpha \beta }={\Upsilon }_{\beta \delta }$, for source ${\Upsilon }_{\beta \delta }=\ ^{ef}\eta \ G\ \mathbf{T}_{\beta \delta
}+\ ^{ef}\mathbf{T}_{\beta \delta }$, where $\ ^{ef}\eta =[1+\ ^{2}F/8\pi ]/\
^{1}F $ is the effective polarization of the gravitational constant $G$, $\mathbf{T}_{\beta \delta
}$ is the usual energy–momentum tensor for matter fields and the $f$–modification of the energy–momentum tensor results in $\ ^{ef}\mathbf{T}_{\beta \delta }=[\frac{1}{2}(\ ^{1}f-\ ^{1}F\ \widehat{R}%
+2p\ ^{2}F+\ ^{2}f)\mathbf{g}_{\beta \delta }-(\mathbf{g}_{\beta \delta }\
\widehat{\mathbf{D}}_{\alpha }\widehat{\mathbf{D}}^{\alpha }-\widehat{%
\mathbf{D}}_{\beta }\widehat{\mathbf{D}}_{\delta })\ ^{1}F]/\ ^{1}F$.
The effective Einstein equations decouple for parameterizations of metrics (\[dm1\]) when the coefficients $N_{i}^{a}(u)$ in (\[nadif\]) are prescribed in such a way that the corresponding nonholonomic constraints result in $\widehat{\mathbf{D}}$ with $\widehat{R}=const$ and ${\Upsilon }_{~\delta
}^{\beta }=(\Lambda +\lambda )\mathbf{\delta }_{~\delta }^{\beta }$ for an effective cosmological constant $\Lambda $ for modified gravity and $\lambda$ for a possible cosmological constant in GR. This results in $\widehat{\mathbf{D}}_{\delta }\ ^{1}F_{\mid \Upsilon =\Lambda +\lambda}=0$, see details in [@vadm1; @vfinsl1; @vfinsl2].
Ellipsoid, Solitonic & Toroid Deformations of Wormholes
=======================================================
The general stationary ansatz for off–diagonal solutions is [$$\begin{aligned}
\mathbf{ds}^{2} &=&e^{\tilde{\psi}(\widetilde{\xi },\theta )}(d\widetilde{%
\xi }^{2}+\ d\vartheta ^{2})+ \frac{\lbrack \partial _{\varphi }\varpi (\widetilde{\xi },\vartheta
,\varphi )]^{2}}{ \Lambda + \lambda }
\left( 1+\varepsilon \frac{\partial _{\varphi }[\chi _{4}(\widetilde{\xi }%
,\vartheta ,\varphi )\varpi (\widetilde{\xi },\vartheta ,\varphi )]}{%
\partial _{\varphi }\varpi (\widetilde{\xi },\vartheta ,\varphi )}\right) \nonumber \\
&& r^{2}(\widetilde{\xi })\sin ^{2}\theta (\widetilde{\xi },\vartheta )(\delta
\varphi )^{2} -\frac{e^{2\varpi (\widetilde{\xi },\vartheta ,\varphi )}}{| \Lambda + \lambda | }[1+\varepsilon \chi _{4}(%
\widetilde{\xi },\vartheta ,\varphi )]e^{2B(\widetilde{\xi })}(\delta t)^{2},
\label{offdwans1} \\
\delta \varphi &=&d\varphi +\partial _{\widetilde{\xi }}[\ ^{\eta }%
\widetilde{A}(\widetilde{\xi },\vartheta ,\varphi )+\varepsilon \overline{A}(%
\widetilde{\xi },\vartheta ,\varphi )]d\widetilde{\xi }+\partial _{\vartheta
}[\ ^{\eta }\widetilde{A}(\widetilde{\xi },\vartheta ,\varphi )+\varepsilon
\overline{A}(\widetilde{\xi },\vartheta ,\varphi )]d\vartheta ,\ \nonumber \\
\delta t &=&dt+\partial _{\widetilde{\xi }}[\ ^{\eta }n(\widetilde{\xi }%
,\vartheta )+\varepsilon \partial _{i}\overline{n}(\widetilde{\xi }%
,\vartheta )]~d\widetilde{\xi }+\partial _{\vartheta }[\ ^{\eta }n(%
\widetilde{\xi },\vartheta )+\varepsilon \partial _{i}\overline{n}(%
\widetilde{\xi },\vartheta )]~d\vartheta , \nonumber\end{aligned}$$]{} where $\ \widetilde{\xi }=\int dr/\sqrt{|1-b(r)/r|}$ for $b(r),B(\widetilde{\xi
}) $ determined by a wormhole metric in GR. For 4–d theories, we consider $x^i=(\widetilde{\xi },\theta)$ and $y^a=(\varphi,t)$.
[**2.1 Rotoid configurations**]{} with a small parameter (eccentricity) $\varepsilon $ are “extracted” from (\[offdwans1\]) if we take for the $f$–deformations $\chi _{4}=\overline{\chi }_{4}(r,\varphi ):=\frac{2\overline{M}(r)}{r}$$\left(
1-\frac{2\overline{M}(r)}{r}\right) ^{-1}\underline{\zeta }\sin (\omega
_{0}\varphi +\varphi _{0})$, for $r$ considered as a function $r(\ \widetilde{\xi })$. Let us define [$$\begin{aligned}
&& h_{3} =\widetilde{\eta }_{3}(\widetilde{\xi },\vartheta ,\varphi )[
1+\varepsilon \chi _{3}(\widetilde{\xi },\vartheta ,\varphi )]\ ^{0}%
\overline{h}_{3}(\widetilde{\xi },\vartheta ),\
h_{4} = \widetilde{\eta }_{4}(\widetilde{\xi },\vartheta ,\varphi )[
1+\varepsilon \overline{\chi }_{4}(\widetilde{\xi },\varphi )] \ ^{0}%
\overline{h}_{4}(\widetilde{\xi }), \mbox{for} \nonumber \\
&& \ ^{0}\overline{h}_{3}=r^{2}(\widetilde{\xi })\sin ^{2}\theta (%
\widetilde{\xi },\vartheta ), \ ^{0}\overline{h}_{4}=q(\widetilde{\xi }),\
\widetilde{\eta }_{3}=\frac{[\partial _{\varphi }\varpi (\widetilde{\xi }%
,\vartheta ,\varphi )]^{2}}{ \Lambda +\lambda },\widetilde{\eta }_{4}=\frac{e^{2\varpi (\widetilde{\xi }%
,\vartheta ,\varphi )}}{| (\Lambda + \lambda) |q(\widetilde{\xi })}e^{2B(\widetilde{\xi })}, \label{polfew}\end{aligned}$$]{} where $e^{2B(\widetilde{\xi })}\rightarrow q(\widetilde{\xi })$ if $\widetilde{\xi }\rightarrow \xi .$ For a prescribed $\widetilde{%
\varpi }(\widetilde{\xi },\vartheta ,\varphi ),$ we compute $\widetilde{\chi }_{3}=\chi _{3}(\widetilde{\xi },\vartheta ,\varphi ) =\partial _{\varphi }[\overline{\chi }_{4}\widetilde{\varpi }%
]/\partial _{\varphi }\widetilde{\varpi }$, $\overline{w}_{i}=\frac{\partial _{i}(\ r(\widetilde{\xi })\sin \theta (%
\widetilde{\xi },\vartheta )\sqrt{|q(\widetilde{\xi })|}\partial _{\varphi }[%
\overline{\chi }_{4}\varpi ])}{e^{\varpi }r(\widetilde{\xi })\sin \theta (%
\widetilde{\xi },\vartheta )\sqrt{|q(\widetilde{\xi })|}\partial _{\varphi
}\varpi }=\partial _{i}\overline{A}(\widetilde{\xi },\vartheta ,\varphi ),$ for $x^{i}=(\widetilde{\xi },\vartheta ).$ We model an ellipsoid configuration with $\ r_{+}(\widetilde{\xi }_{+})\simeq \frac{2\ \overline{M}(\ \widetilde{\xi }_{+})}{%
1+\varepsilon \underline{\zeta }\sin (\omega _{0}\varphi +\varphi _{0})}$, for constants $\underline{%
\zeta },\omega _{0}, \varphi _{0}$ and eccentricity $\varepsilon .$ We obtain[$$\begin{aligned}
\mathbf{ds}^{2} &=&e^{\tilde{\psi}(\widetilde{\xi },\theta )}(d\widetilde{%
\xi }^{2}+\ d\vartheta ^{2})+ \frac{[\partial _{\varphi }\widetilde{\varpi }]^{2}}{\Lambda +
\lambda }(1+\varepsilon \frac{\partial _{\varphi }[\overline{\chi }_{4}%
\widetilde{\varpi }]}{\partial _{\varphi }\widetilde{\varpi }})\ ^{0}%
\overline{h}_{3}[d\varphi +\partial _{\widetilde{\xi }}(\ ^{\eta }\widetilde{%
A}+\varepsilon \overline{A})d\widetilde{\xi }+\partial _{\vartheta }(\
^{\eta }\widetilde{A} + \nonumber \\ && \varepsilon \overline{A})d\vartheta ]^{2}
-\frac{e^{2\widetilde{\varpi }}}{|\Lambda + \lambda |}[1+\varepsilon
\overline{\chi }_{4}(\widetilde{\xi },\varphi )]e^{2B(\widetilde{\xi }%
)}[dt+\partial _{\widetilde{\xi }}(\ ^{\eta }n+\varepsilon \overline{n})~d%
\widetilde{\xi }+\partial _{\vartheta }(\ ^{\eta }n+\varepsilon \overline{n}%
)~d\vartheta ]^{2}. \label{ellipswh}\end{aligned}$$]{}If the generating functions $\widetilde{\varpi }$ and effective sources are chosen in such a way that the polarization functions (\[polfew\]) can be approximated as $\widetilde{\eta }_{a}\simeq 1$ and $%
^{\eta }\widetilde{A}$ and $\ ^{\eta }n$ are “almost constant”, the metric (\[ellipswh\]) mimics small rotoid wormhole like configurations.
[**2.2 Solitonic waves, wormholes and black ellipsoids:**]{} An interesting class of off–diagonal solutions depending on all spacetime coordinates can be constructed by designing a configuration in which a 1–solitonic wave preserves an ellipsoidal wormhole configuration. Such a spacetime metric can be written in the form [$$\mathbf{ds}^{2} = e^{\tilde{\psi}(x^{i})}(d\widetilde{\xi }^{2}+\
d\vartheta ^{2})+\omega ^{2} \left[ \widetilde{\eta }_{3}(1+\varepsilon \frac{\partial _{\varphi }[%
\overline{\chi }_{4}\widetilde{\varpi }]}{\partial _{\varphi }\widetilde{%
\varpi }})\ ^{0}\overline{h}_{3}(\delta \varphi )^{2}-\widetilde{\eta }%
_{4}[1+\varepsilon \overline{\chi }_{4}(\widetilde{\xi },\varphi )]\ ^{0}%
\overline{h}_{4}(\delta t)^{2}\right], \label{solitwh}$$ ]{} for $\delta \varphi = d\varphi +\partial _{i}(\ ^{\eta }\widetilde{A}%
+\varepsilon \overline{A})dx^{i},\delta t=dt+~_{1}n_{i}(\widetilde{\xi }%
,\vartheta )dx^{i}$, $x^{i}=(\widetilde{\xi },\vartheta )$ and $y^{a}=(\varphi ,t).$ The factor $\omega (\widetilde{\xi },t)=4\arctan e^{m\gamma (\widetilde{\xi }-vt)+m_{0}}$, where $\gamma ^{2}=(1-v^{2})^{-1}$ and constants $m,m_{0},v,$ defines a 1–soliton for the sine–Gordon equation, $\frac{\partial ^{2}\omega }{\partial t^{2}}-\frac{\partial ^{2}\omega }{%
\partial \widetilde{\xi }^{2}}+\sin \omega =0$.
For $\omega =1,$ the metrics (\[solitwh\]) are of type (\[ellipswh\]). A nontrivial $\omega $ depends on the timelike coordinate $t$ and has to be constrained to certain conditions which do not change the Ricci d–tensor; these can be written for $~_{1}n_{2}=0$ and $~_{1}n_{1}=const$ in the form $\frac{\partial \omega
}{\partial \widetilde{\xi }}-~_{1}n_{1}\frac{\partial \omega }{\partial t}=0$. A gravitational solitonic wave propagates self–consistently in a rotoid wormhole background with $_{1}n_{1}=v$, which solves both the sine–Gordon and constraint equations. Re–defining the system of coordinates with $x^{1}=\widetilde{\xi }$ and $x^{2}=\theta ,$ we can transform any $~_{1}n_{i}(\widetilde{\xi },\theta )$ into the necessary $_{1}n_{1}=v$ and $_{1}n_{2}=0.$
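The 1-soliton profile can be verified numerically against the quoted sine-Gordon equation. The sketch below (Python, central finite differences) assumes $m=1$, the standard kink normalization for this form of the equation, and illustrative values of $v$ and $m_0$:

```python
import math

M, V, M0 = 1.0, 0.5, 0.0  # m = 1 (standard kink); v and m0 illustrative
GAMMA = 1.0 / math.sqrt(1.0 - V**2)

def omega(xi, t):
    """1-soliton profile: omega = 4 arctan exp(m*gamma*(xi - v*t) + m0)."""
    return 4.0 * math.atan(math.exp(M * GAMMA * (xi - V * t) + M0))

def sg_residual(xi, t, h=1e-3):
    """Central-difference residual of omega_tt - omega_xixi + sin(omega);
    it vanishes to discretization accuracy along the traveling kink."""
    w_tt = (omega(xi, t + h) - 2.0 * omega(xi, t) + omega(xi, t - h)) / h**2
    w_xx = (omega(xi + h, t) - 2.0 * omega(xi, t) + omega(xi - h, t)) / h**2
    return w_tt - w_xx + math.sin(omega(xi, t))

residual = sg_residual(0.3, 0.7)  # small, of order the truncation error
```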
[**2.3 Ringed wormholes:**]{} We can generate a rotoid wormhole plus a torus, [ $$\mathbf{ds}^{2} =e^{\tilde{\psi}(x^{i})}(d\widetilde{\xi }^{2}+\
d\vartheta ^{2})+\widetilde{\eta }_{3}(1+\varepsilon \frac{\partial
_{\varphi }[\overline{\chi }_{4}\widetilde{\varpi }]}{\partial _{\varphi }%
\widetilde{\varpi }})\ ^{0}\overline{h}_{3}(\delta \varphi )^{2} -f \widetilde{\eta }%
_{4}[1+\varepsilon \overline{\chi }_{4}(\widetilde{\xi },\varphi )]\ ^{0}%
\overline{h}_{4}(\delta t)^{2},$$ ]{} for $\delta \varphi = d\varphi +\partial _{i}(\ ^{\eta }\widetilde{A}%
+\varepsilon \overline{A})dx^{i},\delta t=dt+~_{1}n_{i}(\widetilde{\xi }%
,\vartheta )dx^{i}$, when $x^{i}=(\widetilde{\xi },\vartheta )$ and $y^{a}=(\varphi ,t),$ where the function $f(\widetilde{\xi },\vartheta ,\varphi )$ can be rewritten equivalently in Cartesian coordinates, $f(\widetilde{x},\widetilde{y},\widetilde{z})=\left( R_{0}-\sqrt{\widetilde{x}%
^{2}+\widetilde{y}^{2}}\right) ^{2}+\widetilde{z}^{2}-a_{0}$, for $a_{0}<a,R_{0}<r_{0}$. We get a ring around the wormhole throat (we argue that we obtain well–defined wormholes in the limit $\varepsilon
\rightarrow 0$ and for corresponding approximations $\widetilde{\eta }%
_{a}\simeq 1$ and $^{\eta }\widetilde{A}$ and $\ ^{\eta }n$ to be almost constant). The ring configuration is given by the condition $\ f=0.$ In the above formulas, $R_{0}$ is the distance from the center of the tube to the center of the torus/ring and $a_{0}$ sets the radius of the tube. If such wormhole objects exist, the variants ringed by a torus may be stable for certain nonholonomic geometry and exotic matter configurations.
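The ring condition $f=0$ is easy to check directly. A minimal sketch (Python), with illustrative values of $R_0$ and $a_0$ and the function $f$ taken exactly as written above:

```python
import math

R0, A0 = 2.0, 0.25  # illustrative torus parameters

def f(x, y, z):
    """f = (R0 - sqrt(x^2 + y^2))^2 + z^2 - a0; the ring is the surface f = 0."""
    return (R0 - math.hypot(x, y))**2 + z**2 - A0

# In the z = 0 plane the ring condition is met at cylindrical radii
# R0 +/- sqrt(a0) from the symmetry axis:
f_outer = f(R0 + math.sqrt(A0), 0.0, 0.0)   # = 0, on the ring
f_inner = f(R0 - math.sqrt(A0), 0.0, 0.0)   # = 0, on the ring
f_tube_center = f(R0, 0.0, 0.0)             # = -a0 < 0, inside the tube
```

Note that, with $f$ written this way, the tube radius is $\sqrt{a_0}$ rather than $a_0$ itself; the code follows the formula as given.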
[**Acknowledgments:**]{} Research is supported by IDEI, PN-II-ID-PCE-2011-3-0256.
[99.]{}
S. Vacaru, Int. J. Geom. Meth. Mod. Phys. **8** (2011) 9
S. Vacaru, Class. Quant. Grav. **27** (2010) 105003
S. Vacaru, Gener. Relat. Grav. **44** (2012) 1015
S. Vacaru, Int. J. Mod. Phys. D **21** (2012) 1250072
S. Nojiri and S. D. Odintsov, Int. J. Geom. Meth. Mod. Phys. **4** (2007) 115-146
S. Nojiri and S. D. Odintsov, Phys. Rept. **505** (2011) 59
T. Harko, F. S. N. Lobo, S. Nojiri and S. D. Odintsov, Phys. Rev. D ** 84** (2011) 024020
P. Stavrinos and S. Vacaru, accepted to: Class. Quant. Grav. **30** (2013); arXiv: 1206.3998
S. Vacaru, Europhysics Letters (EPL) **96** (2011) 50001
---
author:
- 'G. A. Cruz-Diaz , G. M. Muñoz Caro , Y.-J. Chen , and T.-S. Yih'
date: 'Received - , 0000; Accepted - , 0000'
subtitle: 'I. Absorption cross-sections of polar-ice molecules.'
title: 'Vacuum-UV spectroscopy of interstellar ice analogs.'
---
[The VUV absorption cross sections of most molecular solids present in interstellar ice mantles, with the exception of H$_{2}$O, NH$_{3}$, and CO$_{2}$, have not been reported yet. Models of ice photoprocessing depend on the VUV absorption cross section of the ice to estimate the penetration depth and radiation dose, and in the past, gas phase cross section values were used as an approximation.]{} [We aim to estimate the VUV absorption cross sections of molecular ice components.]{} [Pure ices composed of CO, H$_{2}$O, CH$_{3}$OH, NH$_{3}$, or H$_{2}$S were deposited at 8 K. The column density of the ice samples was measured *in situ* by infrared spectroscopy in transmittance. VUV spectra of the ice samples were collected in the 120-160 nm (10.33-7.74 eV) range using a commercial microwave-discharged hydrogen flow lamp.]{} [We provide VUV absorption cross sections of the reported molecular ices. Our results agree with those previously reported for H$_2$O and NH$_3$ ices. The vacuum-UV absorption cross sections of CH$_3$OH, CO, and H$_2$S in the solid phase are reported for the first time. H$_2$S presents the highest absorption in the 120-160 nm range.]{} [Our method allows fast and readily available VUV spectroscopy of ices without the need to use a synchrotron beamline. We found that the ice absorption cross sections can be very different from the gas-phase values, and therefore our data will significantly improve models that simulate the VUV photoprocessing and photodesorption of ice mantles. Photodesorption rates of pure ices, expressed in molecules per absorbed photon, can be derived from our data.]{}
\[Intro\]
Introduction
============
Ice mantles in dense cloud interiors and cold circumstellar environments are composed mainly of H$_{2}$O and other species such as CO$_2$, CO, CH$_4$, CH$_{3}$OH, and NH$_{3}$ (Mumma & Charnley 2011 and ref. therein). Some species with no permanent or induced dipole moment, such as O$_2$ and N$_2$, cannot be easily observed in the infrared but are also expected to be present in the solid phase (e.g., Ehrenfreund & van Dishoeck 1998). The abundances relative to water of the polar species CO, CH$_{3}$OH, and NH$_{3}$ are between 0-100%, 1-30%, and 2-15%, respectively. In comets the abundances are 0.4-30%, 0.2-7%, 0.2-7%, and $\sim$ 0.12-1.4% for CO, CH$_3$OH, NH$_3$, and H$_2$S (Mumma & Charnley 2011 and ref. therein). We therefore included H$_{2}$S in this study, because this reduced species was detected in comets and is expected to form in ice mantles (Jiménez-Escobar & Muñoz Caro 2011). In the coldest regions where ice mantles form, thermally induced reactions are inhibited. Therefore, irradiation processes by UV-photons or cosmic rays may play an important role in the formation of new species in the ice and contribute to the desorption of ice species to the gas phase. Cosmic rays penetrate deeper into the cloud interior than UV-photons, generating secondary UV-photons by excitation of H$_2$ molecules. This secondary UV-field interacts more intensively with the grain mantles than direct impact by cosmic rays (Cecchi-Pestellini & Aiello 1992, Chen et al. 2004). VUV photons can excite or dissociate molecules, leading to a complex chemistry in the grains. The VUV-region spans wavelengths from 200 nm down to about 100 nm; the term extreme-ultraviolet (EUV) is used for shorter wavelengths.
The estimation of the VUV-absorption cross sections of molecular ice components allows one to calculate the photon absorption of icy grains in that range. In addition, the VUV-absorption spectrum as a function of photon wavelength is required to study photodesorption processes over the full photon emission energy range. Photoabsorption cross-section measurements in the VUV-region were performed for many gas phase molecules. Results for small molecules in the gas phase were summarized by Okabe [@Okabe], but most of these cross-section measurements for molecular bands with fine structure are severely distorted by the instrumental bandwidths (Hudson & Carter 1968). The integrated cross sections are less affected by the instrumental widths and approach the true cross sections as the optical depth approaches zero. Therefore, the true integrated cross section can be obtained from a series of data taken with different column densities (Samson & Ederer 2000).
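The extrapolation to zero optical depth can be sketched as follows (Python; the Beer-Lambert form and the synthetic data points are illustrative assumptions, not measurements from this work):

```python
import math

def apparent_cross_section(I0, I, N):
    """Apparent cross section from the Beer-Lambert law, sigma = ln(I0/I)/N,
    for incident intensity I0, transmitted intensity I, column density N."""
    return math.log(I0 / I) / N

def extrapolate_to_zero_N(Ns, sigmas):
    """Least-squares line through (N, sigma); the intercept at N = 0
    approximates the true cross section as the optical depth vanishes."""
    n = len(Ns)
    mx, my = sum(Ns) / n, sum(sigmas) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(Ns, sigmas))
             / sum((x - mx)**2 for x in Ns))
    return my - slope * mx

# Synthetic series: apparent sigma (in 1e-17 cm^2) decreasing with column
# density N (in 1e17 cm^-2) because of instrumental saturation.
Ns = [1.0, 2.0, 3.0]
sigmas = [0.95, 0.90, 0.85]
sigma_true = extrapolate_to_zero_N(Ns, sigmas)  # intercept at N = 0
```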
The lack of cross-section measurements in solids has led to the assumption that the cross sections of molecules in ice mantles were similar to the gas phase values. In recent years the VUV-absorption spectra of solid H$_{2}$O, CH$_{3}$OH, NH$_{3}$, CO$_2$, O$_2$, N$_2$, and CH$_4$ were reported by Mason et al. [@Mason], Kuo et al. [@Kuo], and Lu et al. [@Lu2]. But the VUV-absorption cross sections were only estimated for solid H$_{2}$O, NH$_{3}$, and CO$_{2}$ (Mason et al. 2006). Furthermore, all previous works have been performed using synchrotron monochromatic light as the VUV-source, scanning the measured spectral range.
In the present work, we aim to provide accurate measurements of the VUV-absorption cross sections of interstellar ice polar components including H$_{2}$O, CH$_{3}$OH, NH$_{3}$, CO, and H$_{2}$S. The use of a hydrogen VUV-lamp, commonly used in ice irradiation experiments, limits the spectroscopy to the emission range between 120 and 160 nm, but this has several advantages: the measurements are easy to perform and can be made regularly in the laboratory, without the need to use synchrotron beam time. A second paper will be dedicated to the nonpolar molecular ice components including CO$_2$, CH$_4$, N$_2$, and O$_2$. In Sect. \[Expe\] the experimental protocol is described. Sect. \[VUV\] provides the determination of VUV-absorption cross sections for the different ice samples. The astrophysical implications are presented in Sect. \[Astro\]. The conclusions are summarized in Sect. \[Conclusions\].
Experimental protocol {#Expe}
=====================
The experiments were performed using the interstellar astrochemistry chamber (ISAC), see Fig. \[setup\]. This set-up and the standard experimental protocol were described in Muñoz Caro et al. [@Caro1]. ISAC mainly consists of an ultra-high-vacuum (UHV) chamber, with pressure typically in the range P = 3-4.0 $\times$ 10$^{-11}$ mbar, where an ice layer made by deposition of a gas species onto a cold finger at 8 K, achieved by means of a closed-cycle helium cryostat, can be UV-irradiated. The evolution of the solid sample was monitored with *in situ* transmittance FTIR spectroscopy and VUV-spectroscopy. The chemical components used for the experiments described in this paper were H$_2$O(liquid), triply distilled; CH$_3$OH(liquid), Panreac Química S. A. 99.9%; CO(gas), Praxair 99.998%; NH$_3$(gas), Praxair 99.999%; and H$_{2}$S(gas), Praxair 99.8%. The deposited ice layer was photoprocessed with an F-type microwave-discharged hydrogen flow lamp (MDHL), from Opthos Instruments. The source has a VUV-flux of $\approx$ 2 $\times$ 10$^{14}$ cm$^{-2}$ s$^{-1}$ at the sample position, measured by CO$_{2}$ $\to$ CO actinometry, see Muñoz Caro et al. [@Caro1]. The Evenson cavity of the lamp is refrigerated with air. The VUV-spectrum is measured routinely [*in situ*]{} during the irradiation experiments with the use of a McPherson 0.2-meter focal length VUV monochromator (model 234/302) with a photomultiplier tube (PMT) detector equipped with a sodium salicylate window, optimized to operate from 100-500 nm (11.27-2.47 eV), with a resolution of 0.4 nm. The characterization of the MDHL spectrum has been studied before by Chen et al. [@Asper1] and will be discussed in more detail by Chen et al. [@Asper2].
![Scheme of the main chamber of ISAC. The VUV-light intersects three MgF$_{2}$ windows before it enters the UV-spectrometer, but the emission spectrum that the ice experiences corresponds to only one MgF$_{2}$ window absorption, the one between the VUV-lamp and the ISAC main chamber. FTIR denotes the source and the detector used to perform infrared spectroscopy. QMS is the quadrupole mass spectrometer used to detect gas-phase species. PMT is the photomultiplier tube that makes the ultraviolet spectroscopy possible.[]{data-label="setup"}](F1.eps){width="8.5cm"}
The interface between the MDHL and the vacuum chamber is a MgF$_{2}$ window. The monochromator is located at the rear end of the chamber, separated by another MgF$_{2}$ window. This means that the measured background spectra, that is, without an ice sample intersecting the VUV-light cone, are the result of the radiation that intersects two MgF$_{2}$ windows.
Grating corrections were made for the VUV-spectra in the 110-180 nm (11.27-6.88 eV) range. The mean photon energy was calculated for the spectrum corresponding to only one MgF$_{2}$ window intersecting the VUV-lamp emission, since that is the spectrum the ice sample experiences. This one-window spectrum, displayed in Fig. \[Uno\], was measured by directly coupling the VUV-lamp to the spectrometer with a MgF$_{2}$ window acting as the interface. The proportion of Ly-$\alpha$ in the 110-180 nm range is 5.8%, lower than the 8.4% value estimated by Chen et al. [@Asper2]. This difference could be due to the lower transmittance of the MgF$_{2}$ window used as interface between the MDHL and the UHV-chamber, although the different position of the pressure gauge measuring the H$_{2}$ flow in the MDHL may also play a role.
![UV-photon flux as a function of wavelength of the MDHL in the 110 to 170 nm range estimated with the total photon flux calculated using actinometry. The spectrum corresponds to a measurement with one MgF$_{2}$ window intersecting the emitted VUV-light cone. This spectrum is the one experienced by the ice sample. The VUV-emission is dominated by the Ly-$\alpha$ peak (121.6 nm) and the Lyman band system.[]{data-label="Uno"}](F2.ps){width="\columnwidth"}
It was observed that most of the energy emitted by the VUV-lamp lies below 183 nm (6.77 eV) and the MgF$_{2}$ window cutoff occurs at 114 nm (10.87 eV). The mean photon energy in the 114-180 nm (10.87-6.88 eV) range is E$_{photon} = 8.6$ eV. The main emission bands are Ly-$\alpha$ at 121.6 nm (10.20 eV) and the molecular hydrogen bands centered on 157.8 nm (7.85 eV) and 160.8 nm (7.71 eV) for a hydrogen pressure of 0.4 mbar, see Fig. \[Uno\].
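The quoted mean can be cross-checked by computing the photon-flux-weighted mean energy of any discretized spectrum. The sketch below is illustrative only: the wavelength grid and the toy flux (a Ly-$\alpha$ peak plus two H$_{2}$ bands) are assumptions, not the measured lamp spectrum.

```python
import numpy as np

# Toy discretization of a one-window MDHL-like spectrum (assumed shapes):
# Ly-alpha at 121.6 nm plus two molecular hydrogen bands.
wavelength_nm = np.linspace(114.0, 180.0, 661)  # MgF2 cutoff to 180 nm
flux = (1.0 * np.exp(-0.5 * ((wavelength_nm - 121.6) / 1.0) ** 2)    # Ly-alpha
        + 0.8 * np.exp(-0.5 * ((wavelength_nm - 157.8) / 2.0) ** 2)  # H2 band
        + 0.5 * np.exp(-0.5 * ((wavelength_nm - 160.8) / 2.0) ** 2))  # H2 band

hc_eV_nm = 1239.84                        # h*c in eV nm
photon_energy = hc_eV_nm / wavelength_nm  # E = hc / lambda

# Photon-flux-weighted mean energy on a uniform grid
e_mean = np.sum(flux * photon_energy) / np.sum(flux)
```

With the measured spectrum in place of the toy flux, `e_mean` reproduces the E$_{photon}$ = 8.6 eV value quoted above.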
VUV-absorption cross section of interstellar polar ice analogs {#VUV}
==============================================================
We recorded VUV-absorption spectra of pure ices composed of CO, H$_{2}$O, CH$_{3}$OH, NH$_{3}$, and H$_{2}$S. For each ice spectrum a series of three measurements was performed: i) the emission spectrum of the VUV-lamp was measured to monitor the intensity of the main emission bands, ii) the emission spectrum transmitted by the MgF$_{2}$ substrate window was measured to monitor its transmittance, and iii) the emission spectrum transmitted by the substrate window with the deposited ice on top was measured. The absorption spectrum of the ice corresponds to the spectrum of the substrate with the ice after subtracting the bare MgF$_2$ substrate spectrum. In addition, the column density of the ice sample was measured by FTIR in transmittance. The VUV-spectrum and the column density of the ice were therefore monitored in a single experiment for the same ice sample. This improvement allowed us to estimate the VUV-absorption cross section of the ice more accurately. The column density of the deposited ice obtained by FTIR was calculated according to the formula $$\hspace{3cm}
N= \frac{1}{\mathcal{A}} \int_{band} \tau_{\nu}d{\nu},
\label{1}$$ where $N$ is the column density of the ice, $\tau_{\nu}$ the optical depth of the band, $d\nu$ the wavenumber differential, in cm$^{-1}$, and $\mathcal{A}$ is the band strength in cm molecule$^{-1}$. The integrated absorbance is equal to 0.43 $\times$ $\tau$, where $\tau$ is the integrated optical depth of the band. The VUV-absorption cross section was estimated according to the Beer-Lambert law, $$\begin{aligned}
\hspace{3cm}
I_t(\lambda) &=& I_0(\lambda) {e}^{-\sigma(\lambda) N} \nonumber \\
\sigma(\lambda) &=& - \frac{1}{N} \ln \left( \frac{I_t(\lambda)}{I_0(\lambda)} \right) \nonumber \\
\text{with} \quad N &\approx& \frac{N_i + N_f}{2}
\label{2}\end{aligned}$$ where I$_{t}(\lambda)$ is the transmitted intensity at a given wavelength $\lambda$, I$_{0}(\lambda)$ the incident intensity, $N$ the average of the ice column densities before ($N_i$) and after ($N_f$) irradiation in cm$^{-2}$, and $\sigma$ the cross section in cm$^{2}$. Note that the total VUV-flux emitted by the lamp does not affect the VUV-absorption spectrum of the ice sample, since the absorbance in the VUV is obtained by subtracting two spectra.
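In practice, Eq. \[2\] amounts to a ratio of two measured intensities and the averaged column density. A minimal numerical sketch, with illustrative values only (`I0`, `It`, and the column densities are assumptions, not measured data):

```python
import numpy as np

def vuv_cross_section(I0, It, N_initial, N_final):
    """Beer-Lambert inversion, Eq. (2): sigma = -(1/N) ln(It/I0), with N
    the average of the column densities before and after irradiation."""
    N = 0.5 * (N_initial + N_final)                        # molecules cm^-2
    return -np.log(np.asarray(It, dtype=float) / I0) / N   # cm^2

# Illustrative numbers: a ~2e17 cm^-2 ice transmitting 40% of the
# incident VUV intensity at some wavelength.
sigma = vuv_cross_section(I0=1.0, It=0.4, N_initial=2.1e17, N_final=1.9e17)
```

Applied to full spectra, `It` becomes the array of transmitted intensities across the wavelength grid, yielding $\sigma(\lambda)$ directly.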
A priori, the VUV-absorption cross section of the ice was not known. Therefore, several measurements were performed for different values of the ice column density to improve the spectroscopy. Table \[table1\] provides the infrared band position and the band strength used to obtain the column density of each ice component, along with the molecular dipole moment, since the latter affects the intermolecular forces operating in the ice, which distinguish solid-phase from gas-phase spectroscopy in both the IR and VUV ranges. The gas-phase cross sections published in the literature are also plotted for comparison. We note that most of the gas-phase cross sections were measured at room temperature. The gas-phase spectrum can vary significantly when the gas sample is cooled to cryogenic temperatures, but it will still differ from the solid-phase spectrum, see Yoshino et al. [@Yoshino] and Cheng et al. [@Cheng2].
------------ --------------- ------------------------------------- ----------------------------------------- -------
Species Position $\mathcal{A}$ $N$ $\mu$
             \[cm$^{-1}$\]   \[cm/molec.\]                         \[$\times$ 10$^{15}$ molec./cm$^{2}$\]    \[D\]
CO 2139 1.1 $\pm$ 0.1 $\times10^{-17 \; a}$ 207$^{+16}_{-16}$ 0.11
H$_{2}$O 3259 1.7 $\pm$ 0.2 $\times10^{-16 \; b}$ 91$^{+7}_{-7}$ 1.85
CH$_{3}$OH 1026 1.8 $\pm$ 0.1 $\times10^{-17 \; c}$ 62$^{+5}_{-9}$ 1.66
NH$_{3}$ 1067 1.7 $\pm$ 0.1 $\times10^{-17 \; c}$ 67$^{+5}_{-8}$ 1.47
H$_{2}$S 2544 2.0 $\pm$ 0.2 $\times10^{-17 \; d}$ 37$^{+3}_{-9}$ 0.95
------------ --------------- ------------------------------------- ----------------------------------------- -------
: Infrared-band positions and strengths ($\mathcal{A}$), deposited column density ($N$, in 10$^{15}$ molec./cm$^{2}$), and molecular dipole moment ($\mu$) of the samples used in this work. The error estimate for the column density combines the band-strength uncertainty, the thickness loss due to photodesorption, and the MDHL, PMT, and multimeter stabilities.
\
$^a$Jiang et al. 1975, $^b$calculated for this work, $^c$d’Hendecourt & Allamandola 1986, $^d$Jiménez-Escobar & Muñoz Caro 2011.\
\[table1\]
Performing VUV-spectroscopy inevitably exposes the sample to irradiation, which often causes some photodestruction or photodesorption of the ice molecules during spectral acquisition (e.g., d’Hendecourt et al. 1985, 1986; Bernstein et al. 1995; Gerakines et al. 1995, 1996; Schutte 1996; Sandford 1996; Muñoz Caro et al. 2003, 2010; Dartois 2009; Öberg et al. 2009b,c,d; Fillion et al. 2012; Fayolle et al. 2011, 2013, and ref. therein). Photoproduction of new compounds and the decrease of the starting ice column density were monitored by IR spectroscopy. Photodesorption was weak for the reported species except for CO, in line with the previous works mentioned above; it also contributes to the decrease of the ice column density during irradiation. Photoproduct formation differs for each molecule. No photoproducts were detected after the 9 min VUV exposure required for VUV-spectroscopy, except for CH$_3$OH ice, where product formation reached 2%, 6%, 7%, and 8% for CO$_2$, CH$_4$, CO, and H$_2$CO relative to the initial CH$_3$OH column density estimated by IR spectroscopy. Even so, the VUV-absorption spectrum of CH$_3$OH was not significantly affected, cf. Kuo et al. [@Kuo]. Indeed, we show below that the discrete bands of the CO photoproduced in the CH$_3$OH ice matrix, despite its intrinsic VUV absorption being higher than that of methanol, are absent from the VUV-absorption spectrum.
Error values for the column density in Table \[table1\] result mainly from the selection of the baseline for integration of the IR absorption band and from the decrease of the ice column density due to VUV-irradiation during spectral acquisition. The band strengths were adapted from the literature, and their error estimates are no more than 10% of the reported values (Richey & Gerakines 2012). The solid H$_2$O band strength in Table \[table1\] was estimated by us using interference fringes in the infrared spectrum for a density of 0.94 g cm$^{-3}$ and a refractive index of 1.3. This estimation gives a value of 1.7 $\pm$ 0.2 $\times10^{-16}$ cm molec$^{-1}$, different from the Hagen et al. (1985) value (2 $\times10^{-16}$ cm molec$^{-1}$). The errors in the column density determined by IR spectroscopy were 16%, 15%, 23%, 19%, and 32% for solid CO, H$_2$O, CH$_3$OH, NH$_3$, and H$_2$S, respectively. The MDHL, photomultiplier tube (PMT), and multimeter stabilities lead to an estimated error of about 6% in the values of the VUV-absorption cross sections of the ices. The largest error corresponds to H$_2$S ice, the most VUV photoactive molecule studied in this work: it suffers strong photodestruction (Jiménez-Escobar & Muñoz Caro 2011), which leads to a fast decrease of its IR feature. The VUV-absorption cross-section errors therefore follow from the individual error values estimated above, combined using the expression
$$\delta(N) = \sqrt{\frac{\delta_i^2 + \delta_j^2 + \delta_k^2 + ... + \delta_n^2}{n-1}} .$$
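Applied to illustrative error contributions (hypothetical fractions, not the tabulated values), the expression above can be evaluated as:

```python
import math

def combined_error(contributions):
    """Combine individual error contributions delta_i following the
    expression above: delta = sqrt(sum(delta_i^2) / (n - 1))."""
    n = len(contributions)
    return math.sqrt(sum(d * d for d in contributions) / (n - 1))

# Hypothetical relative-error contributions (fractions): band strength,
# photodesorption thickness loss, and lamp/PMT/multimeter stability.
err = combined_error([0.10, 0.08, 0.06])
```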
The VUV-absorption cross-section spectra of CO, H$_2$O, CH$_3$OH, and NH$_3$ ices were fitted with Gaussian profiles using the band positions reported in the literature (Lu et al. 2005, 2008; Kuo et al. 2007) as a starting point, see the red traces in Figs. \[CO\], \[H2O\], \[CH3OH\], and \[NH3\]. Table \[tableGauss\] summarizes the Gaussian profile parameters used to fit the spectra of these ices, deposited at 8 K. H$_{2}$S ice displays only a slightly decreasing absorption at longer wavelengths, and Gaussian deconvolution is thus not pertinent. Gaussian fits of the reported molecules were made with an in-house IDL code. The fits reproduce the VUV-absorption cross-section spectra well.
------------ -------- -------- -------------------------------------
Molecule Center FWHM Area
\[nm\] \[nm\] \[$\times$ 10$^{-17}$ cm$^{2}$ nm\]
H$_{2}$O 120.0 22.1 12.0
143.6 17.4 10.8
154.8 8.0 0.9
CO 127.0 1.27 0.05
128.9 1.27 0.07
130.9 1.32 0.14
132.8 1.27 0.24
135.2 1.46 0.58
137.6 1.41 0.95
140.0 1.62 1.61
142.8 1.65 1.98
145.8 2.17 3.16
148.8 2.24 3.45
149.7 0.92 0.50
152.0 2.12 3.09
153.3 1.34 1.53
156.3 2.24 3.10
157.0 1.18 0.56
CH$_{3}$OH 118.0 30.6 27.7
146.6 28.3 16.8
NH$_{3}$ 121.2 34.1 33.8
166.6 31.8 14.2
------------ -------- -------- -------------------------------------
: Parameters of the Gaussian profiles used to fit the spectra of the different pure molecular ices deposited at 8 K.
\[tableGauss\]
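The Gaussian decomposition described above can be sketched as follows. This is a minimal NumPy/SciPy illustration of the fitting procedure, not the in-house IDL code; the synthetic spectrum is built from the two CH$_{3}$OH components tabulated above (centers 118.0 and 146.6 nm), so the fit simply recovers those parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, area, center, fwhm):
    """Gaussian parameterized by integrated area, center, and FWHM,
    matching the quantities listed in the table."""
    sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return area / (sig * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((x - center) / sig) ** 2)

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Synthetic spectrum from the two tabulated CH3OH components; in the real
# analysis x and y are the measured wavelength grid and cross-section spectrum.
x = np.linspace(115.0, 175.0, 400)
y = two_gaussians(x, 27.7, 118.0, 30.6, 16.8, 146.6, 28.3)

popt, _ = curve_fit(two_gaussians, x, y, p0=[25.0, 120.0, 25.0, 15.0, 145.0, 25.0])
```

For a real spectrum, the literature band positions serve as the starting guess `p0`, as done in the text.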
Solid carbon monoxide
---------------------
The ground state of CO is X$^{1}\Sigma^{+}$ and its bond energy is E$_{b}$(C–O) = 11.09 eV (Okabe 1978).
Fig. \[CO\] displays the CO fourth positive band system; it is attributed to the A$^{1}\Pi \leftarrow$ X$^{1}\Sigma^{+}$ system. Table \[TableCO\] summarizes the transition, band position, and area of each feature at 8 K. We were able to observe twelve bands identified as (0,0) to (11,0) where the (0,0), (1,0), and (2,0) bands present a Davydov splitting, in agreement with Mason et al. [@Mason], Lu et al. [@Lu], and Brith & Schnepp [@Brith]. Like Mason et al. [@Mason], we were unable to observe the (12,0) transition reported by Brith & Schnepp [@Brith] and Lu et al. [@Lu], because of the low intensity of this feature. We detected part of the transitions to the excited Rydberg states, B$^{1}\Sigma^{+}$, C$^{1}\Sigma^{+}$, and E$^{1}\Pi$ measured by Lu et al. [@Lu] as a broad band in the 116-121 nm (10.68-10.25 eV) region, despite the decreasing VUV-flux in this region.
The average VUV-absorption cross section has a value of 4.7$^{+0.4}_{-0.4}$ $\times$ 10$^{-18}$ cm$^{2}$. This value is not very different from the one roughly estimated by Muñoz Caro et al. [@Caro1], 3.8 $\times$ 10$^{-18}$ cm$^{2}$ in the 115-170 nm range based on Lu et al. [@Lu]. The total integrated VUV-absorption cross section has a value of 1.5$^{+0.1}_{-0.1}$ $\times$ 10$^{-16}$ cm$^{2}$ nm (7.9$^{+0.6}_{-0.6}$ $\times$ 10$^{-18}$ cm$^{2}$ eV) in the 116-163 nm (10.68-7.60 eV) spectral region. The VUV-absorption of CO ice at 121.6 nm is very low, an upper limit of $\leq$ 1.1$^{+0.8}_{-0.8}$ $\times$ 10$^{-19}$ cm$^{2}$ was estimated, while at 157.8 and 160.8 nm the VUV-absorption cross sections are 6.6$^{+0.5}_{-0.5}$ $\times$ 10$^{-18}$ cm$^{2}$ and 0.9$^{+0.1}_{-0.1}$ $\times$ 10$^{-18}$ cm$^{2}$. The VUV-absorption spectrum of solid CO presents a maximum at 153.0 nm (8.10 eV) with a value of 1.6$^{+0.1}_{-0.1}$ $\times$ 10$^{-17}$ cm$^{2}$. Table \[TableCOgas\] summarizes the transition, band position, and area of each feature present in the CO gas-phase spectrum (blue trace in Fig. \[CO\]) adapted from Lee & Guest [@Lee]. The VUV-absorption cross section of CO in the gas phase has an average value of 20.7 $\times$ 10$^{-18}$ cm$^{2}$. Based on Lee & Guest [@Lee], the estimated total integrated VUV-absorption cross section, 2.3 $\times$ 10$^{-16}$ cm$^{2}$ nm, is larger than the CO ice value, 1.5 $\pm$ 0.1 $\times$ 10$^{-16}$ cm$^{2}$ nm. Similar to solid CO, the VUV-absorption cross section of CO gas is very low at 121.6 nm. There are no data available at 157.8 and 160.8 nm, but the absorption is expected to be very low or zero because the first transition ($\nu_0$) is centered on 154.5 nm.
![VUV-absorption cross section as a function of photon wavelength (bottom X-axis) and VUV-photon energy (top X-axis) of pure CO ice deposited at 8 K, black solid trace. The blue dashed trace is the VUV-absorption cross-section spectrum of CO gas (divided by a factor of 40 to compare it with CO ice) adapted from Lee & Guest [@Lee]. The fit, red dashed-dotted trace, is the sum of 15 Gaussians, dotted trace. It has been vertically offset for clarity. []{data-label="CO"}](F3.ps){width="\columnwidth"}
----------------- ------- ------ --------------------- ---------------------
($\nu$’,$\nu$”) nm eV cm$^{2}$nm cm$^{2}$eV
11,0 127.0 9.76 8.6$\times10^{-20}$ 6.6$\times10^{-21}$
10,0 128.8 9.62 1.6$\times10^{-19}$ 1.2$\times10^{-20}$
9,0 131.0 9.46 4.0$\times10^{-19}$ 2.9$\times10^{-20}$
8,0 132.8 9.33 7.0$\times10^{-19}$ 4.9$\times10^{-20}$
7,0 135.2 9.17 2.6$\times10^{-18}$ 1.8$\times10^{-19}$
6,0 137.6 9.01 4.3$\times10^{-18}$ 2.8$\times10^{-19}$
5,0 140.0 8.85 7.1$\times10^{-18}$ 4.5$\times10^{-19}$
4,0 142.8 8.68 7.6$\times10^{-18}$ 4.6$\times10^{-19}$
3,0 146.0 8.49 1.1$\times10^{-17}$ 6.5$\times10^{-19}$
2,0 149.4 8.29 1.5$\times10^{-17}$ 8.4$\times10^{-19}$
1,0 153.0 8.10 2.5$\times10^{-17}$ 1.3$\times10^{-18}$
0,0 156.6 7.91 2.6$\times10^{-17}$ 1.3$\times10^{-18}$
----------------- ------- ------ --------------------- ---------------------
: Transitions observed in the VUV-absorption cross-section spectrum of pure CO ice, deposited at 8 K, in the 115-170 nm range. The peak positions agree with previous works, see Mason et al. [@Mason], Lu et al. [@Lu], and Brith & Schnepp [@Brith].
\
Error margins corresponding to the values in the second and third column are $\pm$ 0.4 nm and $\pm$ 0.03 eV. \[TableCO\]
----------------- ------- ------- --------------------- ---------------------
($\nu$’,$\nu$”) nm eV cm$^{2}$nm cm$^{2}$eV
14,0 121.6 10.19 1.5$\times10^{-19}$ 1.2$\times10^{-20}$
13,0 123.1 10.07 2.3$\times10^{-19}$ 1.9$\times10^{-20}$
12,0 124.7 9.93 6.6$\times10^{-19}$ 5.3$\times10^{-20}$
11,0 126.5 9.79 7.5$\times10^{-19}$ 5.8$\times10^{-20}$
10,0 128.4 9.65 1.3$\times10^{-18}$ 9.7$\times10^{-20}$
9,0 130.3 9.51 2.4$\times10^{-18}$ 1.8$\times10^{-19}$
8,0 132.3 9.36 6.5$\times10^{-18}$ 4.4$\times10^{-19}$
7,0 134.7 9.20 6.8$\times10^{-18}$ 4.8$\times10^{-19}$
6,0 136.9 9.05 1.2$\times10^{-17}$ 8.2$\times10^{-19}$
5,0 139.4 8.89 2.1$\times10^{-17}$ 1.3$\times10^{-18}$
4,0 142.1 8.72 2.6$\times10^{-17}$ 1.7$\times10^{-18}$
3,0 144.9 8.55 2.7$\times10^{-17}$ 1.6$\times10^{-18}$
2,0 147.9 8.38 3.3$\times10^{-17}$ 1.9$\times10^{-18}$
1,0 151.1 8.20 3.0$\times10^{-17}$ 1.6$\times10^{-18}$
0,0 154.4 8.03 2.3$\times10^{-17}$ 1.2$\times10^{-18}$
----------------- ------- ------- --------------------- ---------------------
: Transitions observed in the VUV-absorption cross-section spectrum of pure CO gas based on data adapted from Lee & Guest [@Lee].
\[TableCOgas\]
Solid water
-----------
The ground state and bond energy of H$_{2}$O are X$^{1}$A$_{1}$ and E$_{b}$(H–OH) = 5.1 eV (Okabe 1978).
The VUV-absorption cross-section spectrum of H$_{2}$O ice is displayed in Fig. \[H2O\]. The spectral profile is similar to those reported by Lu et al. [@Lu2] and Mason et al. [@Mason]. The band between 132-163 nm is centered on 142 nm (8.73 eV), in agreement with Lu et al. [@Lu2] and Mason et al. [@Mason]. This band is attributed to the 4a$_{1}$:Ã$^{1}$B$_{1}
\leftarrow $ 1b$_{1}$:X$^{1}$A$_{1}$ transition. The VUV-absorption cross section reaches a value of 6.0$^{+0.4}_{-0.4}$ $\times$ 10$^{-18}$ cm$^{2}$ at this peak, which coincides with the spectrum displayed in Fig. 3 of Mason et al. [@Mason]. The H$_{2}$O ice spectrum of Lu et al. [@Lu2] presents a local minimum at 132 nm, also visible in Fig. \[H2O\]. This minimum is not observed in the Mason et al. [@Mason] data, most likely because of the larger scale of their plot. The portion of the band in the 120-132 nm range is attributed to the B$^{1}$A$_{1}$ $\leftarrow$ X$^{1}$A$_{1}$ transition, according to Lu et al. [@Lu2].
The average and the total integrated VUV-absorption cross sections of H$_2$O ice are 3.4$^{+0.2}_{-0.2}$ $\times$ 10$^{-18}$ cm$^{2}$ and 1.8$^{+0.1}_{-0.1}$ $\times$ 10$^{-16}$ cm$^{2}$ nm (1.2$^{+0.1}_{-0.1}$ $\times$ 10$^{-17}$ cm$^{2}$ eV) in the 120-165 nm (10.33-7.51 eV) spectral region. The VUV-absorption cross sections of H$_{2}$O ice at 121.6, 157.8, and 160.8 nm are 5.2$^{+0.4}_{-0.4}$ $\times$ 10$^{-18}$ cm$^{2}$, 1.7$^{+0.1}_{-0.1}$ $\times$ 10$^{-18}$ cm$^{2}$, and 0.7$^{+0.05}_{-0.05}$ $\times$ 10$^{-18}$ cm$^{2}$. Gas-phase data from Mota et al. [@Mota] were adapted for comparison with our solid-phase data, see Fig. \[H2O\]. Gas and ice data were previously compared by Mason et al. [@Mason]. The VUV-absorption cross section of H$_{2}$O in the gas phase has an average value of 3.1 $\times$ 10$^{-18}$ cm$^{2}$. H$_{2}$O gas data were integrated in the 120-182 nm range, giving a value for the VUV-absorption cross section of 2.3 $\times$ 10$^{-16}$ cm$^{2}$ nm (1.4 $\times$ 10$^{-17}$ cm$^{2}$ eV), which is higher than the VUV-absorption cross section of solid H$_{2}$O. The VUV-absorption cross sections of H$_{2}$O gas at 121.6, 157.8, and 160.8 nm are 13.6 $\times$ 10$^{-18}$ cm$^{2}$, 3.2 $\times$ 10$^{-18}$ cm$^{2}$, and 4.1 $\times$ 10$^{-18}$ cm$^{2}$, which is also higher than the solid-phase measurements.
![VUV-absorption cross section as a function of photon wavelength (bottom X-axis) and VUV-photon energy (top X-axis) of pure H$_{2}$O ice deposited at 8 K, black solid trace. The blue dashed trace is the VUV-absorption cross-section spectrum of gas phase H$_{2}$O taken from Mota et al. [@Mota]. The fit, red dashed-dotted trace, is the sum of three Gaussians, dotted trace. It has been vertically offset for clarity.[]{data-label="H2O"}](F4.ps){width="\columnwidth"}
Solid methanol
--------------
The ground state and bond energy of CH$_{3}$OH are X$^{1}$A’ and E$_{b}$(H–CH$_{2}$OH) = 4.0 eV (Darwent 1970).
Fig. \[CH3OH\] shows the VUV-absorption cross section of CH$_{3}$OH as a function of wavelength and photon energy. The spectral profile is similar to the one reported by Kuo et al. [@Kuo]: a continuum decreasing toward longer wavelengths without a distinct local maximum. These authors found three possible broad bands centered on 147 nm (8.43 eV), 118 nm (10.50 eV), and 106 nm (11.68 eV), the latter being beyond our spectral range. We observed the 147 nm peak (associated with the 2$^{1}$A” $\leftarrow$ X$^{1}$A’ molecular transition) as well as part of the 118 nm band (corresponding to the 3$^{1}$A” $\leftarrow$ X$^{1}$A’ molecular transition), but because of the decreasing VUV-flux below 120 nm it was not possible to confirm the exact position of the latter peak.
The average and the total integrated VUV-absorption cross sections of solid CH$_3$OH are 4.4$^{+0.4}_{-0.7}$ $\times$ 10$^{-18}$ cm$^{2}$ and 2.7$^{+0.2}_{-0.4}$ $\times$ 10$^{-16}$ cm$^{2}$ nm (1.8$^{+0.1}_{-0.3}$ $\times$ 10$^{-17}$ cm$^{2}$ eV) in the 120-173 nm (10.33-7.16 eV) spectral region. The VUV-absorption cross sections of CH$_{3}$OH ice at 121.6, 157.8, and 160.8 nm are 8.6$^{+0.7}_{-1.3}$ $\times$ 10$^{-18}$ cm$^{2}$, 3.8$^{+0.3}_{-0.6}$ $\times$ 10$^{-18}$ cm$^{2}$, and 2.9$^{+0.2}_{-0.4}$ $\times$ 10$^{-18}$ cm$^{2}$. Gas-phase data from Nee et al. [@Nee] were used for comparison with our solid-phase spectrum, see Fig. \[CH3OH\]. Pure gas and solid CH$_3$OH, and CH$_{3}$OH in Ar or Kr matrices, were compared by Kuo et al. [@Kuo]. CH$_{3}$OH gas shows a vibrational peak profile throughout the 120-173 nm range, which is absent in CH$_{3}$OH ice. The VUV-absorption cross section of CH$_{3}$OH in the gas phase has an average value of 8.9 $\times$ 10$^{-18}$ cm$^{2}$. CH$_{3}$OH gas data were integrated in the 120-173 nm range, giving a value for the VUV-absorption cross section of 3.7 $\times$ 10$^{-16}$ cm$^{2}$ nm (2.5 $\times$ 10$^{-17}$ cm$^{2}$ eV), which is higher than the value of 2.7$^{+0.2}_{-0.4}$ $\times$ 10$^{-16}$ cm$^{2}$ nm for solid CH$_{3}$OH. The VUV-absorption cross sections of CH$_{3}$OH gas at 121.6, 157.8, and 160.8 nm are 13.2 $\times$ 10$^{-18}$ cm$^{2}$, 9.9 $\times$ 10$^{-18}$ cm$^{2}$, and 8.6 $\times$ 10$^{-18}$ cm$^{2}$, which are also higher than the ice-phase measurements.
![VUV-absorption cross section as a function of photon wavelength (bottom X-axis) and VUV-photon energy (top X-axis) of pure CH$_{3}$OH ice deposited at 8 K, black solid trace. The blue dashed trace is the VUV-absorption cross-section spectrum of gas phase CH$_{3}$OH adapted from Nee et al. [@Nee]. The fit, red dashed-dotted trace, is the sum of two Gaussians, dotted trace. It has been vertically offset for clarity. []{data-label="CH3OH"}](F5.ps){width="\columnwidth"}
Solid ammonia
-------------
The ground state of NH$_{3}$ is X$^{1}$A$_{1}$ and its bond energy is E$_{b}$(H–NH$_{2}$) = 4.4 eV (Okabe 1978).
Fig. \[NH3\] displays the VUV-absorption cross section of NH$_{3}$ as a function of photon wavelength and photon energy. It presents a continuum with two broad absorption bands between 120-151 nm (10.33-8.21 eV) and 151-163 nm (8.21-7.60 eV), without narrow bands associated with vibrational structure. Mason et al. [@Mason] found a broad band centered on 177 nm (7.00 eV) in the 146-225 nm (5.50-8.50 eV) spectral range. Lu et al. [@Lu2] observed the same feature centered on 179 nm (6.94 eV). We observed only a portion of this feature because of the low VUV-flux of the MDHL in the 163-180 nm (7.60-6.88 eV) spectral range. This band is associated with the Ã$^{1}$A$_{2}$” $\leftarrow$ X$^{1}$A$_{1}$ molecular transition. The minimum we observed at 151 nm (8.21 eV) coincides with the above-cited works. At this minimum, the VUV-absorption cross section reaches a value of 3.3 $\pm$ 0.2 $\times$ 10$^{-18}$ cm$^{2}$, higher than the 1.9 $\times$ 10$^{-18}$ cm$^{2}$ value reported in Fig. 10 of Mason et al. [@Mason]. We also obtained a higher value at 128.4 nm (9.65 eV), 8.1 $\pm$ 0.5 $\times$ 10$^{-18}$ cm$^{2}$ compared with 3.8 $\times$ 10$^{-18}$ cm$^{2}$ by Mason et al. [@Mason]. This difference is most likely due to the different methods employed to estimate the ice column density: infrared spectroscopy or laser interferometry. In addition, a broad band centered on 121.2 nm (10.23 eV) was observed, in agreement with Lu et al. [@Lu2], probably associated with the B $\leftarrow$ X molecular transition.
The average and the total integrated VUV-absorption cross sections of solid NH$_3$ are 4.0$^{+0.3}_{-0.5}$ $\times$ 10$^{-18}$ cm$^{2}$ and 2.2$^{+0.2}_{-0.3}$ $\times$ 10$^{-16}$ cm$^{2}$ nm (1.5$^{+0.1}_{-0.2}$ $\times$ 10$^{-17}$ cm$^{2}$ eV) in the 120-161 nm (10.33-7.70 eV) spectral region. The VUV-absorption cross sections of NH$_{3}$ ice at 121.6, 157.8, and 160.8 nm are 9.1$^{+0.7}_{-1.1}$ $\times$ 10$^{-18}$ cm$^{2}$, 3.8$^{+0.3}_{-0.5}$ $\times$ 10$^{-18}$ cm$^{2}$, and 4.1$^{+0.3}_{-0.5}$ $\times$ 10$^{-18}$ cm$^{2}$. Gas-phase data from Cheng et al. [@Cheng] and Wu et al. [@Wu2] were adapted, see Fig. \[NH3\]. At least qualitatively, our result is compatible with Mason et al. [@Mason]. The VUV-absorption cross section of NH$_{3}$ in the gas phase has an average value of 6.1 $\times$ 10$^{-18}$ cm$^{2}$. NH$_{3}$ gas data were integrated in the 120-161 nm range, giving a value of 2.5 $\times$ 10$^{-16}$ cm$^{2}$ nm (1.8 $\times$ 10$^{-17}$ cm$^{2}$ eV), slightly higher than the VUV-absorption cross section of solid NH$_{3}$ (2.2$^{+0.2}_{-0.3}$ $\times$ 10$^{-16}$ cm$^{2}$ nm). The VUV-absorption cross sections of NH$_{3}$ gas at 121.6, 157.8, and 160.8 nm are 9.8 $\times$ 10$^{-18}$ cm$^{2}$, 0.1 $\times$ 10$^{-18}$ cm$^{2}$, and 0.3 $\times$ 10$^{-18}$ cm$^{2}$; these values are lower than the ice-phase measurements, except at the Ly-$\alpha$ photon wavelength (121.6 nm).
![VUV-absorption cross section as a function of photon wavelength (bottom X-axis) and VUV-photon energy (top X-axis) of pure NH$_{3}$ ice deposited at 8 K, black solid trace. The blue dashed trace is the VUV-absorption cross-section spectrum of gas phase NH$_{3}$ adapted from Cheng et al. [@Cheng] and Wu et al. [@Wu2]. The fit, red dashed-dotted trace, is the sum of two Gaussians, dotted trace. It has been vertically offset for clarity.[]{data-label="NH3"}](F6.ps){width="\columnwidth"}
Solid hydrogen sulfide
----------------------
The ground state and bond energy of H$_{2}$S are X$^{1}$A$_{1}$ and E$_{b}$(H–SH) = 3.91 eV (Okabe 1978).
Fig. \[H2S\] shows the VUV-absorption cross section of H$_{2}$S as a function of the wavelength and photon energy. Owing to the high VUV-absorption of solid H$_{2}$S, very thin ice samples were deposited to obtain a proper VUV-spectrum. H$_{2}$S gas has a nearly constant VUV-absorption cross section in the 120-150 nm (10.33-8.26 eV) range, between 150-180 nm (8.26-6.88 eV) it decreases, and in the 180-250 nm (6.88-4.95 eV) region it presents a maximum at 187 nm (6.63 eV), see Okabe [@Okabe]. The VUV-absorption cross section of H$_{2}$S is almost constant in the 120-143 nm (10.33-8.67 eV) range and decreases in the 143-173 nm (8.67-7.16 eV) range, with a maximum at 200 nm (6.19 eV) according to Feng et al. [@Feng]. The $^{1}$B$_{1}$ $\leftarrow$ $^{1}$A$_{1}$ molecular transition was identified at 139.1 nm (8.91 eV) by Price & Simpson [@Price] and confirmed by Gallo & Innes [@Gallo].
The total integrated VUV-absorption cross section of H$_2$S ice has a value of 1.5$^{+0.1}_{-0.3}$ $\times$ 10$^{-15}$ cm$^{2}$ nm (4.3$^{+0.3}_{-1.0}$ $\times$ 10$^{-16}$ cm$^{2}$ eV) in the 120-160 nm (10.33-7.74 eV) spectral region. The VUV-absorption cross sections of the same ice at 121.6, 157.8, and 160.8 nm are 4.2$^{+0.3}_{-1.0}$ $\times$ 10$^{-17}$ cm$^{2}$, 3.4$^{+0.3}_{-0.8}$ $\times$ 10$^{-17}$ cm$^{2}$, and 3.3$^{+0.3}_{-0.8}$ $\times$ 10$^{-17}$ cm$^{2}$. Gas-phase data from Feng et al. [@Feng] were adapted for comparison, see Fig. \[H2S\]. The spectrum of the solid is almost constant; only a slight decrease is observed as the wavelength increases. The gas-phase spectrum presents an abrupt decrease in the cross section starting near 140 nm, according to Feng et al. [@Feng]. In contrast to the Okabe [@Okabe] data, this gas spectrum shows no vibrational structure, probably because of its low resolution. The average VUV-absorption cross section of H$_{2}$S gas reported by Feng et al. [@Feng], 3.1 $\times$ 10$^{-17}$ cm$^{2}$, disagrees by one order of magnitude with the 3-4 $\times$ 10$^{-18}$ cm$^{2}$ value of Okabe [@Okabe]. We estimated a value of 3.9$^{+0.3}_{-0.9}$ $\times$ 10$^{-17}$ cm$^{2}$ for the solid in the same range, which is close to the Feng et al. [@Feng] gas-phase value.
H$_{2}$S gas data were integrated in the 120-160 nm range, giving a value of 10.2 $\times$ 10$^{-16}$ cm$^{2}$ nm (8.0 $\times$ 10$^{-17}$ cm$^{2}$ eV), which is lower than the VUV-absorption cross section of solid H$_{2}$S. The VUV-absorption cross sections of H$_{2}$S gas at 121.6, 157.8, and 160.8 nm are 31.0 $\times$ 10$^{-18}$ cm$^{2}$, 7.9 $\times$ 10$^{-18}$ cm$^{2}$, and 4.9 $\times$ 10$^{-18}$ cm$^{2}$, which are also lower than the solid-phase values reported above.
![VUV-absorption cross section as a function of wavelength (bottom X-axis) and photon energy (top X-axis) of pure H$_{2}$S ice deposited at 8 K, black trace. The blue trace is the VUV-absorption cross-section spectrum of gas phase H$_{2}$S adapted from Feng et al. [@Feng].[]{data-label="H2S"}](F7.ps){width="\columnwidth"}
Comparison between all the ice species
--------------------------------------
Fig. \[todas\] compares the VUV-absorption cross sections of all the ice species deposited at 8 K on the same linear scale. The most absorbing molecule is H$_{2}$S, while H$_{2}$O has the lowest average absorption in this range. All the represented species absorb VUV-light significantly in the 120-163 nm (10.33-7.60 eV) range; only the CO absorption is negligible at the Ly-$\alpha$ wavelength. Except for CO, the absorption of the reported ice species in the H$_{2}$ molecular emission region between 157-161 nm is lower than the Ly-$\alpha$ absorption. Table \[tableInt\] summarizes the comparison between the VUV-absorption cross sections of all the species in the gas and solid phase.
-- ------------ ------------------------------------- --------------------- ----------------------- --------------------- ----------------------- --------------------- -----------------------
Species Total Int. Avg. Ly-$\alpha$ LBS 121.6 nm 157.8 nm 160.8 nm
\[$\times$ 10$^{-16}$ cm$^{2}$ nm\]
CO 1.5$^{+0.1}_{-0.1}$ 4.7$^{+0.4}_{-0.4}$ 0.1$^{+0.01}_{-0.01}$ 11$^{+0.9}_{-0.9}$ 0.1$^{+0.01}_{-0.01}$ 6.6$^{+0.5}_{-0.5}$ 0.9$^{+0.07}_{-0.07}$
H$_{2}$O 1.8$^{+0.1}_{-0.1}$ 3.4$^{+0.2}_{-0.2}$ 5.1$^{+0.4}_{-0.4}$ 4.0$^{+0.3}_{-0.3}$ 5.2$^{+0.4}_{-0.4}$ 1.7$^{+0.1}_{-0.1}$ 0.7$^{+0.05}_{-0.05}$
CH$_{3}$OH 2.2$^{+0.2}_{-0.3}$ 4.4$^{+0.4}_{-0.7}$ 6.7$^{+0.5}_{-1.0}$ 6.4$^{+0.5}_{-1.0}$ 8.6$^{+0.7}_{-1.3}$ 3.8$^{+0.3}_{-0.6}$ 2.9$^{+0.2}_{-0.4}$
NH$_{3}$ 2.2$^{+0.1}_{-0.3}$ 4.0$^{+0.3}_{-0.5}$ 9.1$^{+0.7}_{-1.1}$ 3.6$^{+0.3}_{-0.5}$ 9.1$^{+0.7}_{-1.1}$ 3.8$^{+0.3}_{-0.5}$ 4.1$^{+0.3}_{-0.5}$
H$_{2}$S 15$^{+1}_{-3}$ 39$^{+3}_{-9}$ 41$^{+3}_{-10}$ 34.0$^{+3}_{-8}$ 42$^{+3}_{-10}$ 34$^{+3}_{-8}$ 33$^{+3}_{-8}$
CO 2.3 20.7 – – – – –
H$_{2}$O 2.3 3.1 – – 13.6 3.2 4.1
CH$_{3}$OH 3.4 8.2 – – 13.6 8.5 10.0
NH$_{3}$ 2.5 6.1 – – 9.8 0.1 0.3
H$_{2}$S 10.2 31.0 – – 31.0 7.9 4.9
-- ------------ ------------------------------------- --------------------- ----------------------- --------------------- ----------------------- --------------------- -----------------------
: Total integrated, average, and selected VUV-absorption cross sections of all the species in the solid phase (top five rows, ices deposited at 8 K) and in the gas phase (bottom five rows). The total integrated cross section is given in units of 10$^{-16}$ cm$^{2}$ nm; all remaining columns are in units of 10$^{-18}$ cm$^{2}$. LBS denotes the Lyman band system.

\[tableInt\]
![VUV-absorption cross section as a function of wavelength (bottom x-axis) and photon energy (top x-axis) of all the pure ice species studied in this work, deposited at 8 K.[]{data-label="todas"}](F8.ps){width="\columnwidth"}
Astrophysical implications {#Astro}
==========================
Far-ultraviolet observations of IC 63, an emission/reflection nebula illuminated by the B0.5 IV star $\gamma$ Cas, made with the *Hopkins Ultraviolet Telescope* (HUT) at the central nebular position, have revealed a VUV-emission spectrum very similar to that of our VUV-lamp, see Fig. 4 of France et al. [@France]. This means that the VUV-spectrum interacting with the ice sample mimics the molecular hydrogen photoexcitation observed toward this photodissociation region (PDR) in the local interstellar medium. The observed PDR spectrum is similar to that emitted by the MDHL with a H$_{2}$ pressure of 0.2 mbar.
Ice mantle build-up takes place in cold environments such as dark cloud interiors, where the radiation acting on dust grains is the secondary UV field produced by cosmic ray ionization of H$_2$. These are the conditions mimicked in our experiments: low density and low temperature, and UV photons impinging on ice mantles. The secondary UV field calculated by Gredel et al. [@Gredel] is very similar to the emission spectrum of our lamp, see Fig. \[2UV\], except for photons with wavelengths shorter than about 114 nm, which are not transmitted by the MgF$_2$ window used as interface between the MDHL and the ISAC set-up.
![Calculated spectrum of ultraviolet photons created in the interior of dense molecular clouds by impact excitation of molecular hydrogen by cosmic ray ionization, adapted from Gredel et al. [@Gredel], in the background. VUV-emission spectrum of the MDHL used in this work, red foreground trace.[]{data-label="2UV"}](F9.ps){width="\columnwidth"}
The reported VUV-absorption cross sections allow a more quantitative study of photon absorption in ice mantles. Based on our data, the VUV light reaching an interstellar ice mantle must have an energy higher than 7.00, 7.60, and 7.08 eV to be absorbed by solid CO, H$_{2}$O, and CH$_{3}$OH, respectively. The full VUV-absorption spectra measured by other authors place the absorption thresholds above 6.20 and 5.16 eV for NH$_{3}$ and H$_{2}$S, respectively, see Mason et al. [@Mason] and Feng et al. [@Feng].\
The ice penetration depth of photons with a given wavelength, or the equivalent absorbing ice column density of a species in the solid phase, can be calculated from the VUV-absorption cross section following Eq. \[2\]. Table \[penetration\] summarizes the penetration depth of the ice species for an absorbed photon flux of 95% and 99% using the cross section value at Ly-$\alpha$, the average value of the cross section in the 120-160 nm range, and the maximum value of the cross section in the same range.
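The conversion is a one-line calculation; the following minimal Python sketch assumes Eq. \[2\] is the standard Beer–Lambert relation (so that $N = -\ln(1-F)/\sigma$ for an absorbed fraction $F$), which is consistent with the tabulated values. The function name is ours; the cross section is the Ly-$\alpha$ value of solid H$_{2}$O measured in this work.

```python
import math

def penetration_column(sigma_cm2, absorbed_fraction):
    """Column density N (cm^-2) at which a given fraction of the
    incident flux is absorbed: 1 - exp(-sigma * N) = absorbed_fraction."""
    return -math.log(1.0 - absorbed_fraction) / sigma_cm2

# Ly-alpha cross section of solid H2O deposited at 8 K (this work)
sigma_h2o = 5.2e-18  # cm^2
N95 = penetration_column(sigma_h2o, 0.95)
print(f"{N95:.1e} cm^-2")  # ~5.8e17 cm^-2, matching Table [penetration]
```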
---------- ------------- ------ ------ ------------- ------ ------
Species Ly-$\alpha$ Avg. Max. Ly-$\alpha$ Avg. Max.
CO $\geq$150 6.4 1.9 $\geq$230 9.8 2.9
H$_{2}$O 5.8 8.3 4.9 8.9 13.0 7.7
CH$_3$OH 3.5 5.7 3.4 5.4 8.7 5.3
NH$_3$ 3.3 5.5 3.2 5.1 8.5 5.0
H$_2$S 0.73 0.81 0.73 1.1 1.2 1.1
---------- ------------- ------ ------ ------------- ------ ------
: Penetration depth, expressed as absorbing column density, of the different pure ice species deposited at 8 K, corresponding to an absorbed photon flux of 95% and 99%. Ly-$\alpha$ corresponds to the cross section at the Ly-$\alpha$ wavelength, 121.6 nm; in the case of CO, the upper limit in the cross section leads to a lower limit in the absorbing column density. Avg. corresponds to the average value of the cross section in the 120-160 nm range. Max. corresponds to the maximum value of the cross section in the same wavelength range. Errors in the penetration depth correspond to those estimated for the VUV-absorption cross sections of the different ices.
\[penetration\]
The VUV-absorption of ice mantles is low compared with that of the dust grain cores, and therefore difficult to observe directly, but it is an essential parameter for quantitative studies of the photoprocessing and photodesorption of molecular species in the ice mantle. The desorbing photoproducts can be detected in observations of the gas phase. The reported measurements of the cross sections are thus needed to estimate the absorption of icy grain mantles in the photon range where molecules are photodissociated or photodesorbed, which often leads to the formation of photoproducts. These results can be used as input in models that predict the processing of ice mantles exposed to an interstellar VUV field. They can be used, for instance, to predict the gas-phase abundances of molecules photodesorbed from the ice mantles, as we discuss below.
There is a clear correspondence between the photodesorption rates of CO ice measured at different photon energies (Fayolle et al. 2011) and the VUV-absorption spectrum of CO ice (Lu et al. 2009; this work) at the same photon energies. This indicates that photodesorption of some ice species, such as CO and N$_2$, is mainly driven by a desorption induced by electronic transitions (DIET) process (Fayolle et al. 2011, 2013). The lowest photodesorption rate reported by Fayolle et al. [@Fayolle1], $<$6 $\times$ 10$^{-3}$ molecules per incident photon at the Ly-$\alpha$ wavelength (121.6 nm), coincides with the low VUV absorption at this wavelength in Fig. \[CO\], while the maximum photodesorption occurs at $\sim$ 151.2 nm, near the most intense VUV-absorption band, see Fig. \[CO\]. The photodesorption rate per absorbed photon in the \[$\lambda_{i}$,$\lambda_{f}$\] wavelength range, $R^{\rm abs}_{\rm ph-des}$, can differ significantly from the photodesorption rate per incident photon, $R^{\rm inc}_{\rm ph-des}$. It can be estimated as follows: $$\begin{aligned}
\centering
R^{\rm abs}_{\rm ph-des} &=& \frac{\Delta N}{I_{abs}} \quad \text{and} \quad R^{\rm inc}_{\rm ph-des} = \frac{\Delta N}{I_0} \nonumber \\
\nonumber \\
\nonumber \\
R^{\rm abs}_{\rm ph-des} &=& \frac{I_0}{I_{abs}} \; R^{\rm inc}_{\rm ph-des},
\label{RR}\end{aligned}$$ where $$\begin{aligned}
I_{abs} &=& \displaystyle\sum\limits_{\lambda_i}^{\lambda_f} \left[ I_0(\lambda) - I(\lambda) \right] = \displaystyle\sum\limits_{\lambda_i}^{\lambda_f} I_0(\lambda)\left(1 - e^{- \sigma(\lambda) N}\right), \nonumber\end{aligned}$$ and $\Delta N$ is the decrease in ice column density per unit irradiation time, in molecules cm$^{-2}$ s$^{-1}$, $I_0$ is the total emitted photon flux (Fayolle et al. 2011 report $\sim$ 1.7 $\times$ 10$^{14}$ photons cm$^{-2}$ s$^{-1}$; in our experiments it is $\sim$ 2.0 $\times$ 10$^{14}$ photons cm$^{-2}$ s$^{-1}$), $I_{abs}$ is the total photon flux absorbed by the ice, $I_0(\lambda)$ is the photon flux emitted at wavelength $\lambda$, $\sigma(\lambda)$ is the VUV-absorption cross section at the same wavelength, and $N$ is the column density of the ice sample.
Fayolle et al. [@Fayolle1] reported $R^{\rm inc}_{\rm ph-des}$ = 5 $\times$ 10$^{-2}$ molecules per incident photon for monochromatic $\sim$ 8.2 eV light irradiation of a 9-10 ML ice column density of CO. We also estimated the value of $R^{\rm inc}_{\rm ph-des}$ in our CO irradiation experiment using the MDHL continuum emission source with an average photon energy of 8.6 eV; this gave a value of 5.1 $\pm$ 0.2 $\times$ 10$^{-2}$ for an ice column density of 223 ML, in agreement with Muñoz Caro et al. [@Caro1]. Therefore, similar values of $R^{\rm inc}_{\rm ph-des}$ are obtained using either monochromatic or continuum emission sources, provided that the photon energy of the former is near the average photon energy of the latter; this is discussed below. Using Eq. \[RR\], we estimated the $R^{\rm abs}_{\rm ph-des}$ values in both experiments; they are summarized in Table \[Rabs\] for a column density of 5 ML (only the photons absorbed in the top 5 $\pm$ 1 ML participate in photodesorption) for three monochromatic photon energies from Fayolle et al. [@Fayolle1], selected to coincide with Lyman-$\alpha$ (10.2 eV), the maximum cross section (9.2 eV), and the monochromatic energy of 8.2 eV, the one closest to the average photon energy of the MDHL at 8.6 eV.
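For monochromatic light the sums in Eq. \[RR\] collapse to a single wavelength, so the conversion from incident to absorbed rates is direct. A short Python sketch follows; the function name is ours, and we take 1 ML as 10$^{15}$ molecules cm$^{-2}$ (an assumption, but one that reproduces the tabulated monochromatic values).

```python
import math

def rate_per_absorbed_photon(rate_per_incident, sigma_cm2, N_cm2):
    """Monochromatic case of Eq. (RR): R_abs = R_inc / (1 - exp(-sigma * N))."""
    absorbed_fraction = 1.0 - math.exp(-sigma_cm2 * N_cm2)
    return rate_per_incident / absorbed_fraction

N = 5 * 1e15     # top 5 ML participate in photodesorption (molecules cm^-2)
sigma = 1.1e-19  # CO cross section at Ly-alpha, 10.2 eV (cm^2)
R_inc = 6.9e-3   # molecules per incident photon (Fayolle et al. 2011)
print(rate_per_absorbed_photon(R_inc, sigma, N))
# ~12.5 molec./photon_abs, matching the 10.2 eV row of Table [Rabs]
```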
--------------- ------------------------- ----------------------------------- ----------------------------
Irrad. energy $\sigma$ R$^{\rm inc}_{\rm ph-des}$ R$^{\rm abs}_{\rm ph-des}$
eV cm$^{2}$ molec./photon$_{inc}$ molec./photon$_{abs}$
10.2 1.1 $\times$ 10$^{-19}$ 6.9 $\pm$ 2.4 $\times$ 10$^{-3}$ 12.5 $\pm$ 4.4
9.2 2.8 $\times$ 10$^{-18}$ 1.3 $\pm$ 0.91 $\times$ 10$^{-2}$ 0.9 $\pm$ 0.6
$\:$ 8.2 9.3 $\times$ 10$^{-18}$ 5 $\times$ 10$^{-2}$ 1.1
8.6 4.7 $\times$ 10$^{-18}$ 5.1 $\pm$ 0.2 $\times$ 10$^{-2}$ 2.5 $\pm$ 0.1
--------------- ------------------------- ----------------------------------- ----------------------------
: VUV-absorption cross sections for different irradiation energies. R$_{inc}$ values correspond to Fayolle et al. (2011), except for 8.6 eV, which is the average photon energy of our MDHL; the corresponding cross section is the average VUV-absorption cross section of pure CO ice, deposited at 8 K, in the 120-160 nm range. $R^{\rm abs}_{\rm ph-des}$ values are for a column density of about 5 ML, i.e., the depth where the absorbed photons contribute to a photodesorption event (Muñoz Caro et al. 2010; Fayolle et al. 2011).
\
Peak yield value at $\sim$ 8.2 eV, see Fayolle et al. (2011).\
\[Rabs\]
Irradiation of CO ice using our VUV lamp, with a broad-band energy distribution and a mean energy of $\sim$ 8.6 eV, gives $R^{\rm inc}_{\rm ph-des}$ values similar to, but $R^{\rm abs}_{\rm ph-des}$ values different from, those obtained by irradiation with a $\sim$ 8.2 eV monochromatic light source (Fayolle et al. 2011). This suggests that in this particular case the photodesorption rate per incident photon does not depend much on the photon energy distribution, but this agreement is coincidental, since the corresponding $R^{\rm abs}_{\rm ph-des}$ values differ significantly.
The properties of individual molecular components of interstellar ice analogs have been studied over the years (e.g., Sandford et al. 1988; Sandford & Allamandola 1990; Gerakines et al. 1996; Escribano et al. 2013). Our VUV-absorption spectra can be directly applied to astrophysical icy environments composed almost entirely of a single compound; for example, either CO$_2$ or H$_2$O largely dominates the composition of different regions at the south pole of Mars (Bibring et al. 2004). Nevertheless, interstellar ice mantles are thought either to be a mixture of different species or to present a layered structure (e.g., Gerakines et al. 1995, Dartois et al. 1999, Pontoppidan et al. 2008, Boogert et al. 2011, Öberg et al. 2011, Kim et al. 2012). Our study of pure ices can be used to estimate the absorption of multilayered ice mantles, but [*a priori*]{} it cannot be extrapolated to ice mixtures. A follow-up of this work will include ice mixtures and VUV spectroscopy at ice temperatures above 8 K.
Conclusions {#Conclusions}
===========
Several conclusions can be drawn from our experimental work; they are summarized as follows:
- The combination of infrared (FTIR) spectroscopy in transmission, used to measure the deposited ice column density, and VUV spectroscopy allowed us to determine more accurate VUV-absorption cross-section values of interstellar ice analogs, with errors within 16 %, 15 %, 23 %, 19 %, and 32 % for CO, H$_2$O, CH$_3$OH, NH$_3$, and H$_2$S, respectively. The errors are mainly caused by the decrease in ice column density due to VUV irradiation during VUV spectral acquisition.
<!-- -->
- For the first time, the VUV-absorption cross sections of CO, CH$_{3}$OH, and H$_{2}$S were measured for the solid phase, with average VUV-absorption cross sections of 4.7$^{+0.4}_{-0.4}$ $\times$ 10$^{-18}$ cm$^{2}$, 4.4$^{+0.4}_{-0.7}$ $\times$ 10$^{-18}$ cm$^{2}$, and 39$^{+3}_{-9}$ $\times$ 10$^{-18}$ cm$^{2}$, respectively. The total integrated VUV-absorption cross sections are 1.5$^{+0.1}_{-0.1}$ $\times$ 10$^{-16}$ cm$^{2}$ nm, 2.7$^{+0.2}_{-0.4}$ $\times$ 10$^{-16}$ cm$^{2}$ nm, and 1.5$^{+0.1}_{-0.3}$ $\times$ 10$^{-15}$ cm$^{2}$ nm, respectively. Our estimated values of the average VUV-absorption cross sections of H$_{2}$O and NH$_{3}$ ices, 3.4$^{+0.2}_{-0.2}$ $\times$ 10$^{-18}$ cm$^{2}$ and 4.0$^{+0.3}_{-0.5}$ $\times$ 10$^{-18}$ cm$^{2}$, are comparable to those reported by Mason et al. [@Mason], which were measured using a synchrotron as the emission source.
<!-- -->
- The ice samples made of H$_{2}$O, CH$_{3}$OH, and NH$_{3}$ present broad absorption bands and similar average VUV-absorption cross sections, between 3.4$^{+0.2}_{-0.2}$ $\times$ 10$^{-18}$ cm$^{2}$ and 4.4$^{+0.4}_{-0.7}$ $\times$ 10$^{-18}$ cm$^{2}$, see Fig. \[todas\]. In contrast, H$_{2}$S ice, which also displays a continuum spectrum, absorbs about four times more than the other molecules in the same VUV range.
<!-- -->
- Solid CO displays discrete VUV-absorption bands and very low absorption at the Ly-$\alpha$ wavelength, whereas the other solid samples present high absorption at Ly-$\alpha$.
<!-- -->
- For H$_{2}$O, CH$_{3}$OH, and NH$_{3}$, the VUV-absorption range and the total integrated VUV-absorption cross section are larger in the gas phase than in the solid phase. H$_{2}$S is an exception.
<!-- -->
- The main emission peaks of the MDHL occur at 121.6 nm (Lyman-$\alpha$) and at 157.8 and 160.8 nm (molecular H$_2$ bands), see Fig. \[Uno\]. It is still a common mistake in the molecular astrophysics community to consider the MDHL a pure Ly-$\alpha$ photon source. As mentioned above, the lamp VUV spectrum affects the photochemistry but not the VUV spectroscopy in a direct way, since the VUV-emission spectrum was subtracted to measure the ice absorption in the same energy range.
<!-- -->
- Monitoring of the photon energy distribution and the stability of the irradiation source is important for studying ice photoprocesses.
<!-- -->
- The results are satisfactory and demonstrate the viability of the MDHL, which is commonly used in irradiation experiments, as a source for VUV spectroscopy of solid samples. This method of performing VUV spectroscopy does not require a synchrotron facility and can be used routinely in the laboratory.
<!-- -->
- Our estimates of the VUV-absorption cross sections of polar ice molecules can be used as input in models that simulate the photoprocessing of ice mantles in cold environments, such as dense cloud interiors and circumstellar regions. The data reported in this paper can be applied to estimate the absorption of layered ice mantles, but not of ices in which different species are intimately mixed.
This research was financed by the Spanish MICINN under projects AYA2011-29375 and CONSOLIDER grant CSD2009-00038. This work was partially supported by NSC grants NSC99-2112-M-008-011-MY3 and NSC99-2923-M-008-011-MY3, and the NSF Planetary Astronomy Program under Grant AST-1108898.
[99]{} Bibring, J. -P., Langevin, Y., Poulet, F., et al. 2004, Nature, 428, 627 Boursey, E., Chandrasekharan, V., Gürtler, P., et al. 1978, Phys. Rev. Lett., 41, 1516 Boogert, A. C. A., Huard, T. L., Cook, A. M., et al. 2011, ApJ, 729, 92 Brith, M., & Schnepp, O. 1965, Molecular Phys., 9, 473 Cecchi-Pestellini, & Aiello, S. 1992, MNRAS, 258, 125 Chen, Y.-J., Chu, C.-C, Lin, Y.-C, et al. 2010, Advances in Geosciences, 25, 259 Chen, Y.-J., Chuang, K.-Y., & Muñoz Caro, G. M. 2013, ApJ, in press. Cheng, B.-M., Lu, H.-C., Chen, H.-K., et al. 2006, ApJ, 647, 1535 Cheng, B.-M., Chen, H.-F., Lu, H.-C., et al. 2011, ApJSS, 196:3, 6 Collings, M.P., Anderson, M.A., Chen, R., et al. 2004, Mon. Not. R. Astron. Soc., 354, 1133 Dalgarno, A., & Stephens, T. L. 1970, ApJ, 160, L107 Darwent, B. deB. 1970, Bond Dissociation Energies in Simple Molecules, National Standard Reference Data System, 31, 70-602101 Dartois, E., Demyk, K., d’Hendecourt, L., & Ehrenfreund, P. 1999, A&A, 351, 1066 Dartois, E., Pontoppidan, K., Thi, W.-F., & Muñoz Caro, G. M. 2005, A&A, 444, L57 Dartois, E. 2009. ASP Conference Series, 414, 411 Dawes, A., Mukerji, R. J., Davis, M. P., et al. 2007, J. Chem. Phys., 126, 244711 d’Hendecourt, L. B., Allamandola, L. J., & Greenberg, J. M. 1985. A&A, 152, 130 d’Hendecourt, L. B., Allamandola, L. J., Grim, R. J. A., & Greenberg, J. M. 1986. A&A, 158, 119 Ehrenfreund, P., & van Dishoeck, E. F. 1998, Advances in Space Research, 21, 15 Escribano, R., Muñoz Caro, G. M., Cruz-Diaz, G. A., Rodríguez-Lazcano, Y., & Maté, B. 2013, PNAS, 110, 32, 12899 Fayolle, E. C., Bertin, M., Romanzin, C., et al. 2011, ApJ Letters, 739, L36 Fayolle, E. C., Bertin, M., Romanzin, C., et al. 2013, A&A, 556, A122 Feng, R., Cooper, G., & Brion, C. E. 1999, Chem. Phys., 244, 127 Fillion, J.-H., Bertin, M., Lekic, A., et al. 2012, EAS Publications Series, 58, 307 France, K., Andersson, B.-G., McCandliss, S. R., & Feldman, P. D. 2005, ApJ, 628, 750 Gallo, A. R., & Innes, K.K. 1975, J. Mol. 
Spectrosc., 54, 472 Gerakines, P. A., Schutte, W. A., Greenberg, J. M. & van Dishoeck, E. F. 1995. A&A, 296, 810 Gerakines, P. A., Schutte, W. A. & Ehrenfreund, P., 1996, A&A, 312, 289 Gibb, E. L., Whittet, D. C. B., Boogert, A. C. A., & Tielens, A. G. G. M. 2004, ApJ, 151, 35 Gredel, R., Lepp, S., & Dalgarno, A. 1989, ApJ, 347, 289 Hudson, R. D., & Carter, V. L. 1968, J. Opt. Soc. Am., 58, 227 Inn, E. C. Y. 1954, Spectrochimica Acta, 7, 65 Jiang, G. J., Person, W. B., & Brown, K. G. 1975, J. Chem. Phys., 75, 4198 Jiménez-Escobar, A., & Muñoz Caro, G. M. 2011, A&A, 536, 11 Kim, H. J., Evans II, N. J., Dunham, M. M., Lee, J. -E., & Pontoppidan, K. M. 2012, ApJ, 758, 38 Knez, C., Boogert, A. C. A., Pontoppidan, K. M., et al. 2005, ApJ, 635, L145 Kuo, Y.-P., Lu, H.-C., Wu, Y.-J., Cheng, B.-M., & Ogilvie, J. F. 2007, Chem. Phys. Lett., 447, 168 Lee, L. C., & Guest, J. A. 1981, J. Phys. B: At. Mol. Phys., 14, 3415 Lu, H.-C., Chen, H.-K., Cheng, B.-M., Kuo, Y.-P., & Ogilvie, J. F. 2005, J. Phys. B: At. Mol. Opt. Phys., 38, 3693 Lu, H.-C., Chen, H.-K., Cheng, B.-M., & Ogilvie, J. F. 2008, Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 71, 1485 Mason, N. J., Dawes, A., Holton, P. D., et al. 2006, Faraday Discussions, 133, 311 Monahan, K. M., & Walker, W. C. 1974, J. Chem. Phys., 61, 3886 Mota, R., Parafita, R., Giuliani, A., et al. 2005, Chem. Phys. Lett., 416, 152 Mumma, M. J., & Charnley, S. B. 2011, Annu. Rev. Astro. Astrophys., 49, 471 Muñoz Caro, G. M., Jiménez-Escobar, A., Martín-Gago, J.Á., et al. 2010, A&A, 522, A108 Nee, J. B., Suto, M., & Lee, L. C. 1985, Chemical Physics, 98, 147 Öberg, K. I., Garrod, R. T., van Dishoeck, E. F., & Linnartz, H. 2009a. A&A, 504, 891 Öberg, K. I., Linnartz, H., Visser, R., & van Dishoeck, E. F. 2009b. ApJ, 693, 1209 Öberg, K. I., Boogert, A. C. A., Pontoppidan, K. M., et al. 2011, ApJ, 740, 109 Okabe, H. 1978, Photochemistry of small molecules, ed. John Wiley & Sons, New York Pontoppidan, K. M., Boogert, A. C. 
A., Fraser, H. J., et al. 2008, ApJ, 678, 1005 Price, W. C., & Simpson, D. M. 1938, Proc. Roy. Soc. (Lond.), A165, 272 Richey, C. R. & Gerakines, P. A. 2012, ApJ, 759, 74 Samson, J. A. R., & Ederer, D. L. 2000, Vacuum Ultaviolet Spectroscopy, ed. Elsevier Inc. Sandford, S. A., Allamandola, L. J., Tielens, A. G. G. M., & Valero, G. J. 1988, ApJ, 329, 498 Sandford, S. A., & Allamandola, L. J. 1990, ApJ, 355, 357 Sandford, S. A. 1996. Astronomical Society of the Pacific Conference Series, 97, 29 Schutte, W. A. 1996. Molecules in Astrophysics: Probes and Processes, IAU symposium 178, 1 Smith, P. L., Rufus, L., Yoshino, K., & Parkinson, W. H. 2002, NASA Laboratory Astrophysics Workshop, NASA/CP-2002-21186, 158 Sternberg, A. 1989, ApJ, 347, 863 Wu, Y.-J., Lu, H.-C., Chen, H.-K., & Cheng, B.-M. 2007, J. Chem. Phys., 127, 154311 Wu, Y.-J., Wu, C. Y. R., Chou S.-L., et al. 2012, ApJ, 746, 175 Yoshino, K., Esmond, J. R., Sun, Y., et al. 1996, J. Quant. Spectrosc. Radiat. Transfer, 55, 53
---
abstract: 'We give a new proof of the fact that for any proper holomorphic map between reduced analytic spaces there is an associated natural pullback operation on smooth differential forms.'
address: 'M. Andersson, H. Samuelsson Kalm, Department of Mathematical Sciences, Division of Algebra and Geometry, University of Gothenburg and Chalmers University of Technology, SE-412 96 Göteborg, Sweden'
author:
- 'Mats Andersson & H[å]{}kan Samuelsson Kalm'
nocite: '[@*]'
title: A note on smooth forms on analytic spaces
---
Introduction
============
There is a natural notion of smooth differential forms on any reduced analytic space. The dual objects are the currents. Such forms and currents have turned out to be useful tools in several contexts, e.g., the analytic approach to intersection theory [@ASWY; @AESWY1] and the ${\bar{\partial}}$-equation on analytic spaces [@AS; @RSW].
It is desirable to be able to take the direct image of a current under a proper holomorphic map $f\colon X\to Z$ between reduced analytic spaces. By duality this amounts to taking pullbacks of smooth forms. In some works, e.g., [@ASWY; @AESWY1], it is implicitly assumed that this is possible. There is an obvious tentative definition of $f^*\phi$ for a smooth form $\phi$ on $Z$. It is, however, not completely clear that it gives a well-defined pullback operation, not even if $\phi$ is holomorphic (of positive degree); this case is settled in [@Barlet-alpha Corollary 1.0.2]. The problem occurs already if $f$ is the inclusion of a subvariety contained in $Z_{sing}$. It was proved in [@Barlet-book III Corollary 2.4.11] that the suggested definition indeed gives a functorial operation on smooth forms on analytic spaces. In this short note we give a new proof of this fact. Our proof is quite elementary except for an application of Hironaka’s theorem.
Results
=======
Let $X$ be a reduced analytic space. Recall that, by definition, there is a neighborhood $U$ of any point in $X$ and an embedding $i\colon U\hookrightarrow D$ in an open set $D\subset{\mathbb{C}}^N$ such that $U$ can be identified with its image. For notational convenience we will suppress $U$ and say that $i$ is a local embedding of $X$. A smooth $(p,q)$-form $\phi$ on $X_{reg}$ is smooth on $X$, $\phi\in\mathcal{E}^{p,q}(X)$, if there is a smooth form $\varphi$ in $D$ such that $$i|_{X_{reg}}^*\varphi=\phi.$$ A different definition is given in [@BH Section 3.3], see Remark \[BHaltdef\] below. If $j\colon X\to D'$ is another local embedding, then the identity on $X$ induces a biholomorphism $i(X)\xrightarrow{\sim} j(X)$. Thus, again by definition, locally in $D$ and $D'$, there are holomorphic maps $g\colon D\to D'$ and $h\colon D'\to D$ such that $i=h\circ j$ and $j=g\circ i$. Since $h^*\varphi$ is smooth in $D'$ and $$j|_{X_{reg}}^*h^*\varphi=\phi,$$ it follows that the notion of smooth forms on $X$ is independent of embedding.
We will write $i^*\varphi$ for the image of $\varphi\in\mathcal{E}(D)$ in $\mathcal{E}(X)$. Let $[i(X)]$ be the Lelong current of integration over $i(X)_{reg}$. The kernel of $i^*$ is closed since $$i^*\varphi=0 \quad \iff \quad \varphi\wedge [i(X)]=0.$$ Thus, with the quotient topology $\mathcal{E}(X)=\mathcal{E}(D)/\text{Ker}\, i^*$ is a Fréchet space.
\[snabel\] Let $f\colon X\to Z$ be a proper holomorphic map between reduced analytic spaces. There is a well-defined map $f^*\colon \mathcal{E}(Z)\to\mathcal{E}(X)$ with the following property: If $\phi$ is a smooth form on $Z$, $X\hookrightarrow D_X$ and $Z\hookrightarrow D_Z$ are local embeddings, and $\tilde f\colon D_X\to D_Z$ is a holomorphic extension of $f$, then $f^*\phi$ is the smooth form on $X$ obtained by extending $\phi$ to $D_Z$, pull the extension back under $\tilde f$, and pull the result back to $X_{reg}$.
Assume that $g\colon X\to Y$ and $h\colon Y\to Z$ are proper holomorphic maps such that $f=h\circ g$. Let $i\colon X\hookrightarrow D_X$, $j\colon Y\hookrightarrow D_Y$, and $\iota\colon Z\hookrightarrow D_Z$ be local embeddings such that $\tilde g\colon D_X\to D_Y$ and $\tilde h\colon D_Y\to D_Z$ are holomorphic extensions of $g$ and $h$, respectively. Notice that $\tilde h\circ \tilde g$ is a holomorphic extension of $f$. If $\phi\in\mathcal{E}(Z)$ and $\phi=\iota|_{Z_{reg}}^*\varphi$ it follows by Theorem \[snabel\] that $h^*\phi=j|_{Y_{reg}}^*\tilde h^*\varphi$ and $g^*h^*\phi=i|_{X_{reg}}^*\tilde g^* \tilde h^*\varphi=f^*\phi$. Hence, $$f^*\phi=g^*h^*\phi, \quad \phi\in\mathcal{E}(Z).$$
The space of currents on $X$, $\mathscr{C}(X)$, is the dual of the space of test forms, i.e., compactly supported forms in $\mathcal{E}(X)$, cf. [@HL Section 4.2]. If $\phi$ is a test form on $Z$ then $f^*\phi$ is a test form on $X$ since $f$ is proper. If $\mu$ is a current on $X$, then $f_*\mu$ is the current on $Z$ defined by $$\label{snyting}
f_*\mu . \phi = \mu . f^*\phi.$$
Let $f\colon X\to Z$ be a proper holomorphic map between reduced analytic spaces. Then the induced mapping $f_*\colon\mathscr{C}(X)\to\mathscr{C}(Z)$ has the property that if $f=h\circ g$, where $g\colon X\to Y$ and $h\colon Y\to Z$ are proper holomorphic maps, then $$f_*\mu = h_*g_*\mu, \quad \mu\in\mathscr{C}(X).$$
Suppose that $i\colon X\to D$ is an embedding and consider the induced mapping $i_*\colon\mathscr{C}(X)\to\mathscr{C}(D)$. It follows from \[snyting\] and the definition of test forms on $X$ that $i_*$ is injective. Thus $\mathscr{C}(X)$ can be identified with its image $i_*\mathscr{C}(X)$. In view of the definition of $\mathscr{C}(X)$ and \[snyting\] it follows that $i_*\mathscr{C}(X)$ is the set of currents $\mu$ in $D$ such that $\mu . \varphi=0$ if $i^*\varphi=0$. Notice in particular that $i_* 1=[i(X)]$.
\[BHaltdef\] Let $i\colon X\to D$ be a local embedding. In [@BH Section 3.3] the space of smooth forms on $X$ is defined as $\mathcal{E}(D)/\mathcal{N}(D)$, where $\mathcal{N}(D)$ is the space of smooth forms $\varphi$ in $D$ such that for any manifold $W$ and any smooth map $g\colon W\to D$ with $g(W)\subset X$ one has $g^*\varphi=0$. A priori $\mathcal{N}(D)$ contains $\text{Ker}\, i^*$. If $\mathcal{N}(D)=\text{Ker}\, i^*$, then $\mathcal{E}(D)/\mathcal{N}(D)$ coincides with our definition of $\mathcal{E}(X)$ and Theorem \[snabel\] becomes trivial. It is claimed in [@BH Section 3.3] that indeed $\mathcal{N}(D)=\text{Ker}\, i^*$, but we have not been able to find a proof.
Proofs
======
We will prove Theorem \[snabel\] by showing that the stated property indeed is independent of the choices of extensions. The technical part is contained in Proposition \[kul\], cf. [@Barlet-alpha Proposition 1.0.1]. We begin with the following lemma.
\[snar\] Let $M$ be a reduced analytic space, $N$ a complex manifold, and $p\colon M\to N$ a proper holomorphic map. If each connected component of $N$ has dimension $d\geq 1$ and $\text{rank}_x\, p <d$ for all $x\in M_{reg}$, then $p$ is not surjective on any connected component of $N$.
We may assume that $N$ is connected. Notice also that if $M$ is smooth then the lemma follows in view of the constant rank theorem. We reduce to this case by induction on $\text{dim}\, M$. If $\text{dim}\, M=0$ the lemma is obvious. Assume that the lemma is true if $\text{dim}\, M<\delta$.
Let $\text{dim}\,M=\delta$. Let $M'$ be the union of $M_{sing}$ and the irreducible components of dimension $<\delta$. Since $p|_{M'}$ satisfies the hypothesis of the statement, it follows from the induction hypothesis that $p|_{M'}$ is not surjective. By Remmert’s proper mapping theorem, $p(M')$ is thus a proper analytic subset of $N$. If $M=p^{-1}p(M')$ we are done. If not, we replace $M$, $N$, and $p$ by $M\setminus p^{-1}p(M')$, $N\setminus p(M')$, and $p|_{M\setminus p^{-1}p(M')}$, respectively. Since $p|_{M\setminus p^{-1}p(M')}$ is proper and $M\setminus p^{-1}p(M')$ is smooth, the lemma follows for $\text{dim}\,M=\delta$ by the constant rank theorem.
\[kul\] Let $W\subset V$ be analytic subsets of an open set $D\subset{\mathbb{C}}^N$ and let $\varphi$ be a smooth form in $D$. If the pullback of $\varphi$ to $V_{reg}$ vanishes, then the pullback of $\varphi$ to $W_{reg}$ vanishes.
We may assume that $W$ is irreducible of dimension $d$. We may also assume that $\varphi$ has positive degree since a smooth function vanishing on $V_{reg}$ must vanish on $W$ by continuity. The case $d=0$ is then clear since the pullback of a form of positive degree to discrete points necessarily vanishes. Let $\tilde\pi\colon V'\to V$ be a Hironaka resolution of singularities. Suppose that $W'\subset V'$ is analytic and such that $\tilde\pi(W')=W$. Let $\pi=\tilde\pi|_{W'}$ and let $\phi$ be the pullback of $\varphi$ to $W_{reg}$. Since the pullback of $\varphi$ under $W'\hookrightarrow V'\to V\hookrightarrow D$ is $0$, it follows that $\pi^*\phi=0$. We will find such $W'$ and $\pi$ such that $\pi^*\phi=0$ implies $\phi=0$.
To begin with we set $W'=\tilde\pi^{-1}(W)$. If $\tilde\pi(W'_{sing})=W$, replace $W'$ by $W'_{sing}$. Possibly repeating this, we may assume that $\tilde\pi(W'_{sing})\subsetneq W$. Thus $\tilde\pi(W'_{sing})$ is a proper analytic subset of $W$. Set $\pi=\tilde\pi|_{W'}$ and notice that $\pi$ is surjective on $W$.
Let $$M=W'\setminus \pi^{-1}(W_{sing}\cup \pi(W'_{sing})), \quad N=W_{reg}\setminus \pi(W'_{sing}),$$ and $p=\pi|_M$. Since $p$ is surjective, it follows from Lemma \[snar\] that there is $x\in M$ such that $\text{rank}_x\,p=d$. Since $d$ is the optimal rank of $p$, this holds for $x$ in a non-empty Zariski-open subset of $M$. Let $\widetilde M=\{x\in M; \text{rank}_x\,p\leq d-1\}$ be the complement of this set. Then $\text{rank}_x\,p|_{\widetilde M}\leq d-1$ for all $x\in\widetilde M_{reg}$. By Lemma \[snar\], $p(\widetilde M)\subsetneq N$, and thus $p(\widetilde M)$ is a proper analytic subset of $N$.
Now, $N\setminus p(\widetilde M)$ is a dense open subset of $W_{reg}$ and so it suffices to show that $\phi=0$ there. However, $M\setminus p^{-1}p(\widetilde M)$ is a (non-empty) open subset of $M$ and in this set $p$ has constant rank $=d=\text{dim}\, W$. Thus, $p$ is locally a simple projection and it follows that if $p^*\phi=0$, then $\phi=0$.
Let $\phi\in\mathcal{E}(Z)$ and let $f^*\phi$ be any form on $X_{reg}$ obtained by the procedure described in Theorem \[snabel\]. Clearly $f^*\phi$ is smooth on $X$. To see that it is independent of the choices of extensions we may assume that $X$ is irreducible. By Remmert’s proper mapping theorem $f(X)$ is an analytic subset of $Z$. In view of Proposition \[kul\], if $\varphi$ is an extension of $\phi$ to $D_Z$, then the pullback $\phi'$ of $\varphi$ to $f(X)_{reg}$ is independent of the extension. Let $X':=X\setminus f^{-1}(f(X)_{sing})$ and notice that it is a non-empty Zariski-open subset of $X$. The restriction of $f$ to $X'_{reg}$ is a holomorphic map between complex manifolds and it follows that $$f^*\phi= f|_{X'_{reg}}^*\phi'$$ on $X'_{reg}$. Since $f|_{X'_{reg}}^*\phi'$ is well-defined and $X'_{reg}$ is dense in $X$ it follows that $f^*\phi$ is independent of the choices of extensions.
[99]{} <span style="font-variant:small-caps;">Andersson, Mats; Samuelsson H[å]{}kan</span> A Dolbeault-Grothendieck lemma on complex spaces via Koppelman formulas. *Invent. Math.* 190 (2012), 261–297.
<span style="font-variant:small-caps;">Andersson, Mats; Samuelsson H[å]{}kan; Wulcan, Elizabeth; Yger, Alain</span> Segre numbers, a generalized King formula, and local intersections. *J. Reine Angew. Math.* 728 (2017), 105–136.
<span style="font-variant:small-caps;">Andersson, Mats; Eriksson, Dennis; Samuelsson H[å]{}kan; Wulcan, Elizabeth; Yger, Alain</span> Global representation of Segre numbers by Monge-Ampère products. *Math. Ann.*, to appear. Available at arXiv:1812.03054.
<span style="font-variant:small-caps;">Barlet, Daniel; Magnússon, Jón</span> Cycles analytiques complexes. I. Théorèmes de préparation des cycles. *Cours Spécialisés, 22. Société Mathématique de France, Paris, 2014. 525 pp.*
<span style="font-variant:small-caps;">Barlet, Daniel</span> The sheaf $\alpha_X^\bullet$. *J. Singul.* 18 (2018), 50–83.
<span style="font-variant:small-caps;">Bloom, Thomas; Herrera, Miguel</span> De Rham cohomology of an analytic space. *Invent. Math.* 7 (1969), 275–296.
<span style="font-variant:small-caps;">Herrera, M.; Lieberman, D.</span> Residues and principal values on complex spaces. *Math. Ann.* 194 (1971), 259–294.
<span style="font-variant:small-caps;">Ruppenthal, Jean; Samuelsson Kalm, H[å]{}kan; Wulcan, Elizabeth</span> Explicit Serre duality on complex spaces. *Adv. Math.* 305 (2017), 1320–1355.
---
abstract: 'Emergent collective group processes and capabilities have been studied through analysis of transactive memory, measures of group task performance, and group intelligence, among others. In their approach to collective behaviors, these approaches transcend traditional studies of group decision making that focus on how individual preferences combine through power relationships, social choice by voting, negotiation and game theory. Understanding more generally how individuals contribute to group effectiveness is important to a broad set of social challenges. Here we formalize a dynamic theory of interpersonal communications that classifies individual acts, sequences of actions, group behavioral patterns, and individuals engaged in group decision making. Group decision making occurs through a sequence of communications that convey personal attitudes and preferences among members of the group. The resulting formalism is relevant to psychosocial behavior analysis, rules of order, organizational structures and personality types, as well as formalized systems such as social choice theory. More centrally, it provides a framework for quantifying and even anticipating the structure of informal dialog, allowing specific conversations to be coded and analyzed in relation to a quantitative model of the participating individuals and the parameters that govern their interactions.'
author:
- 'Yaneer Bar-Yam$^1$ and David Kantor$^2$'
title: |
A Mathematical Theory of Interpersonal Interactions\
and Group Behavior
---
Introduction
============
Individuals interact to form couples, families, teams, organizations and societies. Understanding the shift from individuals to groups requires recognizing how collective behaviors arise and their properties. Understanding collective behaviors in many contexts is difficult due to the problem of observation of the processes involved [@dcs]. However, human groups are arguably the easiest complex system to observe as we see them all around us in professional and personal contexts. The challenge in this case is, at least in part, knowing what to look for and how to interpret what is observed.
One of the exciting developments in understanding group collectivity and effectiveness has been the identification by Wegner [@wegner1987transactive] of the concept of transactive memory, a form of “group mind” by which individuals who share experiences and know each other can recover memories more effectively than they can separately. Originally studied in couples, this concept has motivated wide-ranging research on how it arises and on its properties, including the use of transactive memory in professional teams [@hollingshead1998communication; @hollingshead1998retrieval; @brandon2004transactive; @lewis2003measuring; @lewis2004knowledge; @alavi2002knowledge; @moreland2006transactive; @austin2003transactive; @moreland2000exploring; @reagans2005individual; @mohammed2010metaphor; @anand1998organizational; @kanawattanachai2007impact].
The recovery of memories is one of many group behaviors that might be identified. A wider-ranging body of research has considered a variety of group qualities linked to team task performance [@levi2015group; @stewart1999team; @thompson2008making; @levine1998small]. While it makes intuitive sense that some level of individual ability is linked to a group’s ability to perform certain tasks, other attributes have also been considered, including individual attributes associated with social behavior, such as self-esteem, social sensitivity, social loafing, social status, and gender, and group processes such as goal setting, cohesion, trust, conflict, and sharing of communication time [@latane1979many; @tziner1985effects; @levine1990progress; @karau1993social; @o1994review; @mullen1994relation; @klein1995two; @dirks1999effects; @devine1999teams; @duffy2000performance; @duffy2002social; @locke2002building; @beal2003cohesion; @de2003task; @hong2004groups; @kerr2004group; @morris2005bringing; @ilgen2005teams; @van2007work; @horwitz2007effects; @lepine2008meta; @van2008group; @kleingeld2011effect; @chiaburu2008peers; @woolley2010evidence; @shaw2011contingency; @de2012paradox; @waring2012cooperation; @duffy2012social; @de2014team; @bates2017smart].
Another key group behavior that has been studied extensively is collective decision making. Two orthogonal mechanisms for group decisions are (1) assignment of decision making to an individual who makes decisions for the group as a whole, and (2) shared or collective decision making processes. The analysis of the former proceeds through discussions of “power” in many forms [@hobbesleviathan; @john1959bases; @galbraith1983anatomy; @mann1984autonomous; @french2001bases; @keltner2003power; @fiske2007social; @michael1986sources; @mann1993sources; @mann2012sources3; @michael2012sources4; @baryam2018power], while the latter has been extensively studied through analysis of voting systems (social choice theory) and negotiation.
Decisions by voting and negotiation are generally considered to arise from what each individual would decide alone if given the decision authority (individual preference). Social choice theory considers how voting systems aggregate individual preferences with sometimes counterintuitive results [@de2014essai; @arrow2012social; @gibbard1973manipulation; @satterthwaite1975strategy; @elster1989foundations; @siegenfeld2018negative]. Negotiation, often defined as combining two or more divergent preferences into a joint agreement that must be unanimously accepted, originates in conflict in a zero-sum context [@pruitt2013negotiation; @lewicki2015negotiation; @fisher2011getting]. It is often considered similar to or even an example of game theory [@von2007theory; @myerson2013game; @luce2012games] in which the decisions autonomously adopted by each individual have consequences for others and therefore decisions by individuals are linked. Actions involve exchanging information about preferences, making promises or threats. Negotiation may, however, also include integrative (mutual benefit) considerations due to non-conflicted positive sum aspects of agreement or benefits of long term relationships [@pruitt2013negotiation; @lewicki2015negotiation; @fisher2011getting].
Another dimension of decision making in groups considers how one individual can influence another, potentially resulting in fads and panics, and describes how common behavior and cultures arise [@bikhchandani1992theory; @kindleberger2003manias; @helbing2000simulating; @harmon2015anticipating; @Axelrod1997]. At the individual level, decision theory considers how individuals actually do, or ideally should, make decisions given individual values and uncertainty [@raiffa1974applied; @slovic1977behavioral; @einhorn1981behavioral; @berger2013statistical; @kahneman2013prospect; @talebincerto].
In this paper we are interested in how groups engage in decision making so as to benefit from combining individual capabilities beyond just their preferences. For example, a group of individuals in a late afternoon meeting might decide to go out for dinner and where to go. While one person may have some of the information about what is a good decision, another may have additional information that modifies that decision. The best restaurant identified by one might be closed for a renovation known to another. One knowledgeable individual may be reluctant to express their thoughts unless they are encouraged by another. The focus of this paper is on characterizing the sequence of communication among individuals that gives rise to collective decision making in groups. As with transactive memory and group task performance, as well as voting and negotiation, we are not focused on the role of power itself, as in “Who makes the decision?” [@davis1976decision; @solomon2014consumer], but on the way multiple individuals contribute to decision making. Recent discussions about debate in international relations point to several important distinctions in how people engage in collective decision making that are relevant [@risse2000let]. In the first, rational actors with fixed preferences choose actions to optimize their utility in a social context, an essentially game theoretic approach. In the second, social interactions follow a set of rules and procedures reflecting an assumption that mutual goals exist and are at least in part embodied by those social rules. Finally, actors also engage in meta communication about which assumptions are correct and what rules of social discourse apply in specific contexts. The latter opens the door to changes in individual opinions that affect the collective outcomes.
Thus, collective decision making is not in general specified by rigid voting frameworks, and need not be either zero sum or reflect fixed and rigid individual priorities. Members of a group engaged in decisions need not be assumed to disagree. They may learn from each other. They may have values that are not entirely selfish, and may not even be focused on the particular decision at hand, considering other issues as equally or more important. For example, they may care more about group cohesion or the decision making process. In the meta-view of their engagement in decisions, they may even consider each other to play a constructive part in a collective process of decision making that is shared. It is useful to consider examples that are mutual, like where to go for dinner, whether or not they involve conflict at some point. Our interest is in the way interpersonal interactions give rise to collective decisions in such a context. The dynamics of communication arise from a much more fluid set of interactions than considered in voting. Kantor’s theory of interpersonal interactions [@kantor1975inside; @kantor2012reading] provides a framework for characterizing the set of communications that occur in such contexts. We formalize this framework within a mathematical theory of social collective behaviors. Compared to voting options in social choice theory, Kantor’s theory considers a larger set of possible actions that combine to form a dynamic decision making process. However, the theory still identifies a limited number of categories of actions, providing a way to clarify the dynamics of the large set of communications that can arise. The theory can be mapped onto real world decision making, including both informal group discourse and the more formal processes widely used, e.g., in Robert’s Rules of Order [@robert1896pocket]. The complexity of the latter reflects the need to capture important details that are not embodied in social choice theory.
In voting, the available actions are votes choosing among a predetermined set of options based on preferences, or abstention. This does not include the action by which the set of options is determined, an essential part of the process of decision making. Thus it is natural to include four possible options: move (propose a new action), follow (vote in the affirmative), oppose (vote in the negative), and bystand (abstain). (We note that, as in social choice theory, sequential comparison of collective preferences may give rise to cyclical behaviors that do not converge to decisions. In expanding the set of options we make no a priori judgement about whether the dynamics of the system is functional.) Kantor’s theory posits that these four actions constitute the essential categories of communications in a group. In addition to distinguishing these four actions, there are different domains in which such actions can take place that are directly or indirectly relevant to the action taken. Kantor’s theory presents three: power, meaning and affect. Others may be added in the fundamental description of the theory. Each domain can have each of the four possible actions.
The basis of our contribution is to show that Kantor’s theory is a universal characterization of the interactions within small groups of complex entities engaged in collective behavior, with wide applicability to biological and social systems at all levels of organization. It is useful because it reduces the many possible communications by individuals in a group to a small set of categories that can be used to abstract communication patterns. The model is well suited to identifying when collective behaviors are functional or dysfunctional, for individuals or for collectives, in a context of environmental demands and individual and group needs.
More technically, the universal process of decision making we describe is subject to the assumptions that (1) there is a high dimensional space of possible decisions, and (2) there are sequential communications of individuals to the entire group. The relevance of communication by each individual to the entire group is given by the connection between high dimensional potential decisions requiring a lot of information and the relevance of the entire group to the ultimate decision. This model is complementary to other models of decision making in which (a) all individuals make a synchronous contribution to the decision making, as described in many voting systems, but also in the collective dynamics found in neural networks, where the input (sensory information) is a high dimensional state and the output (decision or action) is either low dimensional (as in pattern recognition in feedforward networks, which combines sequential and parallel communications) or high dimensional (as in memory retrieval in attractor networks) [@dcs], and/or (b) there are only a binary choice or a few options of action. The latter is also consistent with a determination of a reduced dimensional set of options that is extrinsic to the process of decision making (this case is in part modeled by voting theory, in which voting options are pre-specified). We will see that cases previously discussed in social choice theory arise as special cases of the asynchronous sequential dynamic decision making that we discuss here. Because of its sequentiality and high dimensional decision making, the process we describe is most relevant to small group decision making, and so we consider it a universal process of small group decision making.
A large group decision making process may be linked to small group decision making through the relevance of asynchronous communication to the small group that leads to a decision that is subsequently adopted by decision of a larger group through a synchronous decision making process (e.g. a vote). Or a hierarchical embedding of small within large or large within small group decisions can lead to hybrid processes. Thus a small number of large groups can interact sequentially overall and synchronously within, or a large collection of small groups can interact sequentially within and synchronously overall. Examples can be found in the real world.
Dynamic individuals and communications
======================================
Individuals and groups have complex behaviors whose complete description would require high dimensional time histories. However, remarkably, if we restrict our attention to characterizing the relationships among actions of individuals, we can project the description to a few dimensions. The relationships among actions of individuals are precisely the dimensions that are the building blocks of the collective behaviors of the group. This reduced set of dimensions, as well as how individuals determine their actions within these dimensions, and the relationship of these actions to group behavior, is the core of the theory by Kantor.
An individual can be quite generally characterized in an abstract fashion as a dynamical vector of attributes, $\psi_i(t)$, where the index $i$ indicates which individual. It is helpful to distinguish between slowly varying aspects, i.e. aspects that do not change much on a selected time scale of interpersonal interactions, e.g. one hour, and those which typically do change on that time scale or faster. Each of these rapidly changing attributes is modified in response to external stimuli that include the environmental conditions, and the internal dynamics of the individual. The internal dynamics include the consumption of energy as well as the mutual influences of neurons giving rise to the physiological basis of thought. The environmental conditions include the sensory impressions that arise from attributes of other individuals.
Collective movement: decision and follow-through
================================================
We consider the role of interpersonal interactions to determine a direction of change of individuals relative to each other, and in particular to determine the pattern of collective action. Thus the nature of interactions is that they can point to, and the individuals can subsequently perform, movements that are coordinated. This enables individuals to act together as a group.
To illustrate the process of collective interactions we consider a group of individuals in a late afternoon meeting discussing where to go for dinner. Given a set of possible options about places to eat, these individuals perform a set of interactions that result in them going together to a certain place for dinner, or the interactions may alternatively result in individuals going to different places for dinner. Either way, the framework of understanding the role of these interactions is to be considered in relation to the determination of where to go for dinner. The movement to dinner can be represented as a displacement of the attribute coordinates of the group of individuals, $\{\psi_i (t)\}$, which include as one attribute the spatial position of an individual. A displacement of all individuals from one original location to one final location constitutes a collective coherent motion which is one common pattern of collective behavior.
Generalizing this specific example, we can consider how individual actions aggregate to form collective actions. Quite generally, we argue that collective behaviors are formed when a set of individuals perform a set of interactions over a certain period of time. These interactions then determine a set of actions that result in attribute displacements that occur over a subsequent period of time. The dynamics can be considered mathematically analogous to the motion of certain bacteria through a liquid medium [@berg1972chemotaxis]. These bacteria perform a tumbling motion, followed by straight line motion. This movement pattern can achieve rapid displacement while retaining the ability to direct the movement toward improved conditions, such as the presence of nutrients or absence of toxins. Since available information is local, the ultimate objective of movements is not known at the beginning of a set of movements, and episodic reassessment of the direction of motion is necessary. However, a continuous reassessment of the direction of motion would limit progress much more severely. Thus a balance between the frequency of reassessment of direction and the period of motion is necessary to optimize the rate of progress, given properties of the environment such as the steepness of gradients of nutrients. Similar considerations apply to the decision making of groups of individuals and subsequent collective behaviors.
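As an illustration, the run-and-tumble pattern can be sketched in a few lines; the gradient $c(x,y)=x$, the reassessment rule, and all parameter values below are illustrative assumptions, not part of the theory:

```python
import math
import random

def chemotaxis_progress(run_len, steps=600, seed=1):
    """Minimal 2D run-and-tumble walker on the nutrient gradient c(x, y) = x.

    The walker moves in a straight line for `run_len` steps, then reassesses:
    it keeps its heading if the run moved up-gradient, and otherwise tumbles
    to a new random heading. Returns the net progress up the gradient.
    """
    rng = random.Random(seed)
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)  # initial heading
    for step in range(steps):
        x += math.cos(theta)
        y += math.sin(theta)
        if (step + 1) % run_len == 0:
            # episodic reassessment: tumble only if conditions worsened
            if math.cos(theta) < 0:
                theta = rng.uniform(0.0, 2.0 * math.pi)
    return x
```

In this simplified sketch reassessment is free; in the biological setting each tumble costs time, which is what drives the balance between reassessment frequency and run length described above.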
There are distinct attributes of the actions of individuals during the time of decision making and during the time of follow through activity. During the decision making period, individuals located together in a single space typically perform actions in sequence, one person at a time. This is due to the exclusivity of collective channels of communication in a single space (as well as the internal requirements of the sequential response dynamics described in the theory). During the follow through activity, generally individuals perform actions in parallel, unless the pattern of activity agreed upon specifically precludes it. Thus, during the decision about where to go for dinner, each person speaks in turn. During the period of walking to dinner, all are walking in parallel. The reason for the difference can be understood from the process of achieving consensus during the initial period, followed by the possibility of independent actions that are coherent in the second period.
The requirements of the process of alternating decision with action also imply a need to ensure that decision processes achieve closure through a time bounded process. This is essential even in the absence of perfect decision making or complete information.
The example of people choosing to go to dinner is similar to the bacteria moving through a liquid medium in that both refer to spatial displacement. This should not be taken as a limitation. The same concepts can be applied quite generally to displacements in the very general attribute space having to do with the wide ranging set of possible activities engaged in by groups.
Discrete Communication Dynamics
===============================
The attributes of multiple individuals affect each other over time during the period of decision making in a group. Actions by one individual that impact another individual are generally considered to be communications. While we can formulate the theory directly in terms of the general individual attributes, $\psi_i(t)$, it is helpful to start by simplifying the discussion by mapping these attributes onto communications, discrete units of activity that have shared meaning among individuals, given by an attribute vector $\xi_i(t) = M(\psi_i(t)).$ The purpose of this mapping is to enable us to consider two statements or gestures that may appear quite different, but share a common meaning, as having a similar description. Thus, “Let’s eat at Sally’s diner” and “Let’s go to the same place we had dinner at last night” map onto a similar meaning if Sally’s was the place dinner was eaten the previous night, even though the actual words spoken are different. Even if the words spoken are the same, their tonality, tempo and accent vary among individuals, and across instances for one individual. Similarly, it is possible that two statements have similar words but entirely different meanings, whether because of intonation (sarcasm) or because a single key word in the statement is different, e.g., insertion of the word “not.” Unlike $\psi_i(t)$, for $\xi_i(t)$ we can assume that similarity in the attribute vector implies similarity of meaning in the domain of individual and group behavior. While specifics of the meaning map are interesting, and may play a role in our discussion, we bypass these details because they are tangential to the current purpose. Thus, for example, the issues that arise when meaning maps are not coincident among individuals are important but can be considered as an additional overlay to the theory we will develop.
In particular we are interested in how one communication is related to previous communications: $$\xi_i(t) = H[\{\xi_j (t')\}]$$ where $\xi_i(t)$ is the communication of individual $i$, and $\{\xi_j (t')\}$ are the set of communications by individuals at prior times, and $H$ is a function that specifies how the current behavior is related to previous communications.
Such generalized models of dynamical behaviors have been used to describe the changing firing patterns of neurons in the brain, the development of color patterns on animal skins, the dynamics of panic in an auditorium, and other specific measures of individual and collective social attributes [@dcs]. The main difference in the human communication model is the possibility of complex high dimensional communications; while such complexity is present in other cases, it is frequently not described, as behaviors are abstracted to single bits at a specific time.
A specific statement may respond to only one of the previous communications. This would be written as: $$\xi_i(t) = H(\xi_{j(t)} (t'(t)))$$ The choice of which statement to respond to adds to the complexity of the communications. It is possible for an individual to respond to the immediately preceding statement, or to a specific statement that subsequently collects multiple responses until a different statement becomes the one responded to. The individual who is speaking may choose which previous statement to respond to as well as choose the response. Alternatively, there may be a separate mechanism by which the statement to be responded to is determined, for example by rules of the group about “motions” [@robert1896pocket]. This gives rise to a first form of meta communication: identifying what the topic of conversation is. More generally, statements may be a response to a collection of previous statements. For a first order analysis we consider only the case where an individual chooses a response to a specific previous statement, without rules for its determination.
Action categories: Move, Follow, Oppose, Bystand
================================================
A key characteristic of a communication about a potential future action of the group (in brief, a conversational “action”) is its relationship to previous communications by others. We distinguish four categories of action relationships. A communicated action that is orthogonal to a previous action is labeled a “Move”, a parallel action is a “Follow”, an anti-parallel action is an “Oppose”, and a non-action is a “Bystand”. This is an exhaustive set of possible high dimensional changes in a simple typology concerned with the process of individuals determining whether to move together and the direction in which to move. In greater detail:
- A significant action that is not related in a specific way to previous actions is an initiation of a direction of possible future movement of the group. This is labeled a “Move”. We can visualize it as a step of an individual in the direction of a particular place to eat, or the verbal analog of such a step, “Let’s eat at Sally’s diner”.
Mathematically this corresponds to a vector that is orthogonal to previous moves, i.e. is in a new direction: $$\begin{array}{ll}
\xi_i(t) \cdot \xi_j(t') &= 0 \\
| \xi_i(t) | &> 0
\end{array}$$ The first equation specifies that there is no projection of the action on the directions of previous action by others. The second equation specifies that the action has significant magnitude.
- A significant action that is in the same direction as a previous action is labeled a “Follow”. It corresponds to a step consistent with participating in a coherent collective action.
Mathematically this corresponds to a vector that is parallel to the previous move responded to: $$\begin{array}{ll}
\xi_i(t) \cdot \xi_j(t') &> 0 \\
| \xi_i(t) | &> 0
\end{array}$$
- A significant action that is in the opposite direction of a previous action is labeled an “Oppose”. It corresponds to a step inconsistent with participating in a collective action in the original direction.
Mathematically this corresponds to a vector that is anti-parallel to the previous move responded to: $$\begin{array}{ll}
\xi_i(t) \cdot \xi_j(t') &< 0 \\
| \xi_i(t) | &> 0
\end{array}$$
- Finally, a nonsignificant action, corresponding to staying in place, is labeled a “Bystand”.
Mathematically this corresponds to a vector of zero (small) magnitude: $$\begin{array}{ll}
| \xi_i(t) | &\approx 0
\end{array}$$
We might further characterize the magnitudes of specific action overlaps, and a single action may have nonzero projections along multiple prior directions of action. These can be considered as additions to the basic typology given above. In the basic typology we are concerned with the “sign” of the response by one individual to another, where the sign in this case is a three valued function, taking the values $\pm 1$ as well as zero.
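The typology above can be written as a small classifier over communication vectors. The following is a minimal sketch; the tolerance `eps` standing in for “zero” and the example vectors are illustrative assumptions, not part of the theory:

```python
import math

def classify(xi_new, xi_prev, eps=1e-9):
    """Classify a communication xi_i(t) relative to a prior action xi_j(t').

    Returns "Bystand" if |xi_new| ~ 0, otherwise "Follow", "Oppose", or
    "Move" according to the sign of the projection xi_new . xi_prev.
    """
    magnitude = math.sqrt(sum(c * c for c in xi_new))
    if magnitude <= eps:
        return "Bystand"              # |xi_i(t)| ~ 0
    proj = sum(a * b for a, b in zip(xi_new, xi_prev))
    if proj > eps:
        return "Follow"               # xi_i(t) . xi_j(t') > 0
    if proj < -eps:
        return "Oppose"               # xi_i(t) . xi_j(t') < 0
    return "Move"                     # orthogonal: a new direction
```

For example, with a prior Move along `[1, 0]`, the response `[2, 0]` is a Follow, `[-1, 0]` an Oppose, `[0, 3]` a Move, and `[0, 0]` a Bystand.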
Synchronous or asynchronous communications
==========================================
Communications among individuals who are part of a group are to varying degrees parallel/synchronous or serial/asynchronous. Where a communication medium is exclusive at a particular time, the communications are properly modeled as serial. Where they are not exclusive they may be in parallel. Typically, spoken auditory communication is considered to be exclusive, though it is possible for multiple individuals to speak at once; the difficulty with multiple parallel communications arises in the limitations on cognitive processing of the individual who receives multiple communications. Therefore, social norms more or less strictly limit spoken communications to be serial. Applause is another channel of auditory communication, one that is inherently synchronous. Visual communication, i.e. facial expression or posture, might be considered to be synchronous because each individual has a posture or expression at all times, and one person can see multiple individuals in their field of view. Still, expressions are often held for a period of time and changed at discrete times, whereby the changes of expression attract attention, and an individual generally pays attention to only one other individual at any particular time. Thus we see that these modalities are only approximately serial or parallel, but might be treated in one or the other way to first approximation.
The differences in serial and parallel communication channels are also relevant to the types of communication that occur. Moves are generally limited to be serial. Each Move, being a high dimensional communication distinct from prior actions, must be communicated and separately processed by the individuals who receive it. Moreover, the responses to one Move must be separated from the responses to other actions, in order to be meaningfully related to group behavior. On the other hand, Follows and Opposes can be highly parallel, as the amount of information communicated may be as simple as the fact of the follow or oppose, which can be communicated in a single bit of information. That is, considering Follows and Opposes together, the minimal communication in response to a Move is a binary response which is a vote of Follow or Oppose. Along with a Bystand, we have a single “active” bit and a “null” or abstain passive response. While this is the minimum amount of information in a response to a Move, this does not imply that every Follow or Oppose is restricted to this amount of information. Where such reduced information is applicable, the aggregation of information can be done through a tally (a vote) considered as a representation of the group preference. From this we see that the conventional abstraction of group interactions into proposal and voting is a simplified limit of the more complete group communication action model of Kantor. We also note that the result of a tally is itself a statement that is distinct from other statement types and can be responded to by individuals of the group.
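In this reduced-information limit a round of responses collapses to a vote. A minimal sketch of the tally, encoding Follow as $+1$, Oppose as $-1$, and Bystand as $0$ (the outcome labels are illustrative assumptions):

```python
def tally(responses):
    """Aggregate minimal responses to a Move into a vote.

    Each response is +1 (Follow), -1 (Oppose), or 0 (Bystand/abstain).
    The returned outcome is itself a statement the group can respond to.
    """
    net = sum(responses)
    if net > 0:
        return net, "adopted"
    if net < 0:
        return net, "rejected"
    return net, "tied"
```

For example, `tally([1, 1, -1, 0])` yields `(1, "adopted")`.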
Typical small group Follow or Oppose, and even Bystand, communications carry larger amounts of information than a single vote. Thus, the limitation of attention and processing by each individual restricts the number of parallel communications that can be effectively transmitted between individuals, especially when they are communicated among an entire group. Thus, it is reasonable to consider a serial model of communication and we will focus on such a model here. (We note that a fully parallel model could also be constructed. In such a model, each individual at each time would respond to the other individuals’ actions at a previous time, or at multiple previous times. Such a model of the response of each individual to the set of prior actions of the whole group would be similar to dynamic models of recursive neural networks, or recursive dynamic models of pattern formation in pigment cells [@dcs]. However, as mentioned previously, in such models each communication by a cell is generally represented as a single bit, and this simplifies the structure of the model in a different way.)
Asynchronous models have a key additional feature that must be considered: the order of response. In many abstract models of asynchronous communication, the next responder is chosen at random. However, in a social group context, individuals may have reasons or character traits that result in actions occurring in a certain order. Some individuals may, and generally do, act more frequently than others. Identifying the dynamics of this sequence must be considered part of the description of the group dynamics.
Thus, in an asynchronous model, parameters are needed to describe the degree to which an individual chooses to act, or not to act, i.e. their tendency toward inhibition. In the simplest model we will assume that there is a single parameter describing this tendency. Note that because a less reticent person will communicate before a more reticent person, and it is necessary at minimum to have two people to have a conversation, conversations may tend to be dominated by a few individuals, no matter how many people are present.
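A minimal sketch of such a single-parameter model: each individual carries an expressiveness weight (one minus their inhibition), and the next speaker is sampled in proportion to it. The names, weights, and sampling rule here are illustrative assumptions:

```python
import random
from collections import Counter

def simulate_turns(expressiveness, n_turns=1000, seed=42):
    """Sample the order of speakers in an asynchronous conversation.

    `expressiveness` maps each individual to a weight (1 - inhibition);
    the next speaker is drawn with probability proportional to it.
    Returns a Counter of how many turns each individual took.
    """
    rng = random.Random(seed)
    names = list(expressiveness)
    weights = [expressiveness[name] for name in names]
    return Counter(rng.choices(names, weights=weights, k=n_turns))
```

With weights of 0.8 versus 0.2, the less reticent individual takes roughly four times as many turns, illustrating how conversations come to be dominated by a few individuals regardless of group size.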
Moreover, since individuals are not required to act in most groups, it is generally not possible to know what the response of an individual is unless he or she chooses to voice it. This creates an additional dynamic of withheld or hidden information that is integral to group behavior and, by its very nature, difficult to characterize. While there are other mechanisms that obscure communications, including their degree of truthfulness, we see that hidden information is part of group communications even in the presence of truthfulness.
Indeed, the sequentiality and non-determinism of order give rise to a need for meta-communications about human group interactions. These meta-communications are directed at revealing hidden information, which may be key to group decision making, especially where exposing the participation or non-participation of individuals in group behaviors under consideration is relevant to the ultimate outcome, and thus to the consequences for the group. Sequentiality and meta-communications also give rise to the formal and informal structuring of conversations by “rules of order." Such rules, which formalize the structure of communications, are typically designed so that more individuals participate and the conversation is compelled to reach closure. Methods include polling of individuals, and social conventions about required participation in collective behaviors for certain poll results. Without such structures it may be impossible to determine what the collective decision is, as only some of the individuals may choose to participate in the communications, and typically only a few will. Rules of order structure the time for decision making and compel the group to remain together despite dissent.
We see that social choice theory and the rules of order can be considered to be founded on a more fundamental concept of group social interactions, which in a certain limit simplifies to those theories of group behavior. This suggests that the underlying conceptual basis we are developing can address important issues in group behavior.
Association dynamics
====================
The dynamics of group decision making is also inherently linked to the dynamics of group association, which characterizes the inclusion or exclusion of individuals, and the association or dissociation of the group as a whole. Among the relevant questions for our consideration are the coupling of group decision making to group association, and the coupling of the behavior of individuals during group decision making to their individual choice to participate, or the group choice to include or exclude them. Simply put, whether an individual’s preferences are aligned with group decisions affects both the group’s and the individual’s desire for the individual to be part of the group. For couples, these dynamics include whether to marry or divorce. For businesses, they include the incorporation and disbanding of associations. Group association dynamics can be refined to include the strength of association or level of group engagement.
A “zeroth" model of group association would have any decision be itself a decision to associate or dissociate. Each individual would act according to his/her own statements, resulting in the final mover and followers going together, the opposers going in different directions, and bystanders staying in place. A first model of group association would have a single parameter characterizing group adhesiveness or “surface tension," that tends to keep the group together. The strength of this force determines the degree to which opposers participate in the eventual group activity. Even a small amount of (positive) cohesiveness would imply bystanders participate.
Group decision making may also directly engage in determining group composition. This is the third type of meta-communication we have identified. In the discussion below, we will see that specific personalities may dominate this domain of discourse due to preferential reactions to different individuals.
Organizational structures
=========================
The purpose of organizational structures is to habituate roles of individuals in social communications.
Quite generally, groups organized around functions have mechanisms for achieving collective actions. These mechanisms include the dynamics of social interaction. In order to streamline the process of decision making so that the resources it consumes are reduced, the organizational structure can formalize the roles of individuals. The “leader" engages in Moves, and the others are followers engaging in Follows. Bystanders may exist to fulfill particular functions when needed. Consistent with the dynamics of progressive refinement of group association, individuals who tend to Oppose do not continue to be part of a group.
We will discuss later the functional benefits to the group of the dynamics of different roles under different circumstances. This will include the potential constructive role of opposers. Still, organizations that strive to include opposers, would tend to limit their roles in scope and effect.
Dynamic model
=============
A mathematical model of group decision dynamics expresses the type of action that is taken in terms of the types of actions that have been taken in the past. In the simplest form, this would depend only on the most recent move (i.e. a deterministic iterative map or a stochastic Markov chain). More generally, we can consider a dynamic model that depends on a certain number of previous messages, the messages within a time interval, the most recent message of each of the individuals in the team, the most recent message of a particular type, etc.
As a first approximation, it is conventional to consider a linear approximation of the dependency. We illustrate this for the case of a single communication depending on a single prior communication: $$\begin{array}{ll}
\xi_i(t) = h_i(t) + J_{i,j} \xi_j(t-1)
\end{array}$$ where $h_i(t)$ can be considered to be the intrinsic inclination of individual $i$, and $J_{i,j}$ is the (to first order fixed) linear response of individual $i$ to the comments of individual $j$. It is helpful to distinguish between three different cases for $J_{i,j}$. $J_{i,j}>0$ implies that $i$ tends to follow person $j$, $J_{i,j}<0$ implies that $i$ tends to oppose person $j$, and $J_{i,j} \approx 0$ implies $i$ tends to ignore person $j$.
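As a concrete illustration, this linear response can be simulated directly. In the sketch below, the dimensionality of the communication space, the group size, and the randomly drawn values of $h_i$ and $J_{i,j}$ are illustrative assumptions, not quantities fixed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 50  # assumed dimensionality of the communication/attribute space
n = 4   # assumed number of individuals in the group

# Hypothetical parameters: intrinsic inclinations h_i and couplings J_ij.
h = rng.normal(size=(n, D))
J = rng.normal(scale=0.5, size=(n, n))
np.fill_diagonal(J, 0.0)  # no self-coupling

def respond(i, j, xi_prev):
    """Linear response of individual i to individual j's previous
    communication: xi_i(t) = h_i(t) + J_ij * xi_j(t-1)."""
    return h[i] + J[i, j] * xi_prev

xi0 = rng.normal(size=D)      # an initial communication by individual 0
reply = respond(1, 0, xi0)    # individual 1's linear response
```

Here a positive $J_{1,0}$ tips `reply` toward `xi0` (a Follow), while a negative value tips it away (an Oppose).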
Testing the projection of the current on the previous state identifies the type of communication at time $t$ in terms of the communication at time $t-1$. $$\begin{array}{ll}
\xi_i(t) \cdot \xi_j(t-1) &= h_i(t) \cdot \xi_j(t-1) + J_{i,j} \xi_j(t-1) \xi_j(t-1) \\
&= h_i(t) \cdot \xi_j(t-1) + J_{i,j} |\xi_j(t-1)|^2
\end{array}$$ Considering only the sign $\pm,0$, i.e. the type of communication: $$\begin{array}{ll}
\text{sign}(\xi_i(t) \cdot \xi_j(t-1)) &= \text{sign} \left( h_i(t) \cdot \xi_j(t-1) + J_{i,j} |\xi_j(t-1)|^2 \right)
\end{array}
\label{master}$$ There are two different cases to consider:
The first case is if the comment by individual $j$ happens to have a projection, either positive or negative, along the direction of the intrinsic preference of individual $i$. In this case, we have a competition between the two terms. If the projection is positive and $i$ is inclined to follow $j$ the response will be a Follow. If the projection is negative, and $i$ is inclined to oppose $j$ the response will be an Oppose. If the projection is positive, and $i$ is inclined to oppose, or if the projection is negative, and $i$ is inclined to follow, the result will be determined by a balance between the amplitudes of each effect.
The second case is if the comment by individual $j$ does not have a significant projection along the direction of the intrinsic desire of individual $i$. We note that because of the high dimensional nature of the possible communications, this can be expected to be the default case, i.e. a random choice of communication by individual $j$ would not have a projection along the intrinsic desires of individual $i$. (The magnitude of the inner product of two random vectors is the result of a random walk with a number of steps given by the number of vector components. The magnitude is proportional to the square root of the number of steps, which, for a large number of components, is small compared to inner products of aligned vectors, which scale as the number of components [@dcs].) Only when the space of possible comments is for some reason limited is it likely that the first case, where there is an intrinsic preference by the individual, applies. A simplest model would therefore set the first term to zero, eliminating the role of the intrinsic preference of the individual. In this case, we have the result that: $$\begin{array}{ll}
\text{sign}(\xi_i(t) \cdot \xi_j(t-1)) &= \text{sign}(J_{i,j}) (1 - \delta(\xi_j(t-1),0))
\end{array}
\label{master_response}$$ Neither factor depends on the specific content of the message. This equation has the immediate interpretation that to first (linear) order the projection onto the previous message, i.e. the category of the move, is independent of the message content. The response of one individual to a comment by another is then dependent only on the persistent relationship between the individuals. The final factor on the right of Eq. (\[master_response\]) implies that to linear order a null communication has a null response.
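The random-walk argument for why a random comment has only a small projection on another individual's intrinsic preference can be checked numerically. The sketch below uses random $\pm 1$ vectors; the dimensions and trial counts are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Inner products of independent random +-1 vectors behave like a D-step
# random walk: typical magnitude ~ sqrt(D), whereas a vector's inner
# product with itself is exactly D.
rel = {}
for D in (100, 10_000):
    trials = 500
    a = rng.choice([-1, 1], size=(trials, D))
    b = rng.choice([-1, 1], size=(trials, D))
    # mean relative overlap |a . b| / D across trials
    rel[D] = np.abs((a * b).sum(axis=1)).mean() / D

print(rel)  # the relative overlap shrinks roughly as 1/sqrt(D)
```

The relative overlap is already below ten percent at $D = 100$ and falls by another order of magnitude at $D = 10{,}000$, consistent with the $1/\sqrt{D}$ scaling.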
The analysis thus far focuses on responses, ranging from Follow to Oppose, in terms of the prior communication. A Move arises when the projection of $\xi_i(t)$ on $\xi_j(t-1)$ is small. This occurs for an individual for whom the influence of others is small ($J_{i,j} \approx 0$) or in a context in which the prior communication is itself of low magnitude, $|\xi_j(t-1)|\approx 0$. In this case the communication will be in the direction of the intrinsic desire of the individual, $h_i(t)$, with no significant projection on prior communications, i.e. a Move.
Finally, a Bystand occurs when an individual neither has a strong preference $h_i(t)$ nor a strong interaction $J_{i,j}$, or in case the two are balanced against each other. The case where both are small may also arise because of an inhibited response, which could be represented by an overall multiplicative factor that suppresses response. The inability, to first order, to observe the difference between inhibited responses and lack of preference is consistent with the actual challenge of understanding human response. The dynamics of the sequentiality of acts must be considered carefully. People act, or wait for a turn to act, in the context of group interactions. Absence of a statement may be a wait state or may, under some conditions, constitute a Bystand — i.e. does the person have an intended action and is waiting for their turn to speak, or are they choosing not to act?
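A minimal classifier implementing this four-way distinction can be sketched as follows. The normalized-projection threshold and the near-zero tolerance `eps` are illustrative assumptions rather than values given by the model.

```python
import numpy as np

def classify_act(xi_t, xi_prev, thresh=0.3, eps=1e-8):
    """Classify a communication xi_t relative to the prior one xi_prev:
    near-zero content -> Bystand; strong positive projection -> Follow;
    strong negative projection -> Oppose; otherwise -> Move."""
    if np.linalg.norm(xi_t) < eps:
        return "Bystand"
    # normalized projection of the current message on the previous one
    overlap = np.dot(xi_t, xi_prev) / (
        np.linalg.norm(xi_t) * np.linalg.norm(xi_prev) + eps)
    if overlap > thresh:
        return "Follow"
    if overlap < -thresh:
        return "Oppose"
    return "Move"
```

For any nonzero `v`, `classify_act(v, v)` returns `"Follow"` and `classify_act(-v, v)` returns `"Oppose"`; a nonzero message orthogonal to the prior one is a `"Move"`.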
The model we have developed focuses our attention on the role of the intrinsic preference of the individual and the role of the intrinsic tendency of one individual in response to each of the other individuals independent of the content of the messages.
Power, Meaning, Affect
======================
When participants in a conversation make statements, their statements may be directed at moving the group directly to collective action, or may address cognitive, relational, or other aspects of the system, including meta-conversations about the subject, the process of the dialog, or the group dynamics. In general, a high dimensional attribute vector will have subdomains of attributes according to the modularity of the system. The nature of the modularity will vary among different kinds of systems. Here, some degree of non-universality enters into the discussion for application to different systems, though we expect that universality will still be significant. Statements can be categorized by the domain they refer to. A different way to categorize statements is by the fundamental contributors to human response. For human social interactions, we identify three sub-domains of individual actions and their collective analogs. They are physical action oriented, cognitive meaning oriented, and affect oriented. From the point of view of the underlying vector of individual attributes, we can adopt conceptual or physiological languages to describe these different domains.
Conceptually, the action oriented domain is associated to group behavior that manifests in physical activity, that of cognitive meaning is associated to changes in the interpretation of external stimuli or human communication, that of affect is associated with how individuals in the group feel about each other. We note that affect can be involved in how someone feels about a particular course of action, but the distinguishing characteristic of affect is its role in group relationships: “I feel we should go together to Sally’s diner" is an affect statement, while “I feel we should go to Sally’s diner" may not be. In terms of neurophysiological function, the action domain characterizes motor activity of motor neurons and muscles, the meaning domain characterizes cognitive processing, the affect domain characterizes the coupling of neural and endocrine systems.
These three domains are identified by Kantor with the terms Power, Meaning and Affect. The use of the term “Power" is suggested by the direct relevance of statements in this domain to action and when simply articulated are directives to action, though actual power over decisions may be shared due to statements made by others. The term “Meaning" is suggested by the role of cognition in evaluating options for action. The term “Affect" is linked to the distinct importance of emotional aspects of interactions to decision making by the group.
Since communications among individuals are input through sensory channels, each of these domains is a characterization of part of the neural processing associated with response to environmental input, and may have associated output in the form of verbal or other communications. They are distinct in that, physical action is ultimately realized by non-verbal physical behavior, meaning is generally communicated verbally, affect is manifest through impacts on patterns of response and behavior.
We will see from later analysis of the underlying mathematics of these interactions that the roles of these statements have implications for (a) directing action, (b) directing preferences about actions, (c) directing preferences about relationships among individuals. These essential distinctions suggest a degree of universality for these categories.
Each of the domains interacts with the other domains. For example, cognitive processing impacts on both actions of the motor system and the emotions associated to the endocrine system. The emotional state impacts on cognitive interpretation and actions. Actions taken impact emotional states and cognitive processing both through their external manifestations (including communications) and internal consequences.
There are other aspects of discourse that may also be separated. One is that of values. Value statements potentially include a role in changing the group priorities that can affect all aspects of discourse including actions, preferences about actions, and preferences about relationships among individuals. In this paper we focus on Power, Meaning and Affect.
Mathematically, it is possible to write the vector of attributes of an individual, or the vector of attributes of the meaning of communications, as approximately partitioned into the three sub-domains, $\psi_i(t) = \{\psi^p_i(t),\psi^c_i(t),\psi^e_i(t)\}$ and $\xi_i(t) = \{\xi^p_i(t),\xi^c_i(t),\xi^e_i(t)\}$.
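A minimal sketch of such a partition is shown below, assuming (purely for illustration) equal-sized sub-domains for the power, cognitive, and emotional components.

```python
import numpy as np

# Assumed sub-domain sizes; in general these need not be equal.
D_P, D_C, D_E = 10, 10, 10

def partition(xi):
    """Split an attribute/communication vector into its power (p),
    cognitive (c), and emotional (e) sub-domain components."""
    return {
        "p": xi[:D_P],
        "c": xi[D_P:D_P + D_C],
        "e": xi[D_P + D_C:D_P + D_C + D_E],
    }

xi = np.arange(D_P + D_C + D_E, dtype=float)
parts = partition(xi)
```

The dynamics of the earlier sections can then be applied to each sub-domain component separately or jointly.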
Specific models characterizing the ways each of these domains impacts on the other in terms of relevant mathematical equations can be constructed.
For this paper it is more central to note that while it is intuitive to consider collective behaviors as characteristic of physical action, it is possible for the group to collectively perform actions that are primarily cognitive or affective. Correspondingly, the types of acts described earlier (Move, Follow, Oppose, and Bystand) can occur separately in each of these domains. Thus it is possible to perform a Move in the emotional or meaning domain as well as in the physical action domain. The partition of the domains is useful for the analysis of patterns of social interaction. In particular, it enables classifying a wide range of communications whose roles would not be apparent if the only type of collective action considered were physical action.
The interplay of Power, Meaning and Affect leads to a difficulty in disentangling them. By focusing on their primary roles, we can build a model that can be subsequently generalized to address other secondary roles that may be the most important under certain circumstances or for certain individuals. A first model of the dynamical roles of Power, Meaning and Affect is as follows:
Power characterizes behavior that has consequences measured directly in terms of goals of the system—typically in this society these include health, fame and fortune, among others, and may vary in other contexts. An example is “Let’s eat at Sally’s diner," which can be understood as aimed at achieving a goal oriented activity for the group aimed at health, assuming that it is dinner time and people are hungry, and the food at Sally’s diner is good.
Meaning actions are designed to impact on the members of the group and only indirectly on actions to achieve the goals of the group. In particular, they change the intrinsic desires of members of the group, $h_i$. An example of a Move in meaning space is “I saw a cockroach at Sally’s diner." The purpose of such a communication is two fold. First, so that the intrinsic preferences of individuals are modified so that subsequent actions quite generally and not just a specific one under discussion, are more likely to achieve goals (i.e. health). Second, so that the intrinsic preferences of the individuals of the group are more aligned with each other. Since a Move generally is in the direction of the intrinsic desire of an individual, this changes the analysis of the model in that $h_i(t)$ is partly aligned with $h_j(t)$. The result of such a Move is that future communications by one individual are more likely to be aligned with those of other individuals, leading to increased probability of collective behavior.
Affect actions are designed to impact on the interpersonal relations within the group, and similar to meaning actions, they only indirectly impact on actions to achieve the goals of the group. In particular, they change the interaction matrix $J_{i,j}$. The central aspect of affect in this model is whether an individual wants to do something with another individual. The role of affect can be better understood when it is recognized that there are multiple satisfying patterns of social interaction that involve consistency among the actions taken. In particular, two individuals who both desire to be together, or both desire not to be together, can both be satisfied. On the other hand, two individuals are in a mutually incompatible situation if the first desires to be with the second, while the second does not desire to be with the first. Thus, desires of individuals can result in functional or dysfunctional states, depending on whether they are mutually consistent. Since the differences between these states are not inherently contained in objective conditions, they cannot be resolved by cognitive analysis. Affect enables individual actions to be selected based upon the self-consistency of relationships. Acts in affect promote a self-consistent pattern of mutual support or mutual antagonism, by adjusting the sign of $J_{i,j}$ to be equal to the sign of $J_{j,i}$. An example of a Move in affect space is “I enjoy having dinner with you." The purpose is to communicate the positive sign of $J_{i,j}$ in order to prompt the second individual to recognize that mutuality is possible by a change in affect and therefore cause such a change in affect, i.e. a more positive $J_{j,i}$. Affect actions directly target mutuality and are inherently related to collectivity of a group, i.e. its tendency to act together or separately. 
Given that the high complexity of possible actions implies that most if not all Moves will be orthogonal to the intrinsic desires of others, as discussed before, the role of affect is critical in collective action.
In summary: Power actions are for achieving goals. Meaning is for aligning preferences with external / intrinsic consequences. Affect is for aligning desires of mutuality. We see that individual and collective movements can occur in these three domains, and can be analyzed in the same language. However, because each of these plays a different role in the individual, the significance of these individual and collective movements for the group are distinct.
Model parameters and their dynamics
===================================
We consider individual inclinations $h_i(t)$ to be time dependent in the sense that an individual may get hungry and want to go to eat dinner. Other aspects of the inclination of an individual may be more persistent, such as the causes a person is devoted to. Also, as previously discussed, communications in Meaning can affect the individual inclinations.
We consider $J_{i,j}$ to be time dependent. A reasonable first model for such changes is that communications of two individuals that are aligned (anti-aligned) tend to make their $J_{i,j}$ more positive (more negative), respectively. This corresponds to mathematical models of Hebbian imprinting in neural learning, where the individuals are analogs of neurons. This form of learning reinforces patterns of social interaction so that certain individuals tend to become more positive or negative relative to each other, causing those who tend to act in concert to be more likely to act in concert in the future, and similarly those who act in opposition to be more likely to act in opposition in the future. It also tends to make those who have compatible preferences also have mutually reinforcing interpersonal relationships. Also, as previously discussed, communications in Affect can affect the interpersonal interactions.
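A sketch of this Hebbian-style update is given below, assuming the normalized overlap of two individuals' latest communications drives the change in $J_{i,j}$; the learning rate and the use of a normalized overlap are modeling assumptions.

```python
import numpy as np

def hebbian_update(J, xi, eta=0.01):
    """Update couplings J_ij from the latest communications xi (one row
    per individual): aligned communications push J_ij positive,
    anti-aligned communications push it negative."""
    n = J.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # no self-coupling
            overlap = np.dot(xi[i], xi[j]) / (
                np.linalg.norm(xi[i]) * np.linalg.norm(xi[j]) + 1e-12)
            J[i, j] += eta * overlap
    return J
```

Repeated application makes pairs who act in concert more positively coupled, and pairs who act in opposition more negatively coupled, as described above.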
Classification of individuals
=============================
Different individuals tend to adopt different types of actions, which leads to a means for classifying them. The reasons for these choices are embedded in persistent rather than dynamic properties of individuals, and are therefore to be identified as personality types. We can identify the types of actions preferred by individuals from the categories of “Move, Follow, Oppose, Bystand."
In the mathematical model, an individual is identified by the parameters $h_i$ and $J_{i,j}$ for a particular value of $i$. These parameters generally vary for different values of the index, $j$, i.e. in relation to different people, and at different times. We identify personalities by postulating that for a particular individual these parameters have characteristic values that persist over time and satisfy identifiable patterns across people.
Those who frequently perform just one of the types of actions can be readily identified:
A person with strong preferences $h_i$ and weak interactions $J_{i,j}$ is goal/content driven, and frequently Moves.
A person who has strong interpersonal interactions $J_{i,j}$ and weak preferences $h_i$ will respond to other moves. They Follow, if $J_{i,j}$ tends to be positive, or they Oppose, if $J_{i,j}$ tends to be negative.
A person with weak preferences and weak interactions tends to Bystand. As discussed previously, an additional overall amplitude parameter can be considered as describing an inhibition of response. Thus, a persistent bystander may have externally unexpressed preferences and interactions.
Among other insights, we see that those who are frequent movers, often identified as “leaders," have a tendency toward more limited social interactions compared to those who are followers and opposers.
Certain combinations of parameters also tend to generate more complex response patterns, specifically when the tendency is to adopt more than one type of action according to the specifics of the circumstances. A person with strong preferences and strong interactions has a varied response. Such a person, in order to reduce conflicts between preferences and interactions, may also tend to engage in Meaning acts, through whose adoption by the group the conflict will be reduced. A person with weak preferences and strong interactions that vary significantly among individuals, i.e. with strong interpersonal preferences, will tend to follow some and oppose others. In order to reduce conflicts between following and opposing individuals, such a person may engage in Affective acts. Alternatively, and in some sense equivalently, such an individual may act in the dynamics of selecting group membership.
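The parameter-based classification above can be summarized in a small sketch; the strong/weak threshold and the use of the mean coupling sign to separate followers from opposers are illustrative assumptions.

```python
import numpy as np

def personality(h_i, J_i, weak=0.1):
    """Map an individual's parameters (preference vector h_i, vector of
    couplings J_i to the other individuals) onto the dominant act type."""
    strong_h = np.linalg.norm(h_i) > weak
    strong_J = np.max(np.abs(J_i)) > weak
    if strong_h and not strong_J:
        return "Mover"
    if strong_J and not strong_h:
        return "Follower" if np.mean(J_i) > 0 else "Opposer"
    if not strong_h and not strong_J:
        return "Bystander"
    return "Mixed"  # strong preferences and strong interactions
```

The `"Mixed"` case corresponds to the more complex response patterns discussed above, where conflict between preferences and interactions may prompt Meaning or Affective acts.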
Summary
=======
The understanding of group interactions related to decision making may be advanced by considering a limited typology of communication acts. We have shown that it is possible from general mathematical considerations to identify four types of actions, $\{$Move, Follow, Oppose, Bystand$\}$, as universal, and to use simple mathematical dynamic models to advance the understanding of group dynamics. We have explored some of the implications, including identifying personality types and the relationship between the types of actions taken and the types of personalities present, as described by the model. The model we obtained follows closely the discussion and analysis of conversations in families and businesses by Kantor [@kantor1975inside; @kantor2012reading]. The usefulness of this model in diagnosing functional and dysfunctional behaviors and interventions has already been demonstrated.
[10]{}
Y. Bar-Yam, [*Dynamics of [C]{}omplex [S]{}ystems*]{} (Westview Press, Reading, MA, 1997).
D. M. Wegner, Transactive memory: A contemporary analysis of the group mind, in [*Theories of group behavior*]{} (Springer, 1987), pp. 185–208.
A. B. Hollingshead, Communication, learning, and retrieval in transactive memory systems, [*Journal of experimental social psychology*]{} [**34**]{}, 423 (1998).
A. B. Hollingshead, Retrieval processes in transactive memory systems., [ *Journal of personality and social psychology*]{} [**74**]{}, 659 (1998).
D. P. Brandon, A. B. Hollingshead, Transactive memory systems in organizations: Matching tasks, expertise, and people, [*Organization science*]{} [**15**]{}, 633 (2004).
K. Lewis, Measuring transactive memory systems in the field: scale development and validation., [*Journal of applied psychology*]{} [**88**]{}, 587 (2003).
K. Lewis, Knowledge and performance in knowledge-worker teams: A longitudinal study of transactive memory systems, [*Management science*]{} [**50**]{}, 1519 (2004).
M. Alavi, A. Tiwana, Knowledge integration in virtual teams: The potential role of KMS, [*Journal of the American Society for Information Science and Technology*]{} [**53**]{}, 1029 (2002).
R. L. Moreland, L. Thompson, Transactive memory: Learning who knows what in work groups and organizations, [*Small groups: Key readings*]{} [**327**]{} (2006).
J. R. Austin, Transactive memory in organizational groups: the effects of content, consensus, specialization, and accuracy on group performance., [ *Journal of Applied Psychology*]{} [**88**]{}, 866 (2003).
R. L. Moreland, L. Myaskovsky, Exploring the performance benefits of group training: Transactive memory or improved communication?, [*Organizational behavior and human decision processes*]{} [**82**]{}, 117 (2000).
R. Reagans, L. Argote, D. Brooks, Individual experience and experience working together: Predicting learning rates from knowing who knows what and knowing how to work together, [*Management science*]{} [**51**]{}, 869 (2005).
S. Mohammed, L. Ferzandi, K. Hamilton, Metaphor no more: A 15-year review of the team mental model construct, [*Journal of management*]{} [**36**]{}, 876 (2010).
V. Anand, C. C. Manz, W. H. Glick, An organizational memory approach to information management, [*Academy of management review*]{} [**23**]{}, 796 (1998).
P. Kanawattanachai, Y. Yoo, The impact of knowledge coordination on virtual team performance over time, [*MIS quarterly*]{} pp. 783–808 (2007).
D. Levi, [*Group dynamics for teams*]{} (Sage Publications, 2015).
G. L. Stewart, C. C. Manz, H. P. S. Jr., [*Team work and group dynamics*]{} (John Wiley and Sons, 1999).
L. L. Thompson, M. Thompson, [*Making the team: A guide for managers*]{} (Pearson/Prentice Hall, 2008).
J. M. Levine, R. L. Moreland, Small Groups: An Overview, [*Key Readings in Social Psychology*]{} p. 1 (1998).
B. Latan[é]{}, K. Williams, S. Harkins, Many hands make light the work: The causes and consequences of social loafing., [*Journal of Personality and Social Psychology*]{} [**37**]{}, 822 (1979).
A. Tziner, D. Eden, Effects of crew composition on crew performance: Does the whole equal the sum of its parts?, [*Journal of Applied Psychology*]{} [ **70**]{}, 85 (1985).
J. M. Levine, R. L. Moreland, Progress in small group research, [*Annual review of psychology*]{} [**41**]{}, 585 (1990).
S. J. Karau, K. D. Williams, Social loafing: A meta-analytic review and theoretical integration., [*Journal of Personality and Social Psychology*]{} [**65**]{}, 681 (1993).
A. M. O’Leary-Kelly, J. J. Martocchio, D. D. Frink, A review of the influence of group goals on group performance, [*Academy of Management Journal*]{} [**37**]{}, 1285 (1994).
B. Mullen, C. Copper, The relation between group cohesiveness and performance: An integration., [*Psychological Bulletin*]{} [**115**]{}, 210 (1994).
H. J. Klein, P. W. Mulvey, Two investigations of the relationships among group goals, goal commitment, cohesion, and performance, [*Organizational Behavior and Human Decision Processes*]{} [**61**]{}, 44 (1995).
K. T. Dirks, The effects of interpersonal trust on work group performance., [*Journal of Applied Psychology*]{} [**84**]{}, 445 (1999).
D. J. Devine, L. D. Clayton, J. L. Philips, B. B. Dunford, S. B. Melner, Teams in organizations: Prevalence, characteristics, and effectiveness, [*Small Group Research*]{} [**30**]{}, 678 (1999).
---
author:
- '[Goran Jelic-Cizmek]{},'
- '[Francesca Lepori]{},'
- '[Camille Bonvin]{} and'
- '[Ruth Durrer]{}'
bibliography:
- 'refs.bib'
title: On the importance of lensing for galaxy clustering in photometric and spectroscopic surveys
---
Introduction {#s:intro}
============
At present, the most important data set to constrain cosmological models comes from observations of the cosmic microwave background (CMB) anisotropies and polarization [@Aghanim:2018eyx]. However, to optimally probe the growth of structure at late times, the CMB needs to be complemented by observations at low redshift. The 3-dimensional distribution of galaxies provides a powerful way of measuring the growth rate of structure in the late Universe, which is highly sensitive to the theory of gravity. This growth rate has been successfully measured by several surveys like SDSS [@10.1093/mnras/stu197; @10.1093/mnras/sty506], WiggleZ [@Blake_2011] and BOSS [@10.1093/mnras/sty453; @Li_2016], allowing us to test the consistency of General Relativity over a wide range of scales and redshifts. This success has triggered the construction of various large-scale structure (LSS) surveys planned for the coming decade, like Euclid [@Laureijs:2011gra; @Amendola:2016saw; @Blanchard:2019oqi], LSST [@Abate:2012za; @Abell:2009aa], DESI [@Aghamousa:2016zmz], SKA [@Maartens:2015mra; @Santos:2015gra], SPHEREx [@Korngut2018SPHERExAA] and WFIRST [@akeson2019wide]. These surveys will observe much higher redshifts and larger volumes, improving the precision of tests of gravity.
Even though LSS data are more difficult to interpret than CMB data (non-linearities are relevant on small scales; we observe luminous galaxies made out of baryons, while we compute matter over-densities), on large scales or at higher redshifts we expect that linear perturbation theory or weakly non-linear schemes are sufficient to describe cosmic structure. The fact that LSS is a three dimensional dataset, and that it allows us to measure the cosmic density field, velocity field and the gravitational potential independently, makes it highly complementary to the CMB. For these reasons it is very important that we make best use of the LSS data soon to come.
To extract information from the distribution of galaxies, data have to be compressed using adequate estimators. Past and current surveys have mainly focused on the galaxy two-point correlation function in redshift-space and on its Fourier transform, the power spectrum, see e.g. [@Reid:2015gra; @Satpathy:2016tct; @Alam:2016hwk]. Since redshift-space distortions (RSD) break the isotropy of the correlation function, the optimal way of extracting information is to fit for a monopole, quadrupole and hexadecapole in the two-point function of galaxies (or equivalently in the power spectrum). In the linear regime, these three multipoles contain all the information about density and RSD. These multipoles have been measured successfully in various surveys and have been used to constrain the growth of structure and to determine the baryon acoustic oscillation (BAO) scale, see e.g. [@Aghanim:2018eyx] for an overview of present constraints.
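In the linear regime the angular dependence of the correlation function at fixed separation is a low-order polynomial in $\mu$, and the monopole, quadrupole and hexadecapole are its Legendre projections, $\xi_\ell = \frac{2\ell+1}{2}\int_{-1}^{1}{\rm d}\mu\,\xi(\mu)\,P_\ell(\mu)$. A minimal numerical sketch of this projection (the flat-sky Kaiser factor $(b+f\mu^2)^2$ and the values of $b$ and $f$ are purely illustrative, not survey values):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def multipoles(xi_of_mu, ells=(0, 2, 4), n_mu=64):
    """Legendre projections xi_ell = (2l+1)/2 * Int_{-1}^{1} dmu xi(mu) P_l(mu),
    evaluated with Gauss-Legendre quadrature (exact for polynomial xi(mu))."""
    mu, w = leggauss(n_mu)  # nodes and weights on [-1, 1]
    xi = xi_of_mu(mu)
    return {ell: (2 * ell + 1) / 2 * np.sum(w * xi * Legendre.basis(ell)(mu))
            for ell in ells}

# Flat-sky Kaiser angular dependence at fixed separation (illustrative b, f):
b, f = 1.3, 0.7
m = multipoles(lambda mu: (b + f * mu**2)**2)
# In linear theory: m[0] = b^2 + 2bf/3 + f^2/5, m[2] = 4bf/3 + 4f^2/7,
# m[4] = 8f^2/35, and all higher multipoles vanish.
```

The fact that the three amplitudes depend only on $b$ and $f$ is what makes these multipoles a complete compression of density and RSD in the linear regime.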
However, it is well known that LSS observations are not only affected by RSD (due to the motion of galaxies) [@Kaiser1987], but that they are also affected by gravitational lensing due to foreground structures [@Matsubara:2004fr; @Scranton:2005ci; @Duncan:2013haa]. Gravitational lensing modifies the observed size of the solid angle in which we count how many galaxies we detect, consequently diluting the number of galaxies per unit of solid angle. In addition, it increases the apparent luminosity of galaxies, enhancing the number of galaxies above the magnitude threshold of a given survey. These two effects combine to distort the number counts of galaxies. As a consequence, gravitational lensing contributes to the observed two-point correlation function of galaxies.
In this paper, we investigate the relevance of gravitational lensing for the coming generation of LSS surveys. Various previous analyses have studied the impact of lensing for photometric surveys using the angular power spectrum, $C_\ell(z,z')$ [@Montanari:2015rga; @Raccanelli:2015vla; @Cardona:2016qxn; @DiDio:2016ykq; @Lorenz:2017iez; @Villa_2018]. These studies have shown that neglecting lensing in future photometric surveys like LSST and Euclid will significantly shift the determination of a host of cosmological parameters. In this paper, we focus instead on spectroscopic surveys and we study the impact of lensing on the measurement of the growth rate of structure, using the monopole, quadrupole and hexadecapole of the correlation function. We show that neglecting lensing in the modelling of these multipoles significantly shifts the determination of cosmological parameters for a survey like SKA2 [@Bull:2014rha]. In particular, if we assume $\Lambda$CDM and constrain the standard parameters $(\Omega_{\rm baryon}, \Omega_{\rm cdm}, h, n_s, A_s)$ and the galaxy bias, we find that neglecting lensing generates a shift in the determination of these parameters, ranging from 0.1$\sigma$ to 0.6$\sigma$. More importantly, if we treat the growth rate $f$ and the galaxy bias as free parameters in each redshift bin (as is routinely done in RSD surveys), we find a shift as large as 2.3$\sigma$ for $f$, and 3.1$\sigma$ for the galaxy bias $b(z)$, in the highest redshift bin of SKA2. Since the growth rate is used to test the theory of gravity, such a large shift could lead us to wrongly conclude that gravity is modified. Including lensing in the analysis, we moreover find that our imperfect knowledge of the magnification bias parameter, $s(z)$, degrades the constraints on the bias and the growth rate by up to 25%.
This analysis therefore shows that lensing cannot be neglected in the modelling of multipoles for a survey like SKA2, and that a good knowledge of the magnification bias, $s(z)$, is important.
We also perform a similar analysis for a survey like DESI [@Aghamousa:2016zmz]. In this case we find that neglecting lensing has almost no impact on the determination of cosmological parameters. This is mainly due to the fact that at high redshift, where lensing could be relevant, the dilution of the number of galaxies (due to volume effects) almost exactly cancels the increase in the number of detected galaxies due to the amplification of the luminosity. Since this effect is directly related to the population of galaxies under consideration (more precisely to the slope of the luminosity function [which determines the value of $s(z)$]{}), it would be important to confirm this result using precise specifications for the various galaxy populations in DESI.
In addition to these analyses, we study how gravitational lensing changes the constraining power of spectroscopic and photometric surveys. We first compare the constraints on cosmological parameters, by including or omitting lensing in the modelling. We find that, for spectroscopic surveys, including lensing does not improve the constraints on cosmological parameters, even in the optimistic case where we assume that the amplitude of lensing (and therefore the magnification bias) is perfectly known. For a photometric survey like LSST [@Abell:2009aa], including lensing helps break the degeneracy between the bias and the primordial amplitude of perturbations $A_s$, and consequently improves the constraints on these parameters significantly (by a factor of 3 to 9, depending on the size of the redshift bins), if the magnification bias parameter and the amplitude of the lensing potential are perfectly known. This improvement increases if we decrease the number of redshift bins used in the analysis, because this strongly reduces the RSD contribution, which also helps break the degeneracy between $A_s$ and the bias. On the other hand, if the magnification bias parameter $s(z)$, or the amplitude of the lensing potential $A_\mathrm{L}$, are considered as free parameters, we find only a mild (few %) improvement with respect to the case with no lensing.
Comparing constraints from SKA2 and LSST, including lensing in both cases, we find that, within $\Lambda$CDM, LSST provides better constraints than SKA2 [on $\Omega_{\rm cdm}, h$ and $n_s$.]{} [This is mainly due to the higher number of galaxies in LSST]{}. [On the other hand, SKA2 constrains better]{} $\Omega_{\text{baryon}}$ (which relies on a good resolution of the baryon acoustic oscillations), $A_s$ [and the bias. This is due to the fact that $A_s$ and the bias are degenerate in the density contribution. RSD break this degeneracy in spectroscopic surveys, but only marginally in photometric surveys, where they are subdominant. Lensing helps break this degeneracy in photometric surveys, but to a lesser extent than RSD. Interestingly, we also find that the amplitude of the lensing potential, $A_{\rm L}$, is better constrained in SKA2 than in LSST, even though the signal-to-noise ratio of lensing is significantly larger in LSST. This is related to the fact that, in LSST, $A_{\rm L}$ is degenerate with $A_s$ and the bias, whereas in SKA2 this degeneracy is broken by RSD. Spectroscopic and photometric surveys are therefore highly complementary.]{} In general, the main advantage of spectroscopic surveys over photometric surveys is their ability to measure the growth of structure[, which is directly proportional to the RSD signal,]{} in a model-independent way, which is not at all straightforward with photometric surveys using the angular power spectrum with relatively poor redshift resolution.
Finally, we propose a combined analysis using the multipoles of the correlation function in each redshift bin and the angular power spectrum between different bins, including lensing in both cases. We perform this analysis first for SKA2 alone, and then combining the correlation function from SKA2 with the angular power spectrum from LSST. Such an analysis provides an efficient way of combining RSD constraints (from the two-point correlation functions) with [the information coming from the cross-correlations between different bins.]{} Using only the specifications of the SKA2 survey, we find that including cross-correlations between different bins does not improve the constraints on the standard $\Lambda$CDM parameters. This indicates that RSD already have enough constraining power, and that adding lensing therefore does not bring new information. Note however that this result is specific to $\Lambda$CDM and will probably change in modified gravity models, where the relations between the gravitational potentials and the density and velocity are modified.
Combining SKA2 and LSST, we find on the other hand that the cross-correlation between bins improves parameter constraints [by 10-20%]{}. The size of this improvement depends weakly on the assumptions made about the magnification bias parameters $[s_\mathrm{LSST}(z), s_\mathrm{SKA2}(z)]$. [This is due to the fact that the improvement in $\Lambda$CDM parameters mainly comes from the density cross-correlations, which are still relevant in neighboring bins. Lensing is not very important for the five $\Lambda$CDM parameters, but becomes essential once we aim at determining the growth factor in each redshift bin.]{}
Throughout this paper we make the following assumptions. Firstly, we use linear perturbation theory, which we assume to be valid above separations of $30\,$Mpc/$h$ at $z=0$. We therefore limit our analysis to scales $d>d_{\rm NL}(z)$, with $d_{\rm NL}(0)=30\,{\rm Mpc}/h$. Adding non-linear effects may change the specific form of the correlation function and of the lensing contribution. However, we do not expect non-linearities to [significantly change the main results of this paper, i.e. to drastically reduce or increase the shift induced by lensing on the analysis.]{}
Secondly, we use a Fisher matrix analysis, and our estimates therefore always assume Gaussian statistics. We know that this typically underestimates both the error bars and the shift coming from neglecting, e.g., the lensing term [@Cardona:2016qxn]. Furthermore, when the shift becomes larger than $\sim 1\sigma$, the Fisher analysis is no longer reliable and an MCMC analysis should be performed instead to determine the value of the shift precisely. However, the fact that Fisher analyses find a large shift indisputably means that lensing cannot be neglected in a survey like SKA2.
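Shifts of this kind are usually estimated with the standard linearized Fisher formula: if a contribution $\Delta\mu_{\rm sys}$ (here lensing) is present in the data but absent from the model, the best fit moves by $\Delta\theta = F^{-1}B$, with $B_i = (\partial\mu/\partial\theta_i)^{\rm T} C^{-1}\Delta\mu_{\rm sys}$. A toy sketch of this generic construction (the linear model, covariance and "systematic" below are placeholders, not the survey ingredients used in this paper):

```python
import numpy as np

def fisher_shift(derivs, cov, delta_sys):
    """Linearized best-fit shift when `delta_sys` is in the data but not the model:
    F_ij = d_i^T C^-1 d_j,  B_i = d_i^T C^-1 delta_sys,  shift = F^-1 B.
    derivs: (n_params, n_data) derivatives of the model w.r.t. the parameters."""
    cinv = np.linalg.inv(cov)
    F = derivs @ cinv @ derivs.T
    B = derivs @ cinv @ delta_sys
    return np.linalg.solve(F, B), F

# Toy model mu = a*x + c, with an omitted quadratic "systematic" 0.1*x^2:
x = np.linspace(0.0, 1.0, 50)
derivs = np.vstack([x, np.ones_like(x)])     # d mu / d(a, c)
cov = 0.01 * np.eye(x.size)                  # diagonal data covariance
shift, F = fisher_shift(derivs, cov, 0.1 * x**2)
sigma = np.sqrt(np.diag(np.linalg.inv(F)))   # forecast 1-sigma errors
# the "number of sigmas" of the shift is then shift / sigma
```

For a diagonal covariance this reduces to the ordinary least-squares projection of the systematic onto the model, which is why the estimate is only trustworthy while the shift stays within the linear (roughly sub-$\sigma$) regime.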
Finally, we also neglect large scale relativistic effects in the number counts of galaxies, which have been derived in [@2009PhRvD..80h3514Y; @Yoo:2010ni; @Bonvin:2011bg; @Challinor:2011bk]. It has been shown in several papers that, while the large scale relativistic effects are very nearly degenerate with the effects of primordial non-Gaussianity in the real-space power spectrum [@Bruni:2011ta; @Camera:2014bwa; @Alonso:2015uua; @Baker:2015bva], they are rather difficult to detect in galaxy catalogs and usually require more sophisticated statistical methods, like the use of different populations of galaxies [@DiDio:2013sea; @Bonvin:2013ogt; @Bonvin:2014owa]. We have checked that large scale relativistic effects do not influence the results reported in this work in a noticeable way.
The remainder of this paper is organized as follows. In the next section, we briefly review the expression for the number counts of galaxies. In Section \[s:cor-fctn\] we focus on the multipoles of the correlation function for spectroscopic surveys like DESI and SKA2. In Section \[s:Cls\] we study the angular power spectrum, for a photometric survey like LSST. In Section \[s:main\_result\] we combine the correlation function and the angular power spectrum[, and we conclude in Section \[s:con\]. A theoretical modelling of the magnification bias parameter $s(z)$, which is essential to assess the impact of lensing, is presented in the appendices, where we also provide more detail about the specifications of the surveys, and on the photometric analysis.]{}
[**Notation:**]{} We set the speed of light $c=1$. We work in a flat Friedmann Universe with scalar perturbations in longitudinal gauge, such that the metric is given by $${\rm d}s^2 = a^2(t)\left[-(1+2\Psi)\,{\rm d}t^2 + (1-2\Phi)\,\delta_{ij}\,{\rm d}x^i\,{\rm d}x^j\right]\,.$$ The functions $\Phi$ and $\Psi$ are the Bardeen potentials. The variable $t$ denotes conformal time and ${\mathcal H}=\dot a/a= aH$ is the conformal Hubble parameter, while $H$ is the physical Hubble parameter. The derivative with respect to $t$ is denoted by an overdot.
The galaxy number counts {#s:numbercounts}
========================
When we count galaxies, we observe them in a given direction and at a given redshift. The expression from linear perturbation theory for the over-density of galaxies at redshift $z$ and in direction ${{\bf n}}$ is given by [@2009PhRvD..80h3514Y; @Bonvin:2011bg; @Challinor:2011bk] \[e:DezNF\] $$\begin{aligned}
\Delta(z, {{\bf n}})&=&b\,\delta+\frac{1}{\mathcal H}\,\partial_r^2V +(5s-2)\int_0^{r}{\rm d}r'\, \frac{r-r'}{2\,r\,r'}\,\Delta_\Omega(\Phi+\Psi)\nonumber\\
&&-\left(1-5s-\frac{\dot{\mathcal H}}{{\mathcal H}^2}+ \frac{2-5s}{r\,{\mathcal H}} +f_{\rm evo} \right)\partial_rV-\frac{1}{\mathcal H}\,\partial_r\Psi+\frac{1}{\mathcal H}\,\partial_r\Phi\nonumber\\
&&+\frac{2-5s}{r}\int_0^{r}{\rm d}r'\,(\Phi+\Psi)+(f_{\rm evo}-3)\,{\mathcal H}\,V+\Psi+(5s-2)\,\Phi\nonumber\\
&&+\frac{1}{\mathcal H}\,\dot\Phi+\left(\frac{\dot{\mathcal H}}{{\mathcal H}^2}+\frac{2-5s}{r\,{\mathcal H}}+5s -f_{\rm evo} \right)\left(\Psi+\int_0^{r}{\rm d}r'\,(\dot\Phi+\dot\Psi)\right)\,,\end{aligned}$$ where $r=r(z)$ is the comoving distance to redshift $z$. The functions $b(z)$, $s(z)$ and $f_{\rm evo}(z)$ are the galaxy bias, the magnification bias and the evolution bias respectively. They depend on the specifications of the catalog (which types of galaxies have been included) and on the instrument (the flux limit of the instrument in each frequency band). In Appendix \[ap:surveyspec\] we specify these functions for the surveys studied in this work.
The first term in Eq. \[e:DezNF\], $\delta$, is the matter density fluctuation in comoving gauge: on small scales it reduces to the Newtonian density contrast. The second term in the first line encodes the effect of RSD. The velocity is given by $v_i=-{\partial}_iV$, where $V$ is the peculiar velocity potential in longitudinal gauge. These first two terms are the ones that are currently used in the modelling of the two-point correlation function. The last term in the first line is the lensing contribution, which we are investigating in this paper. It contains the angular Laplacian, $\Delta_\Omega$, transverse to the direction of observation ${{\bf n}}$. The last three lines are the so-called large-scale relativistic effects. The second line is suppressed by one power ${\mathcal H}/k$ with respect to the first line, and the last two lines are suppressed by $({\mathcal H}/k)^2$ with respect to the first line. More details on these terms are given in [@DiDio:2013sea; @DiDio:2013bqa; @Bonvin:2014owa]. On scales which are much smaller than the horizon scale, $k\gg {\mathcal H}$, only the first line contributes significantly and therefore we include only these terms in the present analysis. We have checked that including the other, large-scale terms does not alter our results.
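On scales well inside the horizon, the relativistic terms are suppressed by powers of ${\mathcal H}/k$; a quick numerical check of the size of these factors (a rough estimate, with order-unity coefficients ignored and illustrative wavenumbers):

```python
# Conformal Hubble rate today in units of h/Mpc: H0 = 100 h km/s/Mpc divided
# by c = 299792.458 km/s. (Today a = 1, so the conformal and physical rates
# coincide.)
H0 = 100.0 / 299792.458   # ~ 3.3e-4 h/Mpc

for k in (0.01, 0.1):     # wavenumbers in h/Mpc
    r = H0 / k
    print(f"k = {k} h/Mpc: H/k ~ {r:.1e}, (H/k)^2 ~ {r**2:.1e}")
```

Even at $k=0.01\,h/$Mpc the linear suppression is a few percent and the quadratic one is of order $10^{-3}$, which is why density, RSD and lensing dominate the observable number counts on the scales analysed here.
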
The correlation function – spectroscopic surveys {#s:cor-fctn}
=================================================
[In spectroscopic surveys, the standard estimators used to extract information from maps of galaxies are the multipoles of the correlation function.]{} The first three multipoles encode all the information about density and RSD in the linear regime, and they allow us to measure the growth rate of structure and the galaxy bias in a model-independent way (up to an arbitrary distance $r(z)$ which has to be fixed in each redshift bin and which in a given model is the comoving distance out to redshift $z$).
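In practice, the linear multipole amplitudes can be inverted for the growth rate and the bias. A minimal sketch of this inversion, assuming the flat-sky Kaiser amplitudes of linear theory (the numbers are an illustrative round trip, not survey measurements; in a real analysis the quantities measured are effectively $f\sigma_8$ and $b\sigma_8$):

```python
import numpy as np

def growth_and_bias(A2, A4):
    """Invert the linear Kaiser quadrupole and hexadecapole amplitudes,
        A2 = 4 b f / 3 + 4 f^2 / 7,   A4 = 8 f^2 / 35,
    for the growth rate f and the galaxy bias b."""
    f = np.sqrt(35.0 * A4 / 8.0)
    b = 3.0 * (A2 - 4.0 * f**2 / 7.0) / (4.0 * f)
    return f, b

# Round trip with illustrative values f = 0.7, b = 1.3:
f0, b0 = 0.7, 1.3
A2 = 4 * b0 * f0 / 3 + 4 * f0**2 / 7
A4 = 8 * f0**2 / 35
f, b = growth_and_bias(A2, A4)
# the monopole A0 = b^2 + 2bf/3 + f^2/5 is then a redundant consistency check
```

The hexadecapole fixes $f$ on its own, and the quadrupole then fixes $b$, which is what makes the measurement model-independent up to the assumed distance-redshift relation mentioned above.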
[The multipoles of the power spectrum are also routinely used. However, the power spectrum suffers from two limitations, that make it ill-adapted to future spectroscopic surveys. First, the power spectrum is constructed using the flat-sky approximation, which breaks down at large scales. Alternative power spectrum estimators have been constructed]{} to include wide-angle effects, but they are not straightforward since they require varying the line of sight for each pixel [@Yamamoto:2005dz]. Second, the lensing contribution in $\Delta$ cannot be consistently accounted for in the power spectrum, since to calculate the power spectrum one needs to know the Fourier transform of the galaxy number counts, $\Delta$, over 3-dimensional hypersurfaces, whereas lensing can only be computed along our past light-cone. In contrast, the correlation function can consistently account for wide-angle effects and for gravitational lensing. A general expression for the correlation function without assuming the flat-sky approximation has been derived in [@Szalay:1997cc; @Szapudi:2004gh; @Papai:2008bd; @Campagne:2017wec] for density and RSD, and extended to lensing and large scale relativistic effects in [@Tansella:2018sld; @Bertacca:2012tp; @Raccanelli:2013gja]. The correlation function is therefore the relevant observable to use in future spectroscopic surveys that go to high redshift (where lensing is important) and cover large parts of the sky (where the flat-sky approximation is not valid).
In Section \[s:Cls\], we will discuss the use of the angular power spectrum, $C_\ell$, to extract information from galaxy surveys. The angular power spectrum has the advantage over the correlation function that it does not require a fiducial cosmology to translate angles and redshifts into distances. However, this problem can be circumvented in the correlation function by including rescaling parameters, that account for a difference between the fiducial cosmology and the true cosmology.
The angular power spectrum is not ideal for spectroscopic surveys, since it requires too many redshift bins in order to profit optimally from the redshift resolution. Indeed, in the $C_\ell$'s, the sensitivity to RSD is related to the size of the redshift bins, whereas in the correlation function it is related to the size of the pixels, which are typically $\sim 2-8\,$Mpc/$h$. Splitting the data into many redshift bins very significantly enhances the shot noise per bin when one uses the $C_\ell$'s. Furthermore, the covariance matrix for the full set of angular power spectra then becomes challenging to compute and to invert. Finally, in the $C_\ell$'s, density and RSD are completely mixed up and distributed over the whole range of multipoles, whereas in the correlation function these terms can be separated by measuring the monopole, quadrupole and hexadecapole. As a consequence, the correlation function is mainly used for spectroscopic surveys, and the angular power spectrum for photometric surveys, where RSD are subdominant and the number of bins is small.
We now study how the lensing contribution affects the monopole, quadrupole and hexadecapole of the correlation function. For this, we use the code COFFE [@Tansella:2018sld], which computes the multipoles of the correlation function and their covariance matrices, and which is publicly available at <https://github.com/JCGoran/coffe>. We consider two spectroscopic surveys, one with DESI-like specifications, and a more futuristic one with SKA2-like specifications. The galaxy bias, magnification bias and the redshift distribution for these surveys are given in Appendix \[ap:surveyspec\]. As explained there, for DESI we use a weighted mean of the three different types of galaxies used in this survey to determine $b(z)$ and $s(z)$ in each redshift bin.
In order to mimic a real survey, we apply a window function to our galaxy density field, in the form of a spherical top-hat filter in real space: $$\delta(R, {\bf x}) = \int {\rm d}^3x'\, W(R;\, {\bf x} - {\bf x}')\, \delta({\bf x}')\,,$$ with a pixel size $R \equiv L_p = 5$ Mpc/$h$. As our analysis is based on linear perturbation theory, in order to curtail the effects of non-linearities we implement a *non-linear cutoff scale*, $d_\mathrm{NL}(z)$, below which we assume we cannot trust our results, and we do not consider the correlation function on scales smaller than $d_\mathrm{NL}(z)$ in our Fisher matrix analysis. We parametrize the comoving non-linearity scale as follows: \[e:rNL\] d\_(z) = , where for the present cutoff scale we assume the value $d_\mathrm{NL}(z = 0) = 30$ Mpc/$h$. [^1]
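In Fourier space, the real-space convolution above multiplies each mode by the standard window of a normalized spherical top-hat of radius $R$, $W(x) = 3(\sin x - x\cos x)/x^3$ with $x = kR$. A small sketch of this damping (the wavenumber grid is illustrative):

```python
import numpy as np

def tophat_window(k, R):
    """Fourier transform of a normalized spherical top-hat of radius R:
    W(x) = 3 (sin x - x cos x) / x^3 with x = k R; the small-x branch uses
    the expansion 1 - x^2/10 to avoid numerical cancellation."""
    x = np.asarray(k, dtype=float) * R
    with np.errstate(invalid="ignore", divide="ignore"):
        w = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
    return np.where(x < 1e-4, 1.0 - x**2 / 10.0, w)

# Smoothing with pixel size L_p = 5 Mpc/h damps the power spectrum by W(kR)^2:
k = np.logspace(-3, 1, 200)        # h/Mpc
damping = tophat_window(k, 5.0)**2
```

Modes with $k \gtrsim 1/R$ are strongly suppressed, while the large scales used in the analysis are essentially untouched.
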
$\Omega_\mathrm{baryon}$ $\Omega_\mathrm{cdm}$ $h$ $n_s$ $\ln(10^{10}A_s)$
-------------------------- ----------------------- -------- -------- -------------------
0.04841 0.25793 0.6781 0.9677 3.062
: The cosmological parameters used for the Fisher matrix analysis as well as their fiducial values. \[table:planck\_params\]
We use the best fit values from Planck [@Aghanim:2018eyx] for our fiducial values of the parameters, see Table \[table:planck\_params\]. The galaxy bias for the different surveys has been modelled as described in Appendix \[ap:surveyspec\]. To account for our limited knowledge of the bias, we multiply this redshift dependent bias by a parameter $b_0$, with fiducial value $b_0=1$, that we also vary in our Fisher forecasts. Finally, when estimating the impact of large scale relativistic effects, for the evolution bias $f_\mathrm{evo}$ we take a fiducial value $f_\mathrm{evo}=0$, but we have seen that varying $f_\mathrm{evo}$ in the interval $f_\mathrm{evo}\in [-5,5]$ does not affect the results.
Signal-to-noise ratio
---------------------
As a first estimate of the relevance of lensing, as well as of relativistic and wide-angle effects, for future spectroscopic surveys, we compute the signal-to-noise ratio (SNR) of these contributions, for the monopole $(\ell=0)$, quadrupole $(\ell=2)$ and hexadecapole $(\ell=4)$ of the correlation function. The SNR in a redshift bin with mean $\bar z$ for the contribution $X=\{{\rm lens, rel}\}$ is given by $$\left({\rm SNR}^X\right)^2=\sum_{ij\,\ell m}\xi^X_\ell(d_i, \bar z)\,{\rm cov}\big[\xi^{\rm std}_\ell, \xi^{\rm std}_m\big]^{-1}(d_i,d_j,\bar z)\,\xi^X_m(d_j, \bar z)\,,$$ where the sum runs over the multipoles $\ell, m=0,2,4$ and over separations $d_i, d_j$ between $d_{\rm NL}$ and $d_{\rm max}$, where $d_{\rm max}$ is the largest separation available inside the redshift bin, ranging from 185 Mpc/$h$ [at $\bar z=0.2$]{} to 440 Mpc/$h$ [at $\bar z=1.85$]{}. Lensing does generate higher multipoles in the correlation function [@Tansella:2018sld]; however, since those are not usually measured in spectroscopic surveys, we do not include them in our calculation of the SNR.
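The SNR defined above is a standard Gaussian quadratic form, ${\rm SNR} = \sqrt{s^{\rm T} C^{-1} s}$. A minimal numerical sketch (the per-bin signal vectors and covariances below are placeholders; bins assumed independent combine in quadrature into a cumulative SNR):

```python
import numpy as np

def snr(signal, cov):
    """Gaussian signal-to-noise of a vector of data points:
    SNR = sqrt( s^T cov^-1 s )."""
    return float(np.sqrt(signal @ np.linalg.solve(cov, signal)))

# Placeholder per-bin signal vectors and covariances; independent redshift
# bins add in quadrature into a cumulative SNR_tot:
s1, c1 = np.array([1.0, 0.5]), np.eye(2)
s2, c2 = np.array([0.3]), np.eye(1)
snr_tot = np.sqrt(snr(s1, c1)**2 + snr(s2, c2)**2)
```

Using `np.linalg.solve` instead of explicitly inverting the covariance is the numerically safer choice when the covariance matrix is large or poorly conditioned.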
The lensing correlation function is defined as $$\xi^{\rm lens}(d,\bar z) \equiv \left\langle \Delta_{\rm std}(z,{{\bf n}})\,\Delta_{\rm lens}(z',{{\bf n}}')\right\rangle + \left\langle \Delta_{\rm lens}(z,{{\bf n}})\,\Delta_{\rm std}(z',{{\bf n}}')\right\rangle + \left\langle \Delta_{\rm lens}(z,{{\bf n}})\,\Delta_{\rm lens}(z',{{\bf n}}')\right\rangle\,,$$ \[eq:lensing\_contrib\] where ${{\Delta_\mathrm{std}}}= \Delta_\mathrm{den} + \Delta_\mathrm{rsd}$ and $\Delta_{\rm lens}$ is given by the last term in the first line of Eq. \[e:DezNF\]. The redshifts $z$ and $z'$ inside the given redshift bin around $\bar z$ and $\cos\theta={{\bf n}}\cdot{{\bf n}}'$ are such that $d=|r(z){{\bf n}}-r(z'){{\bf n}}'| = \sqrt{r(z)^2 + r(z')^2-2r(z)r(z')\cos\theta}$.
The large scale relativistic and wide-angle correlation function is defined as $$\begin{aligned}
\xi^{\rm rel}(d,\bar z) &\equiv& \left\langle \Delta_{\rm std}\,\Delta_{\rm rel}\right\rangle + \left\langle \Delta_{\rm rel}\,\Delta_{\rm std}\right\rangle + \left\langle \Delta_{\rm rel}\,\Delta_{\rm rel}\right\rangle\nonumber\\
&&+\, \xi_{\rm std}^{\rm full-sky}- \xi_{\rm std}^{\rm flat-sky}\,,\end{aligned}$$ where the first line contains the large scale relativistic effects and $\Delta_{\rm rel}$ is given by the last three lines of Eq. \[e:DezNF\], while the second line contains the wide-angle effects, i.e. the difference between the standard terms calculated in the full-sky and in the flat-sky.
\
![ Signal-to-noise ratio (SNR) in each redshift bin for SKA2, using 8 redshift bins (blue) and 11 redshift bins (orange). The left panel shows the SNR for the lensing contribution and the right panel the SNR for the large scale relativistic effects, [including wide-angle effects]{}. The horizontal lines depict the width of the redshift bins. We also indicate the cumulative SNR over all redshift bins, SNR$_{\rm tot}$. []{data-label="fig:ska2_snr"}](data/figures/ska2_snr_lensing_new.pdf "fig:"){width="\linewidth"}
\
![ Signal-to-noise ratio (SNR) in each redshift bin for SKA2, using 8 redshift bins (blue) and 11 redshift bins (orange). The left panel shows the SNR for the lensing contribution and the right panel the SNR for the large scale relativistic effects, [including wide-angle effects]{}. The horizontal lines depict the width of the redshift bins. We also indicate the cumulative SNR over all redshift bins, SNR$_{\rm tot}$. []{data-label="fig:ska2_snr"}](data/figures/ska2_snr_relativistic_new.pdf "fig:"){width="\linewidth"}
\
![ Signal-to-noise ratio (SNR) in each redshift bin for DESI, using 5 redshift bins (blue) and 8 redshift bins (orange). The left panel shows the SNR for the lensing contribution and the right panel the SNR for the large scale relativistic effects, [including wide-angle effects]{}. The horizontal lines depict the width of the redshift bins. We also indicate the cumulative SNR over all redshift bins, SNR$_{\rm tot}$. []{data-label="fig:desi_snr"}](data/figures/desi_snr_lensing_new.pdf "fig:"){width="\linewidth"}
\
![ Signal-to-noise ratio (SNR) in each redshift bin for DESI, using 5 redshift bins (blue) and 8 redshift bins (orange). The left panel shows the SNR for the lensing contribution and the right panel the SNR for the large scale relativistic effects, [including wide-angle effects]{}. The horizontal lines depict the width of the redshift bins. We also indicate the cumulative SNR over all redshift bins, SNR$_{\rm tot}$. []{data-label="fig:desi_snr"}](data/figures/desi_snr_relativistic_new.pdf "fig:"){width="\linewidth"}
The covariance matrix contains both shot noise and cosmic variance. In the calculation of the cosmic variance we include only the standard terms, since those are the dominant ones. Moreover, we use the flat-sky approximation to calculate the covariance matrix, since even for large separations most of the covariance comes from pixels that are close to each other. The covariance matrix accounts both for correlations between different separations, $d_i\neq d_j$, and for correlations between different multipoles, $\ell\neq m$. We consider different redshift bin configurations in order to test the sensitivity of our analyses to the binning. For SKA2, we use an 8 bin and an 11 bin configuration, and for DESI a 5 bin and an 8 bin configuration (see Appendices \[a:SKAII\] and \[a:DESI\] for more detail on these configurations). In the calculation of the cumulative SNR over all redshift bins, we do not include the correlations between different bins, since they are very small due to the relatively large size of the bins ($\Delta z \geq 0.08$).
The results of the SNR analysis for SKA2 are shown in Fig. \[fig:ska2\_snr\]. The cumulative SNR for lensing (left panel) is larger than 10, so that lensing is clearly detectable in SKA2. Note that around $z=0.4$ the magnification bias parameter is $s(z=0.4)\simeq 0.4$, so that the prefactor $5s(z)-2$, and with it the lensing contribution, almost exactly vanishes. At higher redshifts, $s$ becomes much larger (see Fig. \[fig:ska2\_biases\] in Appendix \[a:SKAII\]), and at the same time the integral along the photons’ trajectory grows significantly, such that the lensing contribution becomes more and more important. At $z=1$ the SNR becomes larger than one; however, the bulk of the contribution to the SNR comes from higher redshifts.
The SNR of the large scale relativistic effects (right panel of Fig. \[fig:ska2\_snr\]) always remains significantly below one. This indicates that these terms do not affect parameter estimation in any appreciable way and can be safely neglected in the data analysis. At low redshifts, the impact of large scale relativistic effects is larger due to one of the Doppler terms, which is enhanced by a factor $1/(r(z){\mathcal H}(z))$ and whose contribution to the correlation function therefore scales as $$\big\langle \Delta^{\rm Dopp}\Delta^{\rm Dopp} \big\rangle \sim \frac{1}{(r{\mathcal H})^2}\left(\frac{{\mathcal H}}{k}\right)^2\big\langle \Delta^{\rm dens}\Delta^{\rm dens} \big\rangle \sim \left(\frac{d}{r}\right)^2\big\langle \Delta^{\rm dens}\Delta^{\rm dens} \big\rangle\,,$$ where we have used that $k\sim 1/d$. Therefore, at very low redshifts and large separations this term cannot be neglected, as has already been discussed in [@Papai:2008bd; @Raccanelli_2010; @Samushia_2012; @Tansella:2017rpi].
The same analysis using DESI specifications is shown in Fig. \[fig:desi\_snr\]. In the left panel, we see that the SNR for the lensing term always remains well below one, and that even the cumulative SNR over all redshift bins does not exceed one. This means that lensing will not be detectable with a survey like DESI. This is due to the fact that at high redshift ($z\geq 1$), when the integral along the photons’ trajectory becomes important, the magnification bias parameter becomes close to 0.4, such that $2-5s(z)$ is very small, see Fig. \[fig:sz-galaxysamples\] in Appendix \[a:DESI\]. Consequently, the lensing contribution to the number counts is strongly suppressed at high redshifts. This result is very sensitive to the value of the magnification bias parameter $s(z)$, which we have computed in Appendix \[a:DESI\], using a Schechter function to model the luminosity function, with parameters fitted to similar galaxy samples. This gives us a crude approximation of $s(z)$ for the different galaxy populations that will be detected by DESI. A more precise determination of $s(z)$ would be needed before one can definitively conclude that lensing is irrelevant for DESI.
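The dependence of the magnification bias on the survey depth can be sketched numerically for a Schechter luminosity function. In the toy example below, the faint-end slope and the upper integration cutoff are illustrative assumptions, not the survey-specific fits of Appendix \[a:DESI\]:

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule, independent of numpy version details
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def schechter_counts(x_lim, alpha):
    """Unnormalized cumulative counts N(>L_lim) for a Schechter luminosity
    function, with x = L/L_* and faint-end slope alpha."""
    x = np.geomspace(x_lim, 50.0, 4000)  # upper cutoff is illustrative
    return _trapz(x**alpha * np.exp(-x), x)

def magnification_bias(x_lim, alpha, eps=1e-3):
    """s = d log10 N(<m_lim) / d m_lim by finite differences, using
    m_lim = const - 2.5 log10(L_lim), i.e. dm = -2.5 dlog10(x_lim)."""
    n_hi = schechter_counts(x_lim * (1 + eps), alpha)  # brighter limit
    n_lo = schechter_counts(x_lim * (1 - eps), alpha)  # fainter limit
    dlog10_n = np.log10(n_hi) - np.log10(n_lo)
    dm = -2.5 * (np.log10(1 + eps) - np.log10(1 - eps))
    return dlog10_n / dm
```

A deep cut (small $x_{\rm lim}$) yields a small $s$, while a shallow cut pushes $s$ upwards, toward and past the value $s=0.4$ at which the lensing prefactor $5s-2$ changes sign.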
On the right panel of Fig. \[fig:desi\_snr\], we show the SNR for large scale relativistic effects in DESI. Like for SKA2, the SNR remains well below one at all redshifts. Note however that since DESI starts observing at lower redshift than SKA2, the SNR in the lowest bin is larger.
For both surveys, large scale relativistic effects cannot be detected. These results confirm that in order to detect relativistic effects, we need [alternative estimators constructed from different populations of galaxies.]{} With multiple populations, relativistic effects indeed generate odd multipoles in the correlation function, which have an SNR of the order of 7 for DESI [@Bonvin:2015kuc] and 46 for SKA2 [@Bonvin:2018ckp], making them clearly detectable with these surveys.
Shift of $\Lambda$CDM parameters {#s:shiftLCDM}
--------------------------------
![ Contour plot for $\Lambda$CDM parameters from the SKA2 Fisher analysis, using 11 redshift bins. Blue contours show the constraints on parameters when we consistently include lensing in our theoretical model. Red contours show the constraints on the parameters when we neglect lensing magnification. In the latter case, the best-fit parameters are shifted with respect to the fiducial values. In both cases we indicate 1$\sigma$ and $2\sigma$ contours. \[fig:ska2\_param\_errors\_std\_bins11\]](data/figures/ska2_bins11_bias.pdf){width="0.9\linewidth"}
Since lensing is detectable by a survey like SKA2, we now study its impact on parameter estimation. We use a Fisher matrix analysis to determine the error bars on each of the parameters $\theta_a\in (\Omega_{\rm baryon}, \Omega_{\rm cdm}, h, n_s, A_s, b_0)$. The Fisher matrix element for the parameters $\theta_a, \theta_b$ is given by $$F_{ab}=\sum_{\bar{z}}\sum_{\ell m}\sum_{ij}\frac{\partial \xi_\ell(d_i,\bar z)}{\partial\theta_a}\,{\rm cov}\big[\xi^{\rm std}_\ell, \xi^{\rm std}_m\big]^{-1}(d_i,d_j,\bar z)\,\frac{\partial \xi_m(d_j,\bar z)}{\partial\theta_b}\,.$$ We perform two analyses: one where the multipoles of the correlation function $\xi_\ell$ contain only the standard density and RSD contributions, and one where they also contain the lensing contribution. Comparing the error bars in these two cases allows us to understand if lensing brings additional constraining power. We also compute the shift of the central value of the parameters due to the fact that lensing is neglected in the modelling, whereas it is present in the signal. This is given by (see e.g. [@Cardona:2016qxn], appendix B): $$\Delta(\theta_a) = \sum_{b} \big(\tilde F^{-1}\big)_{ab}\, B_b\,.$$ Here we define $$B_b \equiv \sum_{\bar z}\sum_{\ell m}\sum_{i j} \Delta \xi_\ell(d_i, \bar z)\, {\rm cov}\big[\xi^{\rm std}_\ell, \xi^{\rm std}_m\big]^{-1}(d_i,d_j,\bar z)\, \frac{\partial \tilde\xi_m(d_j,\bar z)}{\partial\theta_b}\,,$$ where $\Delta \xi_\ell$ denotes the difference between the model and the signal (i.e. in our case, the lensing term), and the quantities with a tilde ($\sim$) are computed without lensing, i.e. according to the wrong model.
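Both the Fisher matrix and the shift of the best fit are plain matrix algebra once the derivatives of the multipoles are tabulated. A minimal sketch (the array shapes and names are ours):

```python
import numpy as np

def fisher_and_shift(dxi_dtheta, cov, delta_xi):
    """Fisher matrix and best-fit shift induced by a neglected contribution.

    dxi_dtheta : (n_params, n_data) derivatives of the multipoles with
                 respect to the parameters, flattened over multipoles,
                 separations and redshift bins.
    cov        : (n_data, n_data) covariance of the standard multipoles.
    delta_xi   : (n_data,) difference between signal and model
                 (here: the lensing contribution).
    """
    icov = np.linalg.inv(cov)
    fisher = dxi_dtheta @ icov @ dxi_dtheta.T        # F_ab
    b_vec = dxi_dtheta @ icov @ delta_xi             # B_b
    shift = np.linalg.solve(fisher, b_vec)           # Delta(theta_a)
    sigma = np.sqrt(np.diag(np.linalg.inv(fisher)))  # 1-sigma marginal errors
    return fisher, shift, sigma
```

In this sketch the derivatives are taken in the model without lensing, matching the tilded quantities in the shift formula.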
parameter $b_0$ $\Omega_\mathrm{baryon}$ $\Omega_\mathrm{cdm}$ $h$ $n_s$ $\ln 10^{10} A_s$
----------------- --------------------------------------- ------- -------------------------- ----------------------- ------ ------- -------------------
without lensing $\sigma(\theta_i) / \theta_i$ ($\%$) 0.37 1.32 0.37 1.26 1.05 0.70
with lensing $\sigma(\theta_i) / \theta_i$ ($\%$) 0.36 1.31 0.37 1.25 1.04 0.70
shift $\Delta(\theta_i) / \sigma(\theta_i)$ -0.05 0.29 -0.21 0.17 -0.57 -0.12
: The constraints and the shifts on $\Lambda$CDM parameters for SKA2, with 11 redshift bins
\[table:ska2\_fisher\_11bins\]
The results for SKA2 are shown in Table \[table:ska2\_fisher\_11bins\] and in Fig. \[fig:ska2\_param\_errors\_std\_bins11\]. Comparing the error bars with and without lensing, we see that the improvement brought by lensing is extremely small. This is due to the fact that density and RSD are significantly stronger than the lensing contribution and that they are very efficient at constraining the standard $\Lambda$CDM parameters. This is very specific to $\Lambda$CDM, in which the degeneracy between the bias $b_0$ and the primordial amplitude $A_s$ is broken by RSD, as can be seen in Fig. \[fig:ska2\_param\_errors\_std\_bins11\]. This result would change in a modified gravity model, where the relation between density, velocity and gravitational potentials is altered and for which lensing brings complementary information. We have also tested that our limited knowledge of the magnification bias parameter, $s(z)$, does not degrade the constraints on $\Lambda$CDM parameters. For this we parametrize $s(z)$ with four parameters (see Eq. ) that we include in the Fisher forecasts. We find that the constraints on $\Lambda$CDM parameters agree to within 1% with those obtained when $s(z)$ is fixed.
From the third line of Table \[table:ska2\_fisher\_11bins\], we see that the parameter which is most significantly shifted when lensing is neglected in SKA2 is $n_s$, which experiences a shift of $-0.57\,\sigma(n_s)$. For $\Omega_\mathrm{baryon}$ and $\Omega_\mathrm{cdm}$ the shifts are smaller but also not negligible. A shift of less than one $\sigma$ is not catastrophic, but it still indicates that the analysis can be significantly improved by including lensing in the modelling. For example, such a shift could hide deviations from General Relativity if those deviations are in the opposite direction to the shift. Note also that the Fisher analysis that we have performed is not precise for a shift which approaches $1\sigma$. A more robust analysis using MCMC may give an even larger shift. Finally, we have checked that the results are very similar if we reduce the number of redshift bins from 11 to 8.
parameter $b_0$ $\Omega_\mathrm{baryon}$ $\Omega_\mathrm{cdm}$ $h$ $n_s$ $\ln 10^{10} A_s$
----------------- --------------------------------------- ------- -------------------------- ----------------------- -------- --------- -------------------
without lensing $\sigma(\theta_i) / \theta_i$ ($\%$) 0.75 3.50 0.85 3.30 2.65 1.75
with lensing $\sigma(\theta_i) / \theta_i$ ($\%$) 0.75 3.50 0.85 3.30 2.65 1.75
shift $\Delta(\theta_i) / \sigma(\theta_i)$ 0.014 -0.005 -0.017 -0.008 -0.0004 0.005
: The constraints and the shifts on $\Lambda$CDM parameters for DESI, with 8 redshift bins.
\[table:desi\_fisher\_8bins\]
A corresponding analysis for the DESI spectroscopic survey confirms the results already indicated by the small SNR of the lensing term: neglecting lensing shifts all cosmological parameters by less than 0.02$\sigma$, meaning that lensing can be neglected in the analysis. The results are summarized in Table \[table:desi\_fisher\_8bins\].
Shift of the growth rate {#s:growth}
------------------------
One of the main motivations to measure the multipoles of the correlation function is that they provide a model-independent way of measuring the growth of structure $f$, given by $$f = \frac{{\rm d}\ln D_1}{{\rm d}\ln a} = -\frac{{\rm d}\ln D_1}{{\rm d}\ln (1+z)}\,,$$ where $D_1$ is the linear growth function that encodes the evolution of density fluctuations: $\delta({{\bf k}},z)=D_1(z)/D_1(z=0)\,\delta({{\bf k}},z=0)$. This growth rate is very sensitive to the theory of gravity. It has been measured in various surveys and is used to test the consistency of General Relativity and to constrain deviations from it.
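For flat $\Lambda$CDM this growth rate is well approximated by the standard fit $f(z)\simeq\Omega_{\rm m}(z)^{0.55}$, which we use here purely for illustration (it does not enter the analysis):

```python
import numpy as np

def growth_rate_lcdm(z, omega_m0=0.315):
    """f(z) = dln D1 / dln a for flat LCDM, via the standard growth-index
    fit f ~ Omega_m(z)^0.55 (accurate at the sub-percent level)."""
    ez2 = omega_m0 * (1 + z)**3 + (1 - omega_m0)   # H^2(z)/H0^2
    omega_m_z = omega_m0 * (1 + z)**3 / ez2        # Omega_m(z)
    return omega_m_z**0.55
```

At high redshift, deep in the matter era, $\Omega_{\rm m}(z)\to 1$ and hence $f\to 1$, while today $f(0)\simeq 0.53$ for the fiducial matter density.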
In $\Lambda$CDM, $f$ is fully determined by the matter density $\Omega_\mathrm{m} = \Omega_{\rm baryon}+\Omega_{\rm cdm}$. In modified gravity theories, the growth rate depends directly on the parameters of the theory. Here we take an agnostic point of view and simply consider $f$ in each redshift bin as a free parameter.
The monopole of the correlation function can be written as $$\xi_0(d,z)=\left(b^2(z)+\frac{2}{3}\,b(z)f(z)+\frac{1}{5}\,f^2(z)\right)\frac{1}{2\pi^2}\int {\rm d}k\, k^2\, P_\delta(k,z)\, j_0(kd)\,,$$ where $P_\delta$ is the density power spectrum and $j_0$ is the spherical Bessel function of order 0. Assuming that the growth of structure is scale-independent, we can relate the power spectrum at redshift $z$ to its value at a high redshift, $z_\star$, which we choose to be well in the matter era, before the acceleration of the Universe has started: $$P_\delta(k,z)=\left(\frac{D_1(z)}{D_1(z_\star)}\right)^2 P_\delta(k,z_\star)\,.$$ Similarly the parameter $\sigma_8(z)$, which denotes the amplitude of the mean matter fluctuation in a sphere of radius $8h^{-1}$Mpc, evolves as $$\sigma_8(z)=\sigma_8(z_\star)\,\frac{D_1(z)}{D_1(z_\star)}\,.$$ With this, the monopole of the correlation function can be written as $$\xi_0(d,z)=\left(\tilde b^2(z)+\frac{2}{3}\,\tilde b(z)\tilde f(z)+\frac{1}{5}\,\tilde f^2(z)\right)\mu_0(d,z_\star)\,,$$ \[e:mono\] where $$\tilde f(z)\equiv f(z)\,\sigma_8(z)\,,\qquad \tilde b(z)\equiv b(z)\,\sigma_8(z)\,,$$ and $$\mu_\ell(d,z_\star)=\frac{1}{2\pi^2\,\sigma_8^2(z_\star)}\int {\rm d}k\, k^2\, P_\delta(k,z_\star)\, j_\ell(kd)\,,\qquad \ell=0,2,4\,.$$ \[e:muell\] Similarly the quadrupole and hexadecapole of the correlation function take the form $$\xi_2(d,z)=-\left(\frac{4}{3}\,\tilde b(z)\tilde f(z)+\frac{4}{7}\,\tilde f^2(z)\right)\mu_2(d,z_\star)\,,$$ \[e:quad\] $$\xi_4(d,z)=\frac{8}{35}\,\tilde f^2(z)\,\mu_4(d,z_\star)\,.$$ \[e:hexa\]
The functions $\mu_\ell(d, z_\star)$ encode the shape of the multipoles and they depend only on the physics of the early Universe, before acceleration has started. Here we assume that this physics has been determined with high precision by CMB measurements and we take these functions as fixed to their fiducial values. The parameters $\tilde f$ and $\tilde b$ govern the amplitude of the multipoles, and they depend on the growth of structure [at late time, when the expansion of the Universe is accelerating. By combining measurements of the monopole, quadrupole and hexadecapole, these two parameters can be measured directly. These can then be used to test the theory of gravity. This procedure is valid for any model of gravity or dark energy that does not affect the evolution of structures at early time before acceleration has started, i.e. that leaves the functions $\mu_\ell(d,z_\star)$ unchanged.]{} In this case we can treat $\tilde f$ and $\tilde b$ as free parameters in each of the bins of the survey.
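Given tabulated shape functions $\mu_\ell$, the model multipoles are simple polynomials in $\tilde b$ and $\tilde f$, with the standard Kaiser coefficients. A sketch (here `mu0`, `mu2`, `mu4` stand for precomputed values or arrays over separation):

```python
import numpy as np

def standard_multipoles(b_tilde, f_tilde, mu0, mu2, mu4):
    """Density + RSD multipoles of the correlation function, given the
    amplitudes b~ = b*sigma8 and f~ = f*sigma8 and the fixed shape
    functions mu_ell(d, z_star)."""
    xi0 = (b_tilde**2 + 2*b_tilde*f_tilde/3 + f_tilde**2/5) * mu0
    xi2 = -(4*b_tilde*f_tilde/3 + 4*f_tilde**2/7) * mu2
    xi4 = (8*f_tilde**2/35) * mu4
    return xi0, xi2, xi4
```

Because the three multipoles depend on different combinations of $\tilde b$ and $\tilde f$, measuring them jointly breaks the degeneracy between the two amplitudes.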
![ The parameters $\tilde{b}$ (left panel) and $\tilde f$ (right panel) plotted as a function of the mean redshift of the bin without lensing (blue), and with lensing (dashed orange), for SKA2 with 11 redshift bins. We also show the error bars, that are highlighted in red for visibility in the left plot. []{data-label="fig:ska2_tildes_plot1"}](data/figures/ska2_btilde_bins11_corrected.pdf "fig:"){width="0.45\linewidth"} ![ The parameters $\tilde{b}$ (left panel) and $\tilde f$ (right panel) plotted as a function of the mean redshift of the bin without lensing (blue), and with lensing (dashed orange), for SKA2 with 11 redshift bins. We also show the error bars, that are highlighted in red for visibility in the left plot. []{data-label="fig:ska2_tildes_plot1"}](data/figures/ska2_ftilde_bins11_corrected.pdf "fig:"){width="0.45\linewidth"}
As discussed before, lensing generates a new contribution to Eqs. , and . We now determine how this new contribution shifts the measurement of ${\tilde f}$ and ${\tilde b}$ in each redshift bin. In Fig. \[fig:ska2\_tildes\_plot1\], we show ${\tilde b}$ and ${\tilde f}$ inferred with and without the lensing contribution and we compare the difference with the size of the error bars. At high redshift, $z>1.2$, the shift from neglecting lensing is clearly larger than 1$\sigma$. The precise values of the shifts are given in Table \[t:SKA11bin-bf\] for each redshift bin. We see that in the highest bin, the shift reaches 3.1$\sigma$ for ${\tilde b}$, and -2.3$\sigma$ for the growth rate ${\tilde f}$. [The fact that ${\tilde b}$ is shifted towards a larger value, whereas ${\tilde f}$ is shifted towards a smaller value can be understood in the following way: the lensing contribution to the monopole, quadrupole and hexadecapole is positive. In all cases the signal is therefore larger than the model, which does not include lensing. To account for this, the combinations of ${\tilde b}$ and ${\tilde f}$ in Eqs. , and need to be shifted towards larger values for all multipoles. This shift is however not the same for all multipoles: in particular the impact of lensing on the quadrupole is larger than on the monopole. One way to account for this is to increase ${\tilde b}$ while decreasing ${\tilde f}$. Note that this leads to a shift in the wrong direction for the hexadecapole, but since the SNR of the hexadecapole is significantly smaller than that of the monopole and quadrupole, it has a much smaller impact on parameter estimation.]{}
Our analysis shows that neglecting lensing above redshift 1 is not possible for a survey like SKA2. Since the growth rate of structure is directly used to test the theory of gravity, such large shifts would wrongly be interpreted as a deviation from General Relativity.
  $\bar{z}_i$   $\tilde{b}_i$   $\sigma/\theta(\tilde{b}_i)$ ($\%$)   $\Delta/\sigma(\tilde{b}_i)$
 ------------- --------------- ------------------------------------- ------------------------------
  0.20          0.52            2.55                                  -0.02
  0.28          0.53            2.06                                  -0.01
  0.36          0.54            1.19                                  -0.01
  0.44          0.55            0.99                                  -0.00
  0.53          0.57            0.80                                  0.01
  0.64          0.59            0.63                                  0.04
  0.79          0.62            0.47                                  0.15
  1.03          0.68            0.25                                  0.55
  1.31          0.76            0.28                                  1.39
  1.58          0.85            0.39                                  2.50
  1.86          0.97            0.50                                  3.10

: Values of the parameter $\tilde b$ in each redshift bin, for SKA2. We also show the relative error bars on this parameter, $\sigma/\theta$, and the ratio of the shift over the error bars, $\Delta/\sigma$, when lensing is neglected in the modelling of the multipoles.\[t:SKA11bin-bf\]

  $\bar{z}_i$   $\tilde{f}_i$   $\sigma/\theta(\tilde{f}_i)$ ($\%$)   $\Delta/\sigma(\tilde{f}_i)$
 ------------- --------------- ------------------------------------- ------------------------------
  0.20          0.47            3.21                                  0.02
  0.28          0.48            2.61                                  0.01
  0.36          0.48            1.58                                  0.01
  0.44          0.48            1.33                                  0.00
  0.53          0.48            1.11                                  -0.01
  0.64          0.47            0.91                                  -0.04
  0.79          0.46            0.71                                  -0.14
  1.03          0.44            0.46                                  -0.46
  1.31          0.40            0.63                                  -1.12
  1.58          0.37            1.08                                  -1.97
  1.86          0.34            2.11                                  -2.26

: Same as above, for the parameter $\tilde f$.
The difference between the analysis presented here and the $\Lambda$CDM analysis of Section \[s:shiftLCDM\] is that in the $\Lambda$CDM analysis most of the constraining power on the parameters comes from small redshift, where shot noise is significantly smaller due to the large number of galaxies. At these small redshifts, lensing is still negligible and has therefore a relatively small impact on the determination of these parameters. On the other hand, in the model-independent analysis presented here, ${\tilde b}$ and ${\tilde f}$ are free in each redshift bin, and therefore the constraints on their values come from the bin in question. As a consequence, at high redshift, where lensing is important, those constraints are very strongly affected by lensing.
As is clear from our derivation of the shift $\Delta$, the particular value of the shift can only be trusted for values which are (significantly) less than $1\sigma$. Hence our results which yield shifts of $1\sigma$ and more simply indicate that there is a large shift that cannot be neglected. The precise value of this shift would have to be determined by an MCMC method, which goes beyond the scope of the present work, see e.g. [@Cardona:2016qxn].
We now study how the constraints and shifts change if instead of fixing $s(z)$ we parametrize it with four parameters (see Eq. ) and include these parameters in the Fisher analysis. As can be seen from Table \[t:SKA11bin-bf\_magfree\], this degrades the constraints on $\tilde b$ and $\tilde f$ by up to 25%. We see that the low redshift bins are also affected, even though the lensing contribution is negligible there. This is simply due to the fact that adding extra parameters in the model has an impact on all the other parameters, including the values of $\tilde b$ and $\tilde f$ at small redshift. This degradation of the constraints could be mitigated by an independent measurement of $s(z)$. This parameter is indeed given by the slope of the luminosity function which can be measured from the population of galaxies at each redshift. Again we conclude that a precise measurement of $s(z)$ is crucial for an optimal analysis.
  $\bar{z}_i$   $\tilde{b}_i$   $\sigma/\theta(\tilde{b}_i)$ ($\%$)   $\Delta/\sigma(\tilde{b}_i)$
 ------------- --------------- ------------------------------------- ------------------------------
  0.20          0.52            3.17                                  -0.01
  0.28          0.53            2.51                                  -0.01
  0.36          0.54            1.44                                  -0.01
  0.44          0.55            1.26                                  -0.00
  0.53          0.57            1.04                                  0.01
  0.64          0.59            0.81                                  0.03
  0.79          0.62            0.57                                  0.13
  1.03          0.68            0.30                                  0.47
  1.31          0.76            0.34                                  1.16
  1.58          0.85            0.47                                  2.07
  1.86          0.97            0.60                                  2.60

: Same as Table \[t:SKA11bin-bf\] for $\tilde b$, except with the magnification bias parameters $\{s_0, s_1, s_2, s_3\}$ marginalized over.\[t:SKA11bin-bf\_magfree\]

  $\bar{z}_i$   $\tilde{f}_i$   $\sigma/\theta(\tilde{f}_i)$ ($\%$)   $\Delta/\sigma(\tilde{f}_i)$
 ------------- --------------- ------------------------------------- ------------------------------
  0.20          0.47            3.98                                  0.01
  0.28          0.48            3.20                                  0.01
  0.36          0.48            1.91                                  0.01
  0.44          0.48            1.67                                  0.00
  0.53          0.48            1.37                                  -0.01
  0.64          0.47            1.09                                  -0.03
  0.79          0.46            0.83                                  -0.12
  1.03          0.44            0.51                                  -0.41
  1.31          0.40            0.70                                  -1.00
  1.58          0.37            1.21                                  -1.76
  1.86          0.34            2.29                                  -2.08

: Same as above, for $\tilde f$.
Finally, in Fig. \[fig:ska2\_tildes\_plot3\], we compare the shift of ${\tilde b}$ and ${\tilde f}$ for two configurations in SKA2: one with 11 redshift bins, and one with 8 redshift bins. We see that the results are very similar: in both cases the shift becomes larger than 1$\sigma$ above redshift $\sim$ 1.2.
![ The shift divided by the 1$\sigma$ error for the parameters $\tilde{b}$ (left panel) and $\tilde f$ (right panel), plotted as a function of the mean redshift of the bin, for SKA2. The 8 redshift bins configuration is shown in blue and the 11 redshift bins configuration in orange. The horizontal lines denote the widths of the redshift bins. []{data-label="fig:ska2_tildes_plot3"}](data/figures/ska2_btilde_plot3.pdf){width="\linewidth"}
![ The shift divided by the 1$\sigma$ error for the parameters $\tilde{b}$ (left panel) and $\tilde f$ (right panel), plotted as a function of the mean redshift of the bin, for SKA2. The 8 redshift bins configuration is shown in blue and the 11 redshift bins configuration in orange. The horizontal lines denote the widths of the redshift bins. []{data-label="fig:ska2_tildes_plot3"}](data/figures/ska2_ftilde_plot3.pdf){width="\linewidth"}
We have performed a similar analysis for DESI, and we found that the shifts on the parameters are completely negligible at all redshifts, as can be seen from Table \[t:desi8bin-bf\]. This has several reasons, all related to the fact that the lensing SNR for DESI is always below 1. First, DESI does not go out to redshifts as high as SKA2, and hence the lensing contribution is smaller. Furthermore, the error bars for DESI are simply larger than the ones for SKA2, due to the smaller sky coverage. But the most relevant point is that the prefactor $5s(z)-2$, which determines the amplitude of the lensing term, is always significantly smaller for DESI than for SKA2. Only at very low redshifts, where the integral along the photons’ trajectory is still small, does DESI have a relatively large $s(z)$. As discussed in the previous section, this result is based on our derivation of $s(z)$ for the different populations of galaxies in DESI, and should be confirmed by a more detailed modelling of $s(z)$.
  $\bar{z}_i$   $\tilde{b}_i$   $\sigma/\theta(\tilde{b}_i)$ ($\%$)   $\Delta/\sigma(\tilde{b}_i)$
 ------------- --------------- ------------------------------------- ------------------------------
  0.10          1.07            3.08                                  -0.00
  0.21          1.07            1.80                                  0.00
  0.42          1.21            0.46                                  0.05
  0.65          1.40            0.37                                  0.02
  0.79          0.87            0.72                                  0.03
  0.91          0.77            0.55                                  0.05
  1.07          0.67            0.65                                  0.01
  1.39          0.69            0.66                                  0.02

: Values of the parameter $\tilde b$ in each redshift bin, for DESI with 8 redshift bins. We also show the relative error bars on this parameter, $\sigma/\theta$, and the ratio of the shift over the error bars, $\Delta/\sigma$, when neglecting lensing in the modelling of the multipoles.\[t:desi8bin-bf\]

  $\bar{z}_i$   $\tilde{f}_i$   $\sigma/\theta(\tilde{f}_i)$ ($\%$)   $\Delta/\sigma(\tilde{f}_i)$
 ------------- --------------- ------------------------------------- ------------------------------
  0.10          0.45            10.17                                 0.00
  0.21          0.47            6.03                                  -0.00
  0.42          0.48            2.26                                  -0.05
  0.65          0.47            2.81                                  -0.02
  0.79          0.46            2.33                                  -0.03
  0.91          0.45            1.64                                  -0.05
  1.07          0.43            1.68                                  -0.01
  1.39          0.40            2.33                                  -0.01

: Same as above, for $\tilde f$.
Determination of the lensing amplitude {#s:SKA2_AL}
--------------------------------------
$b_0$ $\Omega_{\rm baryon}$ $\Omega_{\rm cdm}$ $h$ $n_s$ $\ln{10^{10}\, A_s}$ $A_\mathrm{L}$
------- ----------------------- -------------------- ------- ------- ---------------------- ----------------
0.37% 1.32% 0.37% 1.26% 1.05% 0.70% 5.46%
: $1\sigma$ relative error (in percent) for the standard $\Lambda$CDM parameters, the bias, and the amplitude of the lensing potential, $A_{\rm L}$, for SKA2 with 11 redshift bins.
\[table:ska2\_errors\_AL\]
Finally, we study how well the amplitude of the lensing potential can be measured with SKA2. We include a parameter $A_{\rm L}$ in front of the lensing potential, with fiducial value $A_{\rm L}=1$, and we let this parameter vary in the Fisher forecasts. Since the quantity which is measured with lensing is the combination $A_{\rm L}(5s(z)-2)$, we can only measure $A_{\rm L}$ if the magnification bias parameter is known. The results are shown in Table \[table:ska2\_errors\_AL\]. We see that SKA2 can measure $A_{\rm L}$ with a precision of 5.46%, which reflects the relatively large signal-to-noise ratio of lensing in SKA2. Comparing these results with those in Table \[t:SKA11bin-bf\], with $A_{\rm L}$ fixed, we see that adding this extra free parameter has almost no impact on the other constraints, which are degraded by less than 1%. This can be understood from the fact that the constraints on $\Lambda$CDM parameters come exclusively from density and RSD. Adding lensing does not improve the constraints on these parameters. Therefore, whether $A_{\rm L}$ is fixed or not has no impact on the constraints on the $\Lambda$CDM parameters.
The $C_\ell(z,z')$ angular power spectra – photometric surveys {#s:Cls}
==============================================================
![Contour plot for the LSST Fisher analysis, in the 12 bins configuration. Blue contours show the constraints on the parameters when we consistently include lensing in our theoretical model. Red contours show the constraints on the parameters obtained neglecting lensing magnification. In the latter case, the best-fit parameters are shifted with respect to the fiducial values. []{data-label="fig:lsst-fisher-12bins"}](data/figures/LSST_lens_test.pdf){width="\linewidth"}
For photometric surveys, where the redshift is not very well known and the number of redshift bins is not exceedingly large, we base our parameter estimation on the angular power spectra, $C_\ell(z,z')$. The angular power spectrum is related to the two-point correlation of $\Delta$ through $$\left\langle \Delta({\bf n},z)\,\Delta({\bf n}',z')\right\rangle = \frac{1}{4\pi}\sum_\ell (2\ell+1)\, C_\ell(z,z')\, P_\ell({\bf n}\cdot{\bf n}')\,, \label{e:Cldef}$$ where $P_\ell$ denotes the Legendre polynomial of degree $\ell$. The $C_\ell(z,z')$'s are well adapted to future surveys since they automatically encode wide-angle effects. Another advantage of the $C_\ell(z,z')$'s is that they refer only to the directly measured quantities $z$, $z'$ and $\theta$, and can therefore be determined from the data in a completely model-independent way. Moreover, lensing is easily included in their modelling. The $C_\ell(z,z')$'s therefore provide a new route, alternative to shape measurements of background galaxies, to determine the lensing potential.
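The Legendre sum in this relation is straightforward to evaluate numerically. The following is a minimal sketch (not code from this work; the function name is ours) that reconstructs the angular correlation from a given set of $C_\ell(z,z')$ coefficients:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def xi_from_cl(cl, cos_theta):
    """Angular two-point function xi = sum_ell (2l+1)/(4pi) C_l P_l(cos theta).

    cl: array of C_ell(z, z') coefficients, starting at ell = 0.
    cos_theta: cosine(s) of the angle between the directions n and n'.
    """
    ell = np.arange(len(cl))
    # fold the (2l+1)/(4pi) weights into the Legendre coefficients
    weighted = (2 * ell + 1) * np.asarray(cl) / (4 * np.pi)
    return legval(cos_theta, weighted)

# sanity check: a pure monopole C_0 = 4pi gives xi = 1 at every angle
xi = xi_from_cl([4 * np.pi], np.array([1.0, 0.0, -1.0]))
```

`numpy.polynomial.legendre.legval` evaluates $\sum_\ell c_\ell P_\ell(x)$ directly, so the only work is weighting the coefficients.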
We investigate the precision with which a photometric galaxy catalog like LSST will be able to constrain cosmological parameters from the galaxy number counts. We consider three different configurations with 5, 8 and 12 redshift bins and perform a Fisher matrix analysis for the $\Lambda$CDM parameters given in Table \[table:planck\_params\], plus the bias with fiducial value $b_0=1$. The evolution of the bias with redshift is given in Appendix \[a:LSST\]. We first fix $s(z)$ as given in Appendix \[a:LSST\].
As for spectroscopic surveys, we perform two Fisher analyses: one where we neglect lensing, and one where we include it. This allows us to compare the error bars in these two cases and to determine whether lensing brings additional constraining power. We also compute the shift of the parameters when lensing is excluded. In all cases we include both the auto-correlation, $C_\ell(z,z)$, and the cross-correlation between different bins, $C_\ell(z,z')$ with $z\neq z'$. More details are given in Appendix \[ap:photo-alone\].
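The shift computation referred to here follows the standard linear Fisher-bias formalism: the best-fit displacement induced by a neglected contribution is the Fisher inverse applied to the projection of that contribution onto the parameter derivatives. A schematic numpy implementation (our own notation, for a generic data vector; not the paper's actual pipeline) might look like:

```python
import numpy as np

def fisher_forecast(dmodel, cov, neglected):
    """Fisher matrix and best-fit shift for a neglected contribution.

    dmodel:    (n_data, n_params) derivatives of the model w.r.t. parameters.
    cov:       (n_data, n_data) data covariance.
    neglected: (n_data,) signal left out of the model (here: lensing).
    Returns marginalized 1-sigma errors and the relative shifts Delta/sigma.
    """
    cinv = np.linalg.inv(cov)
    F = dmodel.T @ cinv @ dmodel        # Fisher matrix
    B = dmodel.T @ cinv @ neglected     # projection of the systematic
    shift = np.linalg.solve(F, B)       # Delta_theta = F^{-1} B
    sigma = np.sqrt(np.diag(np.linalg.inv(F)))
    return sigma, shift / sigma

# toy example: one parameter measured through one data point
sigma, rel_shift = fisher_forecast(np.array([[2.0]]), np.eye(1), np.array([1.0]))
```

In the toy case above the derivative is 2, so $\sigma = 1/2$ and a neglected signal of 1 biases the parameter by one full standard deviation.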
The results are shown in Fig. \[fig:lsst-fisher-12bins\]. We find that increasing the number of bins significantly reduces the error bars, and in the following we therefore show only the optimal case, with 12 redshift bins. Results for the 5 and 8 bin configurations can be found in Appendix \[ap:photo-alone\]. Comparing the error bars with and without lensing, we see that adding lensing has a small impact on the constraints for most of the $\Lambda$CDM parameters, much as for spectroscopic surveys. The only constraints that are significantly improved when lensing is included are those on the bias $b_0$ and on the primordial amplitude $A_s$. This can be understood from the fact that the density contribution is sensitive to the combination $b_0^2 A_s$, whereas the lensing contribution is sensitive to $b_0 A_s$ (through the density-lensing correlation) and to $A_s$ (through the lensing-lensing correlation). As a consequence, the lensing contribution helps to break the degeneracy between $A_s$ and $b_0$.
We have studied how this improvement changes if instead of fixing $s(z)$ we model it by three parameters (see Appendix \[a:LSST\]) that we let vary in our Fisher analysis. We find that in this case including lensing provides almost no improvement on the parameters constraints (see Fig. \[fig:lsst-fisher-sbias-margin\] in Appendix \[s:LSST\_constraints\]). A good knowledge of $s(z)$ is therefore crucial.
From Fig. \[fig:lsst-fisher-12bins\], we see that the shifts induced by neglecting lensing are somewhat larger than in spectroscopic surveys. This is due, first, to the fact that in photometric surveys the density contribution is smaller than in spectroscopic surveys, because the large size of the redshift bins averages out the small-scale modes. In spectroscopic surveys, this effect is almost absent since the density contribution is averaged over the size of the pixels, which are usually very small, between 2 and 8 Mpc/$h$. The lensing, on the other hand, is almost unaffected by the size of the bins or pixels (see e.g. the discussion in [@Bonvin:2011bg]), and therefore its importance relative to the density contribution is larger in photometric surveys than in spectroscopic surveys. The second effect which reduces the importance of lensing in spectroscopic surveys is the fact that only the first three multipoles are measured there. Since lensing has a complicated dependence on the orientation of the pair of galaxies, these first three multipoles only encode part of the lensing signal, which contributes to much larger multipoles, as has been shown in [@Tansella:2017rpi]. As a consequence, part of the lensing signal is removed in spectroscopic analyses. This is not the case in photometric surveys, where the angular power spectrum is used, which contains the full lensing contribution.
The fact that the lensing contribution shifts cosmological parameters in photometric surveys is consistent with previous studies [@Raccanelli:2015vla; @Alonso:2015uua; @Cardona:2016qxn; @Lorenz:2017iez], and it shows that lensing cannot be neglected in a survey like LSST. The parameter which is most shifted is the spectral index $n_s$, which is shifted by 1.3$\sigma$ when lensing is neglected (see Table 16 in Appendix \[s:LSST\_constraints\]). Lensing adds small-scale power, and if this is interpreted as coming from density fluctuations, a somewhat larger spectral index is inferred. The bias $b_0$ and the amplitude $A_s$ are also shifted by more than 1$\sigma$. We see that $A_s$ is shifted toward a larger value, whereas $b_0$ is shifted toward a smaller value. This can be understood in the following way: the lensing term contributes positively to the angular power spectrum at high redshift, where lensing is important. $A_s$ and $b_0$ must therefore be shifted to increase the amplitude of the $C_\ell$'s. Since lensing increases with redshift, this shift cannot be the same at all redshifts, and it is therefore not possible to increase both $b_0$ and $A_s$. The solution is then to increase $A_s$ but to decrease $b_0$. By doing that, the density contribution, which is proportional to $A_s b_0^2$, increases less than the RSD contribution, which is sensitive to $A_s b_0$ and $A_s$. Since the relative importance of RSD with respect to the density increases with redshift, this is the best way of mimicking the lensing signal. Note that the opposite happens if, instead of considering 12 redshift bins, we consider 5 redshift bins. In this case, $A_s$ is shifted toward a smaller value, while $b_0$ is shifted toward a larger value (see Table 16 in Appendix \[s:LSST\_constraints\]). For 5 redshift bins, RSD are completely subdominant, and therefore the redshift dependence of the lensing cannot be reproduced by a shift of $A_s$ and $b_0$.
What governs the shift of $A_s$ and $b_0$ in this case is probably the redshift bin where lensing affects the constraints most. Finally, $h$ is also shifted, by a larger amount than in the spectroscopic case. Only $\Omega_{\rm cdm}$ and $\Omega_{\rm baryon}$ are virtually unaffected. On the other hand, we have tested that including the large-scale relativistic effects or not does not influence the parameter estimation appreciably. It is therefore justified to consider only density, RSD and lensing for the analysis.
![ The comparison of constraints on cosmological parameters in $\Lambda$CDM, including lensing, from: SKA2 spectroscopic using 11 bins (blue), DESI spectroscopic using 8 bins (red), and LSST photometric using 12 bins and the redshift range \[0, 2.5\] and using both auto- and cross-correlations (black), with the galaxy bias parameter $b_0$ marginalized. []{data-label="fig:comparison"}](data/figures/comparison_surveys.pdf){width="\linewidth"}
$b_0$ $\Omega_\mathrm{baryon}$ $\Omega_\mathrm{cdm}$ $h$ $n_s$ $\ln 10^{10} A_s$
------ ------- -------------------------- ----------------------- ------ ------- -------------------
DESI 0.75 3.50 0.85 3.30 2.65 1.75
SKA2 0.36 1.31 0.37 1.26 1.04 0.70
LSST 2.4 2.41 0.21 0.67 0.50 1.6
: Constraints (in percent) on $\Lambda$CDM parameters for DESI (8 bins), SKA2 (11 bins) and LSST (12 bins).
\[table:comparison\_surveys\_constraints\_3\]
In Fig. \[fig:comparison\] and Table \[table:comparison\_surveys\_constraints\_3\], we compare the size of the error bars for LSST, DESI and SKA2. The error bars on $h$ and $n_s$ in LSST are about a factor of 5 smaller than in DESI and a factor of 2 smaller than in SKA2. The constraints on $\Omega_{\rm cdm}$ are also better in LSST than in SKA2 and DESI. This is simply due to the larger number of galaxies in photometric samples, which strongly reduces the shot noise compared to spectroscopic samples. On the other hand, the constraints on $\Omega_{\rm baryon}$, $b_0$ and $A_s$ are significantly better in SKA2 than in LSST. For $\Omega_{\rm baryon}$ this is because the constraints depend on a good resolution of the acoustic oscillations. For $A_s$ and $b_0$, the difference between SKA2 and LSST comes from the fact that these two parameters are degenerate in the density contribution. RSD break this degeneracy very strongly in SKA2, but only marginally in LSST, where they are subdominant. Lensing helps to break this degeneracy in LSST, but less efficiently, because of the size of this contribution. Generally, from Fig. \[fig:comparison\], we see that LSST and SKA2 are affected by different degeneracies between parameters, and that they are therefore highly complementary in constraining $\Lambda$CDM parameters. As discussed in Section \[s:growth\], SKA2 has furthermore the advantage of measuring the growth rate of structure $f$ in a model-independent way, something which is not achievable with a photometric survey like LSST.
Finally, we have studied how well we can measure the lensing amplitude $A_{\rm L}$ with LSST. We find that $A_{\rm L}$ can be determined with a precision of 7.7% (see Table \[table:lsst\_errors\] in Appendix \[s:LSST\_AL\]). This is worse than the precision obtained on $A_{\rm L}$ with SKA2 (5.5%), which is somewhat surprising since lensing is stronger in the cross-correlations of the $C_\ell$'s than in the multipoles of the correlation function. It can however be understood from the fact that, in the $C_\ell$'s, RSD are strongly subdominant, whereas in the correlation function, RSD are extremely well detected. As a consequence, in the $C_\ell$'s, $A_s$ and $b_0$ are degenerate, and this degeneracy is only broken if lensing is added with a known amplitude. If we do not know the lensing amplitude, adding lensing helps only marginally, since lensing is now sensitive to the combinations $A_{\rm L}b_0A_s$ (through the density-lensing correlations) and $A_{\rm L}^2 A_s$ (through the lensing-lensing correlations). In this case adding lensing improves the constraints on $A_s$ and $b_0$ by only a few percent, and $A_{\rm L}$ is less well determined than in a spectroscopic survey, where the degeneracy between $b_0$ and $A_s$ is already broken by RSD.
Combining angular power spectra and correlation function {#s:main_result}
========================================================
One drawback of the correlation function is that it does not account for correlations between different redshift bins. In this section, we propose an improved analysis that uses the multipoles of the correlation function in each redshift bin, and the angular power spectrum for the cross-correlations between different redshift bins. These two estimators can be considered as independent, since the correlation function probes the auto-correlation of density and RSD within a bin, and neglects correlation of these quantities between different redshift bins. On the other hand, the angular power spectrum probes the density and lensing cross-correlations between different bins, but neglects their correlation within the same redshift bin. We can therefore simply add these two estimators in the Fisher matrix. We consider the $\Lambda$CDM parameters of Table \[table:planck\_params\], plus the galaxy bias of SKA2.
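Because the two estimators are treated as independent, their Fisher matrices (over the same parameter vector) simply add before inversion. Schematically (our notation, not the paper's code):

```python
import numpy as np

def combined_errors(F_multipoles, F_cls):
    """Marginalized 1-sigma errors from two independent estimators.

    The multipoles (auto-correlations within bins) and the C_ell's
    (cross-correlations between bins) are treated as independent,
    so their Fisher matrices add.
    """
    F_tot = F_multipoles + F_cls
    return np.sqrt(np.diag(np.linalg.inv(F_tot)))

# two independent measurements of one parameter, with sigma = 1/2 and 1/3
errors = combined_errors(np.array([[4.0]]), np.array([[9.0]]))
```

In the one-parameter toy case the combined error is $1/\sqrt{4+9}$, i.e. the usual inverse-variance combination.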
We perform an analysis where we include lensing in the angular power spectrum but neglect it in the multipoles of the correlation function. The motivation is that it is much easier to include the lensing contribution in the $C_\ell(z,z')$ than in the correlation function, and we want to understand whether this is enough to remove the shift from the whole analysis. We find that it is not: the shift is the same with and without the cross-correlations (see Fig. \[fig:ska2\_with\_cross\] in Appendix \[s:cross\]).
We then compare the constraints on the $\Lambda$CDM cosmological parameters from this analysis with those obtained when only the correlation function is used. We find that adding the $C_\ell$'s improves the constraints by less than 6%.
![ The constraints on cosmological parameters from SKA2 (blue) and SKA2 + LSST only cross-correlation (red) using a configuration with 8 redshift bins as described in \[a:SKAII\], plus the additional redshift bin for LSST with $\bar z = 2.25$ and $\Delta z = 0.5$. The shifts of the contours are computed neglecting lensing from the correlation function (SKA2), and consistently including it in the angular power spectrum (LSST). The black lines on the diagonal plots denote the fiducial values. []{data-label="fig:ska2_with_lsst"}](data/figures/ska2_with_without_lsst_bias_bins9.pdf){width="\linewidth"}
We then extend this combined analysis to the case where we have a spectroscopic and a photometric survey. For this we use the same redshift bins for both types of surveys. Inside one redshift bin we then consider only the spectroscopic survey with the correlation function, while for the cross-correlations of different bins we consider only the photometric survey with the angular cross-power spectrum, $C_\ell(z,z')$. To this we add the auto- and cross-correlations of the $C_\ell(z,z')$ for the high redshift bins, $z\in[2,2.5]$, that are observed with LSST but not with SKA2 [^2]. As before, we can neglect the covariance between the $C_\ell$'s and the correlation function. In principle, we could also add to this combined analysis the auto-correlation of the angular power spectrum for $z\leq 2$, which brings additional information since the samples of galaxies in the spectroscopic and photometric surveys are different. However, to do this in a consistent way we would have to account for the covariance between the $C_\ell$'s and the correlation function within the same redshift bin, since the different populations of galaxies trace the same underlying density field. This is beyond the scope of our paper.
We use the 8 bin configuration from SKA2, since the redshift resolution of LSST does not allow us to use the 11 bin configuration, and we add an extra bin for LSST with $\bar z = 2.25$ and $\Delta z = 0.5$. We have tested that the results for SKA2 are very similar for the 8 and 11 bin configurations. We consider the $\Lambda$CDM parameters of Table \[table:planck\_params\], plus the bias of SKA2 and the bias of LSST. Since these two biases are independent, we cannot constrain the bias of LSST using the auto-correlation of SKA2. To solve this problem, we include the auto-correlation of the $C_\ell$'s in the entry of the Fisher matrix related to the bias of LSST. This can be done consistently since the two biases are independent, so there is no covariance between the $C_\ell$'s and the correlation function for this entry.
As before, we compute the shift generated by an analysis where we include lensing in the angular power spectrum but not in the correlation function. In Fig. \[fig:ska2\_with\_lsst\] and Table \[table:comparison\_surveys\_constraints\] we compare this analysis with the one of Section \[s:shiftLCDM\], where we consider only SKA2. We have marginalized over the clustering biases of the two surveys. For the parameters $n_s$ and $\Omega_{\rm baryon}$, adding the angular power spectrum reduces the shift generated by neglecting lensing. However, it does not completely remove it. Moreover, for $\Omega_{\rm cdm}$, $A_s$ and $h$, the shift, even though still quite small, actually increases. Interestingly, for $h$ and $A_s$ the shift changes sign. This is possible since the parameters are not independent. For example, decreasing $\Omega_{\rm cdm}$ requires a larger amplitude $A_s$, and in the present case the best fit actually requires an $A_s$ which is about $0.35\sigma$ too large, while when considering only SKA2 one obtains a value which is $0.15\sigma$ too small. In Table \[table:comparison\_surveys\_constraints\] we also compare the shifts when the magnification bias of LSST is fixed with the shifts when the magnification bias is parameterized with three parameters. We see that the shifts are very similar in the two cases. These results show that including lensing only in the angular power spectrum is not a satisfactory solution. One needs to include it both in the angular power spectrum and in the multipoles of the correlation function.
$h$ $\ln 10^{10} A_s$ $n_s$ $\Omega_\mathrm{baryon}$ $\Omega_\mathrm{cdm}$
--------------------- ------------------- -------- ------------------- -------- -------------------------- -----------------------
SKA2 $\sigma\,[\%]$ 1.255 0.697 1.041 1.312 0.372
$\Delta / \sigma$ 0.198 -0.145 -0.606 0.320 -0.194
SKA2 + LSST-cross $\sigma\,[\%]$ 1.096 0.619 0.853 1.148 0.337
$s(z)$ fixed $\Delta / \sigma$ -0.225 0.347 -0.183 -0.069 -0.594
SKA2 + LSST-cross $\sigma\,[\%]$ 1.096 0.619 0.854 1.149 0.337
$s(z)$ marginalized $\Delta / \sigma$ -0.243 0.327 -0.165 -0.094 -0.585
: Constraints (in percent) and shifts in unit of $1\sigma$ on $\Lambda$CDM parameters for SKA2 (8 bins), SKA2 (8 bins) + LSST-cross (9 bins) with magnification bias fixed, SKA2 (8 bins) + LSST-cross (9 bins) with three parameters for the LSST magnification bias marginalized over.
\[table:comparison\_surveys\_constraints\]
We then compare the constraints on the best-fit parameters obtained from the SKA2 analysis with those of the combined analysis. The results are shown in Table \[table:comparison\_surveys\_constraints\] for the two cases, i.e. when the magnification bias of LSST is fixed and when it is parameterized with three parameters. [^3] In both cases, we find that including the cross-correlations from LSST improves the constraints on the parameters by 10–20%. We find that marginalizing over the magnification bias parameters does not have a significant impact on the combined analysis. This is because the improvement in the constraints in the combined analysis is mainly driven by the density correlations between adjacent redshift bins, not by lensing.
Note that here we have only studied the $\Lambda$CDM case. We defer an analysis, where we consider the growth of structure in each redshift bin as a free parameter, to a future paper. This will require to rewrite the angular power spectrum in terms of this growth rate and to determine how well the constraints on this quantity are improved by adding the $C_\ell$’s to the correlation function.
Conclusion {#s:con}
==========
In this paper we have studied the impact of gravitational lensing on the number counts of galaxies. We have considered both spectroscopic and photometric surveys. We have used a Fisher matrix analysis to determine the precision with which parameters can be constrained, and to compute the shift of these parameters when we neglect lensing. We have only considered quasi-linear scales. Including smaller scales with good theoretical control over the non-linear corrections will certainly improve the capacity of future surveys.
For a photometric survey like LSST, we have confirmed the results previously derived in the literature [@Montanari:2015rga; @Cardona:2016qxn; @DiDio:2016ykq; @Lorenz:2017iez; @Villa_2018], showing that lensing cannot be neglected in the measurement of standard $\Lambda$CDM parameters since this would induce a shift of these parameters as high as 1.3$\sigma$.
Our analysis for spectroscopic surveys is completely new, since lensing has so far never been included in the modelling of the multipoles of the correlation function. We have found that the importance of lensing for parameter estimation depends strongly on the cosmological model. For a $\Lambda$CDM analysis, neglecting lensing in a survey like SKA2 generates a shift of at most $0.6\sigma$. However, we argue that this is non-negligible, because such a shift could hide (or enhance) deviations from General Relativity. If instead we perform a model-independent analysis, where the parameters to measure are the growth rate of structure and the bias in each redshift bin, then neglecting lensing in SKA2 generates a shift of the growth rate as large as $2.3\sigma$ in the highest bins of the survey. Since the growth rate is directly used to test the theory of gravity, such a large shift would be wrongly interpreted as a breakdown of General Relativity. It is therefore of crucial importance to develop fast and efficient codes that include the lensing contribution and that can be used in the analysis of future spectroscopic surveys.
Contrary to SKA2, we have found that lensing has almost no impact on a survey like DESI. This is mainly due to the fact that in DESI the value of the prefactor, $5s(z)-2$, is 6 times smaller than for SKA2 at $z>1$, where the lensing contribution could become relevant. However, a more detailed modelling of $s(z)$ is needed in order to confirm this result.
We have also compared the constraining power of DESI, SKA2 and LSST. A somewhat surprising result is that LSST promises the best constraints on three of the standard cosmological parameters: $\Omega_{\rm cdm}$, $h$ and $n_s$. In particular, we have found that for $h$ and $n_s$, LSST can achieve error bars about a factor of 2 smaller than the most futuristic spectroscopic survey presently planned, SKA2. The main reason for this is the number of galaxies, which is about 10 times higher in LSST ($N\sim 10^{10}$) than in SKA2 ($N\sim 10^9$), yielding shot-noise errors 3 times smaller. This more than compensates for the reduced redshift accuracy, which is not so crucial for $\Omega_{\rm cdm}$, $h$ and $n_s$. On the contrary, SKA2 constrains $\Omega_{\rm baryon}$ (which relies on a good resolution of the baryon acoustic oscillations), $A_s$ and the bias better. These last two parameters are degenerate in the density contribution, and therefore RSD, which are very prominent in a spectroscopic survey like SKA2, are important to break this degeneracy. We have seen that lensing does help to break this degeneracy in LSST, and consequently improves the constraints on $A_s$ and the bias by a factor of 3. However, this is not sufficient to be competitive with SKA2. Interestingly, we have also found that the amplitude of the lensing potential, $A_{\rm L}$, is better determined with SKA2 than with LSST.
[This comparison shows that spectroscopic and photometric surveys are highly complementary to probe $\Lambda$CDM, since they are affected differently by degeneracies between parameters.]{} Using photometric surveys not only for shear (weak lensing) measurements but also for galaxy number counts is therefore a very promising direction. We however stress that the best constraints for LSST are achieved if we use 12 redshift bins, hence good photometric redshifts are important. The errors on parameters using 8 redshift bins for LSST are substantially larger than those of SKA2 and comparable or larger than those of DESI.
The real advantage of spectroscopic surveys over photometric surveys is their capability to measure the growth rate of structure in a model-independent way, by combining measurements of the monopole, quadrupole and hexadecapole. For the redshift bins centered around $z=0.4$ and higher, DESI can measure the growth rate with an accuracy of 2–3% and the bias to better than 1%. SKA2 will improve on these constraints, measuring the growth rate in each bin with an accuracy of 0.5–3%, and the bias with an accuracy of 0.2–1%.
We have not studied the constraint on the growth rate for photometric surveys since, for the bin widths of $\Delta z\geq 0.2$ considered in LSST, RSD are washed out and the sensitivity to $f(z)$ is lost. However, as shown in [@Jalilvand:2019brk] (see their Fig. 9), already for bin widths of $\Delta z\leq 0.1$ the signal-to-noise ratio of RSD for a photometric survey with negligible shot noise can reach 10 at $z=0.5$, and more at higher redshifts (up to 100 at $z=2$). Therefore, with sufficiently good photometric redshifts and sufficiently high numbers of galaxies, photometric surveys can also be used to determine the growth rate and provide a stringent test of gravity.
Finally, we have proposed a way of combining spectroscopic and photometric surveys, by using the multipoles of the correlation function within each redshift bin, and the angular power spectrum to cross-correlate different redshift bins. We have found that for SKA2 and LSST, neglecting lensing in the multipoles of the correlation function but including it in the $C_\ell$’s is not enough to completely remove the shift, which remains of the order of 0.6$\sigma$.
Therefore, lensing has to be included in the modelling of both the correlation function and the angular power spectrum. Moreover, for standard $\Lambda$CDM parameters, we have found that adding cross-correlations from LSST to SKA2 improves the constraints by 10–20%. This shows that some information is present in the cross-correlations between different redshift bins, which should not be neglected. We have found that, for $\Lambda$CDM parameters, this information mainly comes from density correlations between neighbouring bins, rather than from lensing correlations.
At the moment, such a combined analysis does not allow us to measure the growth rate in a model-independent way, since it is not clear how this growth rate can be modelled in the angular power spectrum. This would, however, be an optimal analysis, combining model-independent measurements of the growth rate and of the lensing amplitude, which can be measured with good accuracy from the cross-correlations of the $C_\ell$'s. Such a measurement would significantly increase our capability to test gravity, by probing the relation between density, velocity and gravitational potentials.
Acknowledgments {#acknowledgments .unnumbered}
===============
We thank Benjamin Bose for fruitful discussions. The confidence ellipses of Figs. \[fig:ska2\_param\_errors\_std\_bins11\], \[fig:lsst-fisher-12bins\], \[fig:comparison\], \[fig:ska2\_with\_lsst\], \[fig:ska2\_with\_cross\] were plotted using a customized version of the CosmicFish package for cosmological forecasts [@raveri2016cosmicfish]. This work is supported by the Swiss National Science Foundation.
Survey specifications and biases {#ap:surveyspec}
================================
In this appendix we specify the redshift bins, $z_i$, as well as the galaxy bias, $b(z_i)$, the magnification bias, $s(z_i)$, and the number of galaxies per bin, $N(z_i)$, for the three surveys considered in this paper. For LSST and SKA2 we can refer to the literature for estimates of the magnification bias $s(z)$. For DESI we derive an effective $s(z)$ by studying the three different galaxy populations which the survey will observe.
Magnification bias: generalities
--------------------------------
In order to compute the magnification bias for a galaxy population, we need to estimate the luminosity function, i.e. the comoving number density of sources in a certain luminosity range, for the type of galaxies under consideration. The luminosity function is modelled analytically with a Schechter function. In terms of the absolute magnitude $M$ it can be expressed as [@Montanari:2015rga] $$\Phi(M, z) dM = 0.4 \ln{(10)} \phi^* \left(10^{ 0.4 (M^* - M)}\right)^{\alpha + 1} \exp{\left[- 10^{0.4 (M^* - M)} \right]} dM\,.$$ The parameters $\phi^*$, $M^*$ and $\alpha$ are redshift dependent and they can be estimated from data for different types of galaxies.
The magnification bias, at a given redshift, is computed as in [@Montanari:2015rga] $$s(z, M_\text{lim}) = \frac{1}{\ln{10}}
\frac{\Phi(M_\text{lim}, z)}{\bar{n}(M<M_\text{lim})}\, ,
\label{eq:s-bias}$$ where $\bar{n}(M < M_\text{lim})$ is the cumulative luminosity function $$\bar{n}(M < M_\text{lim}) = \int^{M_\text{lim}}_{-\infty}
\Phi(M, z) dM\, .$$
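As an illustration, the definition of $s(z, M_\text{lim})$ above can be evaluated numerically by integrating the Schechter function up to the magnitude cut. The sketch below is ours, not part of the survey pipelines; it reuses the constant Schechter parameters quoted later for the bright galaxy sample ($\alpha = -1.20$, $\phi^* = 1.46\times10^{-2}\,h^3\,\mathrm{Mpc}^{-3}$, $M^* = -20.83$) purely as illustrative inputs, and the two magnitude cuts are hypothetical.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (kept explicit for portability across numpy versions)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def schechter(M, phi_star, M_star, alpha):
    """Schechter luminosity function Phi(M) per unit absolute magnitude."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

def magnification_bias(M_lim, phi_star, M_star, alpha, M_bright=-30.0, n=20000):
    """s = Phi(M_lim) / (ln10 * nbar(<M_lim)), i.e. the definition above."""
    M = np.linspace(M_bright, M_lim, n)   # integrate from very bright objects to the cut
    nbar = trapz(schechter(M, phi_star, M_star, alpha), M)
    return schechter(M_lim, phi_star, M_star, alpha) / (np.log(10.0) * nbar)

# Illustrative BGS-like Schechter parameters and two hypothetical magnitude cuts
phi_star, M_star, alpha = 1.46e-2, -20.83, -1.20
s_faint  = magnification_bias(-19.0, phi_star, M_star, alpha)
s_bright = magnification_bias(-22.0, phi_star, M_star, alpha)
```

Brighter cuts probe the exponential tail of the Schechter function, where the counts are steeper, so $s$ grows as the cut brightens.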
The limiting magnitude, or flux, is fixed in the observer-frame band. Therefore, for each survey and galaxy sample we need to convert the apparent limiting magnitude $m_\text{lim}$ into the absolute magnitude at the redshift of the sources $M_\text{lim}$. These two quantities are related as follows [@Alonso:2015uua] $$M = m - 25 - 5 \log_{10}{\left[\frac{D_\text{L}(z)}{10 \,\,\text{Mpc}\,\, h^{-1}}\right]} + \log_{10}{h}
- K(z)\,.$$ Here $D_\text{L}$ is the luminosity distance and $K$ is the $K$-correction [@Peacock:879495] $$10^{0.4 K(z)} = \frac{\int T(\lambda) f_{\log}(\lambda) d\ln{\lambda}}{\int T(\lambda) f_{\log}(\lambda/(1+z)) d\ln{\lambda}}\, ,
\label{k-corr}$$ where $f_{\log} = \lambda f_\lambda$ is the logarithmic flux density and $T$ is the effective filter transmission function in a given band. The galaxy spectra are observed in a fixed waveband, while absolute magnitudes are affected by a shift of the spectrum in frequency. The $K$-correction accounts for this effect: it is an estimate of the difference between the observed spectrum at a given redshift and what would be observed if we could measure true bolometric magnitudes.
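A quick numerical check of the $K$-correction integral above: for a pure power-law spectrum $f_\lambda \propto \lambda^{-\beta}$ one has $f_{\log} \propto \lambda^{1-\beta}$, so the ratio of the two integrals factorizes and $K(z) = 2.5(1-\beta)\log_{10}(1+z)$, independently of the filter. The sketch below verifies this with a toy Gaussian filter; the filter shape and the value of $\beta$ are our assumptions, not survey specifics.

```python
import numpy as np

def k_correction(z, T, f_log, lam):
    """Numerical K-correction: 10^{0.4 K} = int T f_log dln(lam) / int T f_log(lam/(1+z)) dln(lam)."""
    dlnlam = np.log(lam[1] / lam[0])                     # uniform log-spaced grid
    num = np.sum(T(lam) * f_log(lam)) * dlnlam
    den = np.sum(T(lam) * f_log(lam / (1.0 + z))) * dlnlam
    return 2.5 * np.log10(num / den)

# Toy ingredients: a Gaussian r-like filter and f_lambda ~ lam^{-beta}, f_log ~ lam^{1-beta}
beta = 2.0
T = lambda lam: np.exp(-0.5 * ((lam - 6200.0) / 700.0) ** 2)
f_log = lambda lam: lam ** (1.0 - beta)

lam = np.geomspace(3000.0, 11000.0, 4000)
z = 0.5
K_num = k_correction(z, T, f_log, lam)
K_analytic = 2.5 * (1.0 - beta) * np.log10(1.0 + z)      # exact for a pure power law
```

For $\beta > 1$ the $K$-correction is negative and roughly linear in $z$ at low redshift, qualitatively like the ELG fit $K(z) \sim -0.1\,z$ quoted below.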
To sum up, in order to estimate the magnification bias, we need the luminosity function of the galaxy population, the limiting magnitude for a galaxy survey, and an estimate of the $K$-correction for the survey’s observed wavebands.
LSST {#a:LSST}
----
The Large Synoptic Survey Telescope (LSST) [@LSST] is a wide-angle deep photometric survey expected to be operational from 2022. In our forecast, we adopt the specifications for LSST described in Ref. [@Alonso:2015uua], i.e. we consider the so-called LSST 'gold' sample: we assume a redshift range $z \in [0, 2.5]$ and a sky fraction $f_\text{sky} = 0.5$. The galaxy sample considered here includes approximately 3 billion galaxies.
The luminosity function of the sample is modelled as a Schechter function, with constant slope and redshift-dependent $M^*$ and $\phi^*$, while the $K$-correction is assumed to be proportional to the redshift of the sources (see Ref. [@Alonso:2015uua] for details).
The computation of the redshift distribution of the sources, the galaxy bias and the magnification bias has been implemented by the authors of Ref. [@Alonso:2015uua] in a public routine[^4]. The specifics for LSST here described are computed using this code.
![Redshift distribution for the LSST galaxy sample, for three values of the magnitude cut in the $r$-band $m_\text{lim}$. []{data-label="fig:lsst-dNdz"}](data/figures/dNdzdOmega-LSST){width="70.00000%"}
In Fig. \[fig:lsst-dNdz\] we show the redshift distribution of the LSST galaxy sample for different values of the magnitude cut $m_\text{lim}$. The 'gold' sample, which we use in our analysis, adopts a magnitude cut $m_\text{lim} = 26$.
[0.49]{} ![Clustering and magnification bias for LSST.[]{data-label="fig:lsst-biases"}](data/figures/bz-LSST.pdf "fig:"){width="\textwidth"}
[0.49]{} ![Clustering and magnification bias for LSST.[]{data-label="fig:lsst-biases"}](data/figures/sz-LSST.pdf "fig:"){width="\textwidth"}
In Fig. \[fig:lsst-biases\] we show the redshift dependence of galaxy bias (panel \[fig:b-lsst\]) and magnification bias (panel \[fig:s-lsst\]). The galaxy bias is modelled as $b(z) = 1 + 0.84\,\,z$. In our Fisher forecasts, we multiply this bias by a parameter $b_0$ with fiducial value $b_0=1$, that we let vary. The different markers denote the values of the galaxy bias $b(z)$ at the mean redshifts for the three configurations studied in Appendix \[ap:photo-alone\].
In our Fisher analyses we consider two cases for the magnification bias: one where we fix it to its fiducial value, and one where we model it with a few free parameters that we let vary. For this purpose, we fit $s(z)$ from Fig. \[fig:s-lsst\] with an exponential $$s(z) = s_0\, e^{s_1 z + s_2 z^2}\,,
\label{e:fit_s_LSST}$$ where $s_0$, $s_1$ and $s_2$ are three free parameters with fiducial values $s_0 = 0.1405663$, $s_1 = 0.30373535$ and $s_2 = 0.2102986$.
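For reference, the fitted magnification bias with the fiducial values above can be evaluated directly:

```python
import numpy as np

# Fiducial fit parameters for the LSST 'gold' sample, as quoted in the text
s0, s1, s2 = 0.1405663, 0.30373535, 0.2102986

def s_lsst(z):
    """Fitted magnification bias s(z) = s0 * exp(s1*z + s2*z^2)."""
    return s0 * np.exp(s1 * z + s2 * z * z)

z = np.linspace(0.0, 2.5, 6)   # sample over the LSST redshift range
```

Since $s_1, s_2 > 0$, the fit is monotonically increasing over the survey's redshift range.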
DESI {#a:DESI}
----
The DESI (Dark Energy Spectroscopic Instrument) survey will be split into three types of galaxies: emission line galaxies (ELG), luminous red galaxies (LRG), and the bright galaxy sample (BGS), the distribution of which is shown in Fig. \[fig:desi\_number\_density\], with a total number of galaxies $N_\mathrm{tot} \sim 3.4 \times 10^7$.
![ Number density of the three types of galaxies for the DESI survey. []{data-label="fig:desi_number_density"}](data/figures/desi_number_density.pdf){width="70.00000%"}
Their galaxy biases are given by [@Aghamousa:2016zmz] $$\begin{aligned}
b_\text{ELG}(z) &=& 0.84\, D_1(z=0)/D_1(z)\,, \\
b_\text{LRG}(z) &=& 1.7\, D_1(z=0)/D_1(z)\,, \\
b_\text{BGS}(z) &=& 1.34\, D_1(z=0)/D_1(z)\,,\end{aligned}$$ where $D_1(z)$ denotes the growth factor.
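Assuming a flat $\Lambda$CDM background (with an illustrative $\Omega_m = 0.31$, which is our choice and not necessarily the paper's fiducial value), the growth factor can be computed from the standard integral $D_1(z) \propto H(z)\int_z^\infty dz'\, (1+z')/H(z')^3$, and the three biases evaluated as:

```python
import numpy as np

def growth_factor(z, om=0.31, z_max=200.0, n=20000):
    """Unnormalized linear growth factor D1(z) in flat LambdaCDM, from the
    standard integral D1 ~ H(z) * int_z^inf dz' (1+z')/H(z')^3.  om is an assumption."""
    E = lambda zz: np.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    zp = np.linspace(z, z_max, n)
    integrand = (1.0 + zp) / E(zp) ** 3
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zp))
    return E(z) * integral

def desi_bias(z, kind):
    """Galaxy bias b(z) = c * D1(0)/D1(z), with the constants quoted in the text."""
    c = {"ELG": 0.84, "LRG": 1.7, "BGS": 1.34}[kind]
    return c * growth_factor(0.0) / growth_factor(z)
```

Since $D_1$ decreases with redshift, all three biases grow with $z$, as expected for these tracers.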
We estimate the magnification bias for the three samples independently. For the emission line galaxies, we assume that DESI will mainly target OII galaxies. The OII luminosity function has been measured in Ref. [@Comparat:2014xza], and we use the best-fit parameters for the Schechter function reported in their Table 6. We estimate the ELG $K$-correction from Eq. , where $f_\text{log}(\lambda)$ is the typical rest-frame spectrum of an ELG galaxy and $T(\lambda)$ is the filter profile in a given band. The typical ELG spectrum and the filter profiles have been extracted from Figure 3.9 in Ref. [@Aghamousa:2016zmz], for the $grz$ optical filters. For all three filters, we find that the ELG $K$-correction is approximately proportional to the redshift, $K(z) \sim -0.1\,z$. We estimate the magnification bias for the three optical filters from Eq. , with magnitude limits $m_\text{lim} = 24, \,23.4,\, 22.5$ for the $g$, $r$, $z$ bands, respectively [@Aghamousa:2016zmz]. The resulting magnification bias does not depend on the filter.
The luminosity function of the LRG sample has been measured from data in Ref. [@Cool:2008nv] at $z = 0.15, 0.25, 0.35, 0.8$. We approximate the Schechter parameters by fitting the estimated luminosity function after passive evolution correction (Figure 9 and Table 1 in Ref. [@Cool:2008nv]). Note that the $K$-correction has already been applied.
$z$ $\alpha$ $M^*$ $\phi^* [h^3\,\mathrm{Mpc}^{-3}]$
------ ---------- -------- -----------------------------------
0.15 -0.170 -22.39 $2.4 \times 10^{-3}$
0.25 -1.497 -22.75 $3.9 \times 10^{-3}$
0.35 -1.593 -22.81 $3.1 \times 10^{-3}$
0.8 -3.186 -23.49 $1.1 \times 10^{-3}$
: Best fit parameters to the luminosity function for luminous red galaxies used to determine the magnification bias for DESI.
\[table:fit-lrg\]
In Table \[table:fit-lrg\] we show the values of the best-fit parameters. Since DESI will detect luminous red galaxies up to $z = 1$, we assume that the luminosity function for LRG does not evolve significantly between $z = 0.8$ and $z = 1$. DESI will detect LRG in the $r$, $z$ and $W1$ bands with different luminosity cuts [@Aghamousa:2016zmz]. Therefore, the magnification bias for the LRG catalogue will depend on the observed band. In our forecast, we use the $r$-band magnitude cut $m_\text{lim} = 23$.
The luminosity function for the bright galaxy sample is modelled following [@Blanton:2000ns], i.e. as a Schechter function with constant parameters $\alpha = -1.20$, $\phi^* = 1.46 \times 10^{-2} h^{3}\, \mathrm{Mpc}^{-3}$ and $M^* = -20.83$. The $K$-correction has been estimated from [@Blanton:2000ns], by fitting the measured $K$-correction in their Figure 4 to a linear relation. Assuming a typical galaxy color $(g^*-r^*) = 0.6$, we find $K(z) \sim 0.87 \,z$. The magnitude limit for this sample is $m_\text{lim} = 19.5$ [@Aghamousa:2016zmz].
![ Magnification bias for the three galaxy populations that will be detected with DESI: emission line galaxies (ELG), luminous red galaxies (LRG), bright galaxy sample (BGS), as well as the weighted (effective) magnification bias. []{data-label="fig:sz-galaxysamples"}](data/figures/desi_mag_bias.pdf){width="70.00000%"}
In Fig. \[fig:sz-galaxysamples\] we show the magnification bias for the three galaxy populations targeted by DESI. While for the LRG and BGS samples the magnification bias is a monotonic function of redshift, for the ELG sample it peaks at $z \simeq 0.9$ and decreases at larger redshifts.
As there is significant overlap in the redshift range of the different types of galaxies in the survey, their Fisher matrices cannot simply be added together, because they are not independent measurements. One way of combining the data would be to compute the cross-correlation between all the different galaxy types in an overlapping bin, including both even and odd multipoles for consistency. The total Fisher matrix would then include the covariance between the different populations. Instead, we opt for a simpler strategy, computing an effective galaxy and magnification bias for the entire sample in the survey. More precisely, the effective galaxy and magnification bias in a given redshift bin centered at $\bar z$ and with width $\Delta z$ are computed as a weighted sum over the different types of galaxies: $$\begin{aligned}
s_\text{eff}(\bar z) &=& \frac{\sum_i s_i(\bar z)\, N_i(\bar z, \Delta z)}{\sum_i N_i(\bar z, \Delta z)}\,, \\
b_\text{eff}(\bar z) &=& \frac{\sum_i b_i(\bar z)\, N_i(\bar z, \Delta z)}{\sum_i N_i(\bar z, \Delta z)}\,,
\label{e:DESIbias}\end{aligned}$$ where $N_i(\bar z, \Delta z)$ denotes the number of galaxies of type $i$ in that redshift bin detectable by the survey. Clearly, in a bin where only one type of galaxy has a nonzero number density, the effective bias reduces to the true bias of that particular type. As for LSST, in our forecasts we multiply Eq.  by a parameter $b_0$ with fiducial value $b_0=1$ that we let vary.
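The number-weighted effective bias in a single redshift bin can be sketched as follows; the per-type galaxy counts in the example are hypothetical.

```python
import numpy as np

def effective_bias(bias_per_type, counts_per_type):
    """Number-weighted effective bias in one redshift bin:
    b_eff = sum_i b_i N_i / sum_i N_i  (the same formula applies to s_eff)."""
    b = np.asarray(bias_per_type, dtype=float)
    N = np.asarray(counts_per_type, dtype=float)
    return float(np.sum(b * N) / np.sum(N))

# Example bin: ELG, LRG, BGS biases with hypothetical galaxy counts (BGS absent here)
b_eff = effective_bias([0.9, 1.8, 1.4], [2.0e6, 5.0e5, 0.0])
```

When only one population has nonzero counts in a bin, the weighted average reduces to that population's bias, which is the consistency property noted in the text.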
We explore two redshift binning configurations, with 5 and 8 bins respectively, which are detailed in Tables \[table:desi\_5bin\] and \[table:desi\_8bin\]. These bins have been chosen such that they contain a similar number of galaxies.
$\bar{z}_i$ 0.13 0.42 0.72 0.93 1.32
-------------- ------ ------ ------ ------ ------
$\Delta z_i$ 0.16 0.4 0.2 0.22 0.56
: DESI, 5 bin configuration
\[table:desi\_5bin\]
$\bar{z}_i$ 0.1 0.21 0.42 0.65 0.79 0.91 1.07 1.39
-------------- ----- ------ ------ ------ ------ ------ ------ ------
$\Delta z_i$ 0.1 0.1 0.32 0.16 0.1 0.14 0.2 0.42
: DESI, 8 bin configuration
\[table:desi\_8bin\]
SKA2 {#a:SKAII}
----
For the SKA2 survey, we consider two redshift binning strategies, with 8 and 11 redshift bins respectively, which are summarized in Tables \[table:ska2\_8bin\] and \[table:ska2\_11bin\], where $\Delta z_i$ again denotes the width of a redshift bin $i$ centered at a mean redshift $\bar z_i$. The first $N - 4$ redshift bins were constructed in such a way that the total number of galaxies per bin is the same in all of them, while the last 4 bins have equal width in redshift space, to avoid bins which are too wide, for which we would need to use a more general estimator, for instance the $\Xi_\ell$ estimator described in [@Tansella:2018sld].
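The equal-galaxy-count bins can be constructed by inverting the cumulative redshift distribution. The sketch below uses a toy $dN/dz$ (our assumption, not the SKA2 distribution) to illustrate the procedure.

```python
import numpy as np

def equal_count_edges(z, dndz, nbins):
    """Bin edges such that each bin contains (approximately) the same number of
    galaxies, obtained by inverting the cumulative distribution of dN/dz."""
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (dndz[1:] + dndz[:-1]) * np.diff(z))])
    cdf = cdf / cdf[-1]
    # invert the (monotonic) CDF at equally spaced quantiles
    return np.interp(np.linspace(0.0, 1.0, nbins + 1), cdf, z)

# Toy redshift distribution: dN/dz ~ z^2 exp(-z/0.5)
z = np.linspace(0.0, 2.0, 2001)
dndz = z ** 2 * np.exp(-z / 0.5)
edges = equal_count_edges(z, dndz, 4)
```

The resulting bins are narrow where the distribution peaks and wide in the tails, which is exactly the pattern visible in the $\Delta z_i$ rows of the tables below.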
$\bar{z}_i$ 0.22 0.35 0.48 0.66 0.92 1.23 1.54 1.85
-------------- ------ ------ ------ ------ ------ ------ ------ ------
$\Delta z_i$ 0.14 0.12 0.14 0.22 0.3 0.3 0.3 0.3
: SKA2, 8 bin configuration
\[table:ska2\_8bin\]
$\bar{z}_i$ 0.2 0.28 0.36 0.44 0.53 0.64 0.79 1.03 1.31 1.58 1.86
-------------- ----- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
$\Delta z_i$ 0.1 0.08 0.08 0.08 0.1 0.12 0.2 0.28 0.28 0.28 0.28
: SKA2, 11 bin configuration
\[table:ska2\_11bin\]
The other specifications for the survey follow [@Villa_2018] and [@Bull_2016], which we repeat here for completeness:
$$\begin{aligned}
f_\mathrm{sky} &= 0.73\, ,
\\
b(z) &= C_1 \exp{(C_2 z)}\, ,\label{e:biasSKA}
\\
s(z) &= s_0 + s_1 z + s_2 z^2 + s_3 z^3\, ,\label{e:magSKA2}\end{aligned}$$
with $s_0 = -0.106875$, $s_1 = 1.35999$, $s_2 = -0.620008$, and $s_3 = 0.188594$, as well as $C_1 = 0.5887$ and $C_2 = 0.8130$. The galaxy and magnification bias for SKA2 are shown in Fig. \[fig:ska2\_biases\], while the number density is shown in Fig. \[fig:ska2\_number\_density\]. The total number of galaxies observed is predicted to be $N_\mathrm{tot} \sim 9 \times 10^8$.
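For reference, the two fits with the coefficients quoted above can be evaluated directly:

```python
import numpy as np

# SKA2 galaxy bias and magnification bias fits, with the coefficients from the text
C1, C2 = 0.5887, 0.8130
s_coeff = [-0.106875, 1.35999, -0.620008, 0.188594]   # s0, s1, s2, s3

def b_ska2(z):
    """Galaxy bias b(z) = C1 * exp(C2 * z)."""
    return C1 * np.exp(C2 * z)

def s_ska2(z):
    """Magnification bias s(z) = s0 + s1 z + s2 z^2 + s3 z^3."""
    return sum(c * z ** n for n, c in enumerate(s_coeff))
```

Note that $s(0) < 0$ for this fit, so the magnification contribution changes sign at low redshift.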
![ The galaxy bias (left panel) and magnification bias (right panel) for SKA2 used in the analysis. []{data-label="fig:ska2_biases"}](data/figures/ska2_galaxy_bias.pdf){width="\linewidth"}
![ The galaxy bias (left panel) and magnification bias (right panel) for SKA2 used in the analysis. []{data-label="fig:ska2_biases"}](data/figures/ska2_mag_bias.pdf){width="\linewidth"}
![ The number density of galaxies for SKA2. []{data-label="fig:ska2_number_density"}](data/figures/ska2_number_density.pdf){width="0.65\linewidth"}
Fisher matrix analysis for a photometric survey {#ap:photo-alone}
===============================================
In this Appendix we present a Fisher matrix analysis for the LSST galaxy survey. We want to address the following questions:
1. How accurately will LSST estimate cosmological parameters?
2. What is the impact of neglecting lensing magnification on the estimation of cosmological parameters and their errors?
3. Will the survey be able to measure the lensing potential with high significance?
The specifics assumed for the LSST galaxy survey have been described in Appendix \[a:LSST\]. We consider 3 binning configurations in the redshift range $z \in [0, 2.5]$:
- 5 redshift bins, equally spaced with width $\Delta z = 0.5$.
- 8 redshift bins, equally spaced with width $\Delta z =0.3125$.
- 12 redshift bins, with width $\Delta z = 0.1 (1+z)$. This configuration is the optimal binning for LSST, i.e. the half-width of the bins is the expected photometric redshift uncertainty for this galaxy sample.
The angular power spectra for the estimation of the derivatives and the covariance have been computed using the Cosmic Linear Anisotropy Solving System ([class]{}) code [@class1; @class2; @CLASSgal]. Since we are modelling a photometric survey, we used a Gaussian window function to model the redshift binning. The non-linearity scale $d_\mathrm{NL}(z)$ of Eq.  translates into a redshift-dependent maximal multipole, $\ell_{\max}(z)$, given by $$\ell_{\max}(z) = k_\mathrm{NL}(z)\, r(z) = \frac{r(z)}{d_\mathrm{NL}(z)}\,,$$ where $r(z)$ is the comoving distance to redshift $z$. In Fig. \[fig:lsst-lmax\] we plot $\ell_{\max}(z)$ as a function of redshift; this is the maximum multipole of the angular power spectra used in our analysis.
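A rough numerical sketch of this multipole cut, assuming $\ell_{\max}(z) \approx k_\mathrm{NL}(z)\, r(z) = r(z)/d_\mathrm{NL}(z)$ with the footnote's scaling $d_\mathrm{NL}(z) = d_\mathrm{NL}(0)/(1+z)^{2/3}$; the values $\Omega_m = 0.31$ and $d_\mathrm{NL}(0) = 30\,\mathrm{Mpc}/h$ are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def comoving_distance(z, om=0.31, n=10000):
    """r(z) in Mpc/h for flat LambdaCDM (om is an assumption): r = int_0^z c dz'/H."""
    zp = np.linspace(0.0, z, n)
    E = np.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    # c/H0 = 2997.9 Mpc/h
    return 2997.9 * np.sum(0.5 * (1.0 / E[1:] + 1.0 / E[:-1]) * np.diff(zp))

def ell_max(z, d_nl0=30.0):
    """l_max(z) ~ r(z)/d_NL(z), with d_NL(z) = d_NL(0)/(1+z)^{2/3} following the
    footnote scaling; d_nl0 (Mpc/h) is an illustrative value."""
    d_nl = d_nl0 / (1.0 + z) ** (2.0 / 3.0)
    return comoving_distance(z) / d_nl
```

Both the growing comoving distance and the shrinking non-linearity scale push $\ell_{\max}$ up with redshift, reproducing the monotonic rise seen in Fig. \[fig:lsst-lmax\].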
![The maximal multipole, $\ell_{\max}$ for which linear perturbation theory is applicable. []{data-label="fig:lsst-lmax"}](data/figures/lmax_fixed.pdf){width="0.6\linewidth"}
Impact of lensing on parameter constraints and best-fit estimation. {#s:LSST_constraints}
-------------------------------------------------------------------
We consider a $\Lambda$CDM cosmology + a bias parameter $b_0$ with fiducial value $b_0 = 1$, i.e. the galaxy bias in each redshift bin is $b_\text{gal}(z) = b_0 \times b_\text{LSST}(z)$.
$N_\text{bins}$ $h$ $\Omega_\text{baryon}$ $\Omega_{\rm cdm}$ $\ln{10^{10}\, A_s}$ $n_s$ $b_0$
------------------------------------------------------- ------------ ------------------------ -------------------- ---------------------- ------------ ------------
5 bins - $1\sigma_\text{lens}\,[\%]$ $6.2\, \%$ $8.5\, \%$ $1.3\, \%$ $3.6\, \%$ $3.0\, \%$ $2.4\, \%$
5 bins - $\sigma_\text{lens}/\sigma_\text{no-lens} $ 0.996 0.994 1.004 0.275 0.991 0.119
8 bins - $1\sigma_\text{lens}\,[\%]$ $3.0\, \%$ $3.4\, \%$ $0.5\, \%$ $2.3\, \%$ $1.7\, \%$ $2.4\, \%$
8 bins - $\sigma_\text{lens}/\sigma_\text{no-lens} $ 0.993 0.998 1.011 0.351 0.999 0.240
12 bins - $1\sigma_\text{lens}\,[\%]$ $0.7\, \%$ $2.4\, \%$ $0.2\, \%$ $1.6\, \%$ $0.5\, \%$ $2.4\, \%$
12 bins - $\sigma_\text{lens}/\sigma_\text{no-lens} $ 1.000 1.000 0.998 0.325 0.998 0.321
: $1\sigma$ relative error (in percent) estimated including lensing in our theoretical model (odd rows) and ratio between $1\sigma$ error estimated including and neglecting lensing in our model (even rows) for three different binning configurations. We include in the analysis the standard $\Lambda$CDM parameters and galaxy bias parameter.
\[table:lsst-error-st\]
We estimate the Fisher matrix and the constraints on cosmological parameters for two theoretical models:
1. a model that consistently includes lensing in the galaxy number counts
2. a model that neglects lensing in the galaxy number counts
In Table \[table:lsst-error-st\] we show the $1\sigma$ uncertainty on cosmological parameters when lensing is included in the theoretical model (rows 1, 3 and 5 for the 5, 8 and 12 bin configurations respectively) and the ratio between the constraints estimated when including or neglecting lensing (rows 2, 4 and 6).
The errors on the best-fit parameters decrease significantly as the number of bins increases. In fact, for large redshift bins, both the density and the RSD contributions to the angular power spectra are highly suppressed. Therefore, we expect an optimal analysis to adopt the largest number of bins allowed by the redshift resolution. In the optimal 12 bin configuration, we expect LSST to measure standard cosmological parameters at the percent/sub-percent level. In all configurations, including lensing significantly improves the constraints on $A_s$ and on the galaxy bias $b_0$. The errors are largest for the 5 bin configuration and significantly smaller for the 12 bin one. This is due to the fact that for the 5 and 8 bin configurations the RSD contribution to the angular power spectra is highly suppressed and, therefore, the amplitude of the primordial power spectrum is strongly degenerate with the galaxy bias. Indeed, the density contribution is only sensitive to the combination $b^2_0 A_s$, but not to each of the parameters separately. Including lensing in the analysis helps to break this degeneracy, since the lensing-lensing correlation is sensitive to $A_s$, whereas the density-lensing correlation is sensitive to $b_0 A_s$. This is clearly visible from Table \[table:lsst-error-st\], where we see that the errors on $b_0$ increase by a factor of 8 (5 bins), 4 (8 bins) and 3 (12 bins) when lensing is neglected in the analysis. The improvement decreases as the number of bins increases because RSD contribute more for thin redshift bins and help to break the degeneracy between $A_s$ and $b_0$; the impact of lensing magnification on the parameter constraints is therefore smaller in this case. See also [@Cardona:2016qxn] for a discussion of these points.
![Constraints on parameters for LSST with 12 redshift bins where: both the signal and the model have no lensing (black contours), lensing is included in the signal and in the model and the magnification bias parameter $s(z)$ is fixed (blue contours); and lensing is included in the signal and in the model and the magnification bias parameter $s(z)$ is modelled by four free parameters that are marginalized over (red contours). []{data-label="fig:lsst-fisher-sbias-margin"}](data/figures/LSST_sbias_marg_nolens){width="\linewidth"}
We study how this improvement changes if, instead of fixing $s(z)$, we model it with three parameters, as proposed in Eq. , and let these parameters vary in our Fisher analysis. In Fig. \[fig:lsst-fisher-sbias-margin\] we compare the constraints on the parameters in this case. To ease the comparison of the error bars, we have removed the shift in the case “without lensing”, i.e. in that case we assume that there is no lensing in the signal and no lensing in the modelling.
$N_\text{bins}$ $h$ $\Omega_\text{baryon}$ $\Omega_{\rm cdm}$ $\ln{10^{10}\, A_s}$ $n_s$ $b_0$
----------------- --------- ------------------------ -------------------- ---------------------- -------- ---------
5 bins $-0.18$ $-0.16$ $1.68$ $-0.17$ $0.17$ $0.13$
8 bins $0.01$ $-0.22$ $1.56$ $0.66$ $0.79$ $-0.69$
12 bins $0.49$ $0.24$ $0.077$ $1.19$ $1.31$ $-1.28$
: Shift in the best-fit parameters that we expect if lensing is neglected in the theoretical model for LSST, in units of $1\sigma$. In the analysis we include the standard $\Lambda$CDM parameters and a galaxy bias parameter $b_0$.
\[table:lsst-shift\]
In Table \[table:lsst-shift\] we report the shifts of the best-fit parameters (in units of $1\sigma$) due to neglecting lensing in the theoretical model. The parameters most affected, and the size of the shifts, depend strongly on the number of redshift bins. For 5 bins, $\Omega_{\rm cdm}$ experiences the largest shift, whereas for 12 bins it is $n_s$ which is most shifted. For all configurations, we see that a positive shift in $b_0$ requires a negative shift in $A_s$ and vice versa. This is due to the fact that the amplitude of the density term is given by $b^2_0A_s$.
For the optimal 12 bin analysis, $A_s$, $n_s$ and $b_0$ are all significantly shifted. Note, however, that shifts of order $1\sigma$ and larger cannot be trusted, since our analysis gives the first term of a series expansion in $\Delta\theta_i/\sigma_i$, which is only reliable if $|\Delta\theta_i/\sigma_i|\ll 1$. Nevertheless, our findings show that neglecting lensing in LSST is not a valid option if we want to reach reliable percent or sub-percent accurate cosmological parameters.
Detection of the lensing potential with LSST {#s:LSST_AL}
--------------------------------------------
We extend the standard $\Lambda$CDM model adopted in the previous section by adding an extra parameter: the amplitude of the lensing potential $A_\text{L}$, which multiplies the lensing term and whose fiducial value in General Relativity is $A_\text{L} = 1$. The results are shown in Table \[table:lsst\_errors\] and Fig. \[fig:lsst-fisher-12bins-AL\]. We see that increasing the number of redshift bins strongly decreases the error on $A_{\rm L}$. For the optimal 12 bin configuration, $A_{\rm L}$ can be measured with an accuracy of $7.7\%$. Note that this result assumes that we know $s(z)$ perfectly well.
$N_\text{bins}$ $h$ $\Omega_\text{baryon}$ $\Omega_\text{cdm}$ $\ln{10^{10}\, A_s}$ $n_s$ $b_0$ $A_\mathrm{L}$
----------------- ------------ ------------------------ --------------------- ---------------------- ------------ ------------- ----------------
5 bins $6.2\, \%$ $8.5\, \%$ $1.3\, \%$ $13.1\, \%$ $3.0\, \%$ $20.1\, \%$ $20.2\, \%$
8 bins $3.0\, \%$ $3.4\, \%$ $0.5\, \%$ $6.5\, \%$ $1.7\, \%$ $9.9\, \%$ $10.2\, \%$
12 bins $0.7\, \%$ $2.4\, \%$ $0.2\, \%$ $4.7\, \%$ $0.5\, \%$ $7.3\, \%$ $7.7\, \%$
: $1\sigma$ relative error (in percent) for standard $\Lambda$CDM parameters + galaxy bias + amplitude of the lensing potential from LSST. We compare different binning configurations.
\[table:lsst\_errors\]
Comparing Table \[table:lsst\_errors\] with Table \[table:lsst-error-st\], we see that adding $A_{\rm L}$ as a free parameter significantly degrades the constraints on $A_s$ and $b_0$ for all configurations. For the 12 bin configuration, the degradation is however less severe. This can be understood from the fact that, for a small number of bins, adding $A_{\rm L}$ worsens the degeneracy between $A_s$ and $b_0$, since in the lensing contribution $A_{\rm L}$ is degenerate with $A_s$. For 12 bins, RSD partially break the degeneracy between $A_s$ and $b_0$. This in turn helps to break the degeneracy between $A_{\rm L}$ and $A_s$ in the lensing contribution.
parameter $h$ $A_\mathrm{L}$ $\ln 10^{10} A_s$ $n_s$ $\Omega_\mathrm{baryon}$ $\Omega_\mathrm{cdm}$
-------------- ------- ---------------- ------------------- ------- -------------------------- -----------------------
SKA2 11 bins 1.259 5.462 0.699 1.046 1.317 0.372
LSST 12 bins 0.733 7.708 4.748 0.522 2.341 0.241
: Constraints on $\Lambda$CDM parameters + lensing amplitude (in percent), with marginalization over the galaxy bias (see Fig. \[fig:lsst-fisher-12bins-AL\])
\[table:ska2\_lsst\_errors\]
As discussed in the main text, the multipoles of the correlation function can also be used to measure the amplitude of the lensing potential, $A_{\rm L}$. From Table \[table:ska2\_lsst\_errors\], we see that $A_{\rm L}$ is better measured with SKA2 than with LSST, even though the lensing contribution is more important in LSST. This is due to the fact that RSD in SKA2 help to break the degeneracy between $A_{\rm L}$, $A_s$ and $b_0$.
![Contour plot for the Fisher analysis (SKA vs. LSST) which includes an extra parameter for the amplitude of the lensing potential.[]{data-label="fig:lsst-fisher-12bins-AL"}](data/figures/ska_vs_lsst_galaxy_bias_marginalized){width="\linewidth"}
Combining angular power spectra and correlation function {#s:cross}
========================================================
As described in the main text, we also perform an analysis where we combine the multipoles of the correlation function from SKA2 in each redshift bin with the cross-correlations of the $C_\ell$'s between different redshift bins. We include lensing in the $C_\ell$'s but not in the correlation function. The main results are presented and discussed in detail in Section \[s:main\_result\]. In Fig. \[fig:ska2\_with\_cross\] we compare the constraints and shifts when we consider the correlation function of SKA2 alone, and when we combine it with the cross-correlation $C_\ell$'s between different redshift bins. We see that adding the $C_\ell$'s has almost no impact on the constraints or the shifts.
In Fig. \[fig:ska-lsst-fisher-89\] and Table \[table:ska2\_lsst\_errors\_bias\], we compare the constraints from SKA2 alone with the combined constraints from SKA2 and the $C_\ell$'s in LSST. Contrary to the results presented in the main text, we fix here the value of the bias, $b_0=1$. Comparing with Table \[table:comparison\_surveys\_constraints\], we see that fixing the bias has almost no impact on the constraints or on the shifts. This is due to the fact that in SKA2, RSD are strong enough to break the degeneracy between $A_s$ and $b_0$, and both are very well measured in these surveys.
![ The constraints on cosmological parameters from SKA2 using the correlation function (blue) and SKA2 from the correlation function + $C_\ell$’s for the cross-correlations (red) using a configuration with 11 redshift bins as described in \[a:SKAII\]. The shifts of the contours are computed neglecting lensing from the correlation function, and consistently including it in the angular power spectrum. The black lines on the diagonal plots denote the fiducial values. []{data-label="fig:ska2_with_cross"}](data/figures/ska2_with_without_cross_bins11.pdf){width="\linewidth"}
![ Contour plot for the Fisher analysis (SKA2 8 bins vs. SKA2 8 bins + LSST 9 bins) for $\Lambda$CDM parameters. The black solid lines denote fiducial values. The difference with respect to Fig. \[fig:ska2\_with\_lsst\] is that here the galaxy bias has been fixed rather than marginalized over; the resulting constraints are virtually identical. []{data-label="fig:ska-lsst-fisher-89"}](data/figures/ska2_vs_ska2_plus_lsst_8bins_lcdm_FINAL){width="\linewidth"}
parameter $h$ $\ln 10^{10} A_s$ $n_s$ $\Omega_\mathrm{baryon}$ $\Omega_\mathrm{cdm}$
--------------------------- -------- ------------------- -------- -------------------------- -----------------------
SKA2 8 bins 1.255 0.675 1.036 1.309 0.356
$\Delta / \sigma$ 0.199 -0.169 -0.602 0.316 -0.181
SKA2 8 bins + LSST 9 bins 1.048 0.558 0.833 1.124 0.293
$\Delta / \sigma$ -0.096 0.165 -0.282 0.023 -0.423
: Constraints (in percent) and shift, for SKA2 and SKA2 + LSST, for the standard $\Lambda$CDM parameters, [when the biases are fixed to their fiducial value $b_0=1$]{} (see also Fig. \[fig:ska-lsst-fisher-89\])
\[table:ska2\_lsst\_errors\_bias\]
[^1]: This can be obtained by considering the evolution of $\delta$: in linear theory, $\delta(k, z) \sim D_1(z) \delta(k, 0) \sim \delta(k, 0) / (1 + z)$, hence $P(k, z) \sim P(k, 0) / (1 + z)^2$. On the other hand, in linear theory we also know that $P(k) \sim k^{-3}$ for large values of $k$, and thus combining these two expressions we obtain $k_\mathrm{NL} \sim (1 + z)^{2/3}$, and from $k_\mathrm{NL} \sim 1 / d_\mathrm{NL}$ we obtain the scaling behaviour as described in the text.
[^2]: We also consider the cross-correlations of these high bins with the ones common to LSST and SKA2.
[^3]: Note that we have verified that, for SKA2, neglecting lensing or parametrizing the magnification bias $s_\mathrm{SKA2}(z)$ and then marginalizing over it has almost no impact on the constraints.
[^4]: <http://intensitymapping.physics.ox.ac.uk/Codes/ULS/photometric/>
|
---
abstract: |
[*We show that all knots up to $6$ crossings can be represented by polynomial knots of degree at most $7$, all of which, except for $5_2, 5_2^*, 6_1, 6_1^*, 6_2, 6_2^*$ and $6_3$, are in their minimal degree representation. We provide concrete polynomial representations of all these knots. Durfee and O’Shea had asked: is there any $5$ crossing knot in degree $6$? In this paper we partially answer this question. For an integer $d\geq2$, we define the set $\pdt$ to be the set of all polynomial knots given by $t\mapsto\fght$ such that $\deg(f)=d-2$, $\deg(g)=d-1$ and $\deg(h)=d$. This set can be identified with a subset of $\rtd$ and is thus equipped with the natural topology coming from the usual topology of $\rtd$. In this paper we determine a lower bound on the number of path components of $\pdt$ for $d\leq 7$. We define path equivalence between polynomial knots in the space $\pdt$ and show that path equivalence is stronger than topological equivalence.*]{}
[**Keywords:**]{}[ double points, crossing data, path equivalence]{}\
[**AMS Subject Classification: 57M25, 57Q45**]{}.
*2000 Mathematics Subject Classification: Primary 57M25; Secondary 14P25*.
---
Introduction {#sec1}
============
The idea of representing a long knot by polynomial embeddings was discussed by Arnold [@va1]. Later, as an attempt to settle a long-standing conjecture of Abhyankar [@ssa] in algebraic geometry, Shastri [@ars] proved that every long knot is ambient isotopic to an embedding given by $t\mapsto\fght$, where $f, g$ and $h$ are real polynomials. Embeddings of this kind are referred to as polynomial knots.
In his paper Shastri produced a choice of very simple polynomials $f, g$ and $h$ to represent the [*trefoil knot*]{} and the [*figure eight knot*]{}. He hoped that once more examples became available to represent various knot types, the conjecture of Abhyankar might be solved. This motivated the study of polynomial knots in a more rigorous and constructive manner. Explicit examples were constructed to represent a few classes of knots, such as [*torus knots*]{} (see [@rm3] and [@rs2]) and [*two bridge knots*]{} (see [@pm1] and [@pm4]). To keep the polynomials as simple as possible, the notions of degree sequence and minimal degree sequence were introduced, where minimality is with respect to the lexicographic order in $\mathbb{N}^3$. In this respect, minimizing the degree sequence of a knot became a central concern.
Around the same time, Vassiliev [@va2] studied and discussed the topology of the space $\vd$ for $d\in\mathbb{N}$, where $\vd$ is the space (with a natural topology coming from $\mathbb{R}^{3d-3}$) of all polynomial knots $t\mapsto\fght$ such that $f, g$ and $h$ are monic polynomials of degree $d$ without constant terms. Later, Durfee and O’Shea [@do] studied the space $\kd$ for $d\in\mathbb{N}$, where $\kd$ is the space of all polynomial knots $t\mapsto\fght$ such that the highest of the degrees of $f, g$ and $h$ is exactly $d$. For a nontrivial polynomial knot in $\kd$, by composing it with a suitable orientation preserving linear transformation, we get a polynomial knot $\phi=\fgh$ such that all the component polynomials have degree $d$. On the other hand, if $\phi=(f,g,h)$ is a polynomial knot whose components all have the same degree $d$, then by composing $\phi$ with a linear transformation of the form $\xyz\mapsto (x-\alpha z, y-\beta z, z)$ (for suitable real numbers $\alpha$ and $\beta$), we get a polynomial knot $t\mapsto\big(f_1(t), g_1(t), h(t)\big)$ with $\deg(f_1)$ and $\deg(g_1)$ at most $d-1$. Composing further with a linear transformation of the type $(x,y,z)\mapsto (x-\gamma y, y, z)$ gives a polynomial knot $t\mapsto\big(f_2(t), g_1(t), h(t)\big)$ with $\deg(f_2)$ at most $d-2$. These transformations are orientation preserving, so the polynomial knot obtained from the compositions is topologically equivalent to the original one. Thus, if a nontrivial polynomial knot belongs to the space $\kd$ for $d\geq1$, it is equivalent to a polynomial knot with degree sequence $(d_1,d_2,d_3)$ such that $d_1<d_2<d_3\leq d$.
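The degree-reducing compositions described above act linearly on coefficient vectors, so they are easy to carry out explicitly. The sketch below works with hypothetical degree-$5$ component polynomials; the helper names are our own.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def reduce_degrees(f, g, h):
    """Apply the shears (x,y,z) -> (x - a z, y - b z, z) and (x,y,z) -> (x - c y, y, z)
    to kill the leading coefficients of f and then of the modified f, as in the text.
    f, g, h are coefficient arrays (low to high), all of the same degree d."""
    d = len(h) - 1
    a, b = f[d] / h[d], g[d] / h[d]
    f1 = np.asarray(P.polysub(f, a * np.asarray(h)))   # deg(f1) <= d-1
    g1 = np.asarray(P.polysub(g, b * np.asarray(h)))   # deg(g1) <= d-1
    f1 = np.pad(f1, (0, d + 1 - len(f1)))              # polysub trims trailing zeros
    g1 = np.pad(g1, (0, d + 1 - len(g1)))
    c = f1[d - 1] / g1[d - 1] if g1[d - 1] != 0 else 0.0
    f2 = np.asarray(P.polysub(f1, c * g1))             # deg(f2) <= d-2
    return f2, g1, h

def degree(p, tol=1e-12):
    nz = np.nonzero(np.abs(np.asarray(p)) > tol)[0]
    return int(nz[-1]) if len(nz) else -1

# Hypothetical example: three degree-5 component polynomials (coefficients low to high)
f = np.array([0.0, -3.0, 0.0, 1.0, 2.0, 1.0])
g = np.array([0.0, 0.0, -4.0, 0.0, 1.0, 3.0])
h = np.array([0.0, -10.0, 0.0, 0.0, 0.0, 1.0])
f2, g1, h = reduce_degrees(f, g, h)
```

Since both shears are orientation preserving linear maps of $\mathbb{R}^3$, the knot type is unchanged while the degree sequence drops to $(d-2, d-1, d)$ or lower.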
For a particular knot type, determining a polynomial representation of least degree is still an unsolved problem. Another important question is: given a positive integer $d$, how many knots can be realized as polynomial knots in degree $d$? It can be seen that for $d\leq 4$ there is only one such knot, namely the unknot. Three nonequivalent knots can be realized for $d=5$, namely the unknot, the right hand trefoil and the left hand trefoil. Note that if a knot is realized in degree $d$, then it can be realized in every degree higher than $d$. In degree $6$ we find one additional knot, namely the figure eight knot. In this connection Durfee and O’Shea asked: are there any $5$-crossing knots in degree $6$? We note that there are only two knots with $5$ crossings, denoted $5_1$ and $5_2$ in Rolfsen’s table. Using a knot invariant known as the [*superbridge index*]{}, we can prove that $5_1$ cannot be represented in degree $6$. For the knot $5_2$ the superbridge index is not known. We show that there exists a projection of the $5_2$ knot given by $t\mapsto\fgt$ with $\deg(f)=4$ and $\deg(g)=5$, but that for a generic choice of the coefficients of the polynomials $f$ and $g$ the knot $5_2$ has no polynomial representation in degree $6$. We conjecture that there are no $5$-crossing knots in degree $6$. This will become clearer once the superbridge indices of all the possible $3$-superbridge knots in the list of Jin and Jeon [@jj1] are completely known. We show that all $5$-crossing knots and all $6$-crossing knots (including the composite knots) are realized in degree $7$. 
We look at the spaces $\pd$ and $\pdt,$ where $\pd$ is the space of all polynomial knots $t\mapsto\fght$ with $\deg(f)<\deg(g)<\deg(h)\leq d$ and $\pdt$ is the space of all polynomial knots $t\mapsto\fght$ such that $\deg(f)=d-2,\,\deg(g)=d-1$ and $\deg(h)=d$. We define two polynomial knots in these spaces to be path equivalent if they belong to the same path component of that space.
This paper is organized as follows: In section \[sec2\], we discuss polynomial knots and introduce the spaces $\pd$ and $\pdt.$ We prove a few relevant results in connection with the polynomial representation of knots with given crossing information. At the end of section \[sec2\], we show that for a generic choice of a regular projection $t\mapsto\fgt$ of the knot $5_2$ with $\deg(f)=4$ and $\deg(g)=5$ there does not exist a polynomial $h$ of degree $6$ such that $t\mapsto\fght$ is its polynomial representation, which partially answers the question asked by Durfee and O’Shea.
We divide section 3 into five subsections. In section \[sec3.1\], we discuss the topology of the spaces $\pd$ and $\pdt$ for $d\geq 2$. In section \[sec3.2\], we estimate the path components of $\pd$ and $\pdt$ for $d\leq 4$. Sections \[sec3.3\], \[sec3.4\] and \[sec3.5\] are devoted to estimating the path components of $\pdt$ for $d=5,6$ and $7$ respectively. We also provide polynomial knots belonging to each path component for each knot type, and at the end of each subsection we summarize the number of path components in the form of a table. We conclude the paper in section \[sec4\] by making a few remarks about the spaces $\pdt$ for $d>7$ and discussing how different topologies on the set $\spk$ of all polynomial knots can be given using different stratifications and how they affect the path components of the resulting space.
Polynomial knots {#sec2}
================
A [long knot]{} is a proper smooth embedding $\phi:\ro\to\rt$ such that the map $t\mapsto\left\|\phi(t)\right\|$ is strictly monotone outside some closed interval of the real line and $\left\|\phi(t)\right\|\to\infty$ as $\lvert\hp t\hp\rvert\to\infty$.
Using the stereographic projection $\pi:\st\setminus\{(0,0,0,1)\}\to\rt$, we can identify the one point compactification of $\rt$ with $\st$. Thus, by this identification, any long knot $\phi:\ro\to\rt$ has a unique extension as a continuous embedding $\tilde{\phi}:\so\rightarrow \st$ which takes the north pole of $\so$ to the north pole of $\st$. The map $\tilde{\phi}$ is a tame knot and it is smooth everywhere except possibly at the north pole where it may have an algebraic singularity (see [@do], proposition 1).
\[def2\] Two long knots $\phi,\hf\psi:\ro\to\rt$ are said to be [topologically equivalent (simply, equivalent)]{} if there exist orientation preserving diffeomorphisms $F:\ro\to\ro$ and $H:\rt\to\rt$ such that $\psi=H\circ\phi\circ F$.
A [diffeotopy]{} (respectively, [homeotopy]{}) of $\rt$ is a continuous map $H:\zo\times\rt\to\rt$ such that: (i) $H_s=H(s,\hf\cdot\hf)$ is a diffeomorphism (respectively, homeomorphism) of $\rt$ for each $s\in\zo$ and (ii) $H_0$ is the identity map of $\rt$.
\[def4\] Two long knots $\tau,\sigma:\ro\to\rt$ are said to be [ambient isotopic]{} if there exists a diffeotopy $H:\zo\times\rt\to\rt$ such that $\sigma=H_1\circ\tau$.
For classical knots as tame embeddings of $\so$ in $\st$, the notions in definitions \[def2\] and \[def4\] can be defined analogously using orientation preserving self-homeomorphisms of $\so$ and $\st$ and homeotopies of the ambient space $\st$. Using standard results in topology, the following proposition is easy to prove.
\[thm1\] For long knots $\phi, \psi:\ro\to\rt$, the following statements are equivalent: i) the knots $\phi$ and $\psi$ are equivalent; ii) the knots $\phi$ and $\psi$ are ambient isotopic; iii) the extensions $\tilde{\phi}:\so\rightarrow\st$ and $\tilde{\psi}:\so\rightarrow\st$ are equivalent; iv) the extensions $\tilde{\phi}$ and $\tilde{\psi}$ are ambient isotopic.
A [polynomial map]{} is a map $\phi:\ro\to\rt$ whose component functions are univariate real polynomials.
A [polynomial knot]{} is a polynomial map which is an embedding.
A polynomial knot is a long knot. It has been proved (see [@ars]) that each long knot is topologically equivalent to some polynomial knot. Thus, each tame knot $\kappa:\so\to\st$ is ambient isotopic to the extension $\tilde{\phi}:\so\to \st$ of some polynomial knot $\phi:\ro\to\rt$.
A polynomial map $\phi=\fgh$ is said to have a [degree sequence $(\hs d_1, d_2, d_3\hs)$]{} if $\deg(f)=d_1,\, \deg(g)=d_2$ and $\deg(h)=d_3$.
The [polynomial degree]{} of a polynomial map $\varphi:\ro\to\rt$ is the maximum of the degrees of its component polynomials.
By composing with an orientation preserving tame[^1] polynomial automorphism of $\rt$, a nontrivial polynomial knot $\phi$ with degree $d$ acquires the form $\sigma=\fgh$ such that $\fghio$ and none of the degrees lies in the semigroup generated by the other two (see [@do], section 5). For a sufficiently small $\varepsilon>0$, by adding $\varepsilon\hp t^{d-2},\,\varepsilon\hp t^{d-1}$ and $\varepsilon\hp t^d$ to the respective components, one can make the degrees of $f,\,g$ and $h$ respectively equal to $d-2,\,d-1$ and $d$ without changing the topological type of the knot. In other words, each polynomial knot $\phi$ of degree $d$ is topologically equivalent to a polynomial knot $\varsigma=\fghp$ with $\deg(f')=d-2,\,\deg(g')=d-1$ and $\deg(h')=d$.
For an arbitrary but fixed positive integer $d\geq 2$, consider the set $\ad$ of all polynomial maps $\fgh:\ro\to\rt$ with $\deg(f)\leq d-2$, $\deg(g)\leq d-1$ and $\deg(h)\leq d$. A typical element of this set is a map $t\mapsto\abct$, where the $a_i$’s, $b_i$’s and $c_i$’s are real numbers. The set $\ad$ can be identified with $\rtd$ and so it has a natural topology which comes from the usual topology of $\rtd$. Let $\pd$ be the set of all polynomial knots $\sigma=\fgh$ with $\fghio$ and let $\pdt$ be the set of all polynomial knots $\varsigma=\fghp$ with $\deg(f')=d-2,\,\deg(g')=d-1$ and $\deg(h')=d$. Both $\pd$ and $\pdt$ are proper subsets of $\ad$; therefore, they carry subspace topologies which come from the topology of $\ad$. In other words, the spaces $\pd$ and $\pdt$ can be thought of as topological subspaces of $\rtd$ through the natural identification. Also, we may think of the elements of the spaces $\pd$ and $\pdt$ as ordered $3d$-tuples of real numbers.
Note that $\spm\subsetneq\pn$, $\pnt\subsetneq\pn$ and $\pmt\nsubseteq\pnt$ for all $n>m\geq2$.
\[rem1\] For any $n>m$ and any $\phi=\fgh$ in $\pmt$, there exists a sufficiently small $\varepsilon>0$ such that a polynomial knot $\phi_\varepsilon\in\pnt$ given by $t\mapsto\big(\hp\varepsilon\hf t^{n-2}+f(t),\hs\varepsilon\hf t^{n-1}+g(t),\hs\varepsilon\hf t^n+h(t)\hp\big)$ is topologically equivalent to the polynomial knot $\phi$.
An ambient isotopy class $[\kappa]$ of a tame knot $\kappa:\so\to\st$ is said to have a [polynomial representation in degree $d$]{} if there is a polynomial knot $\phi:\ro\to\rt$ of degree $d$ such that its extension $\tilde{\phi}:\so\to\st$ is ambient isotopic to $\kappa$. In this case, the polynomial knot $\phi$ is called a [polynomial representation]{} of the knot-type $[\kappa]$.
A knot-type $[\kappa]$ is said to have [polynomial degree $d$]{} if it is the least positive integer such that there exists a polynomial knot $\phi$ of degree $d$ representing the knot-type $[\kappa]$. In this case, the polynomial knot $\phi$ is called a [minimal polynomial representation]{} of the knot-type $[\kappa]$.
\[rem4\] Obviously, the polynomial degree of a knot-type is a knot invariant.
\[rem5\] If a knot-type $[\kappa]$ is represented by a polynomial knot $\phi=\fgh$, then the knots $\mifgh,\, \fmgh,\, \fgmh$ and $\minusfgh$ represent the knot-type $[\kappa^*]$ of the mirror image of $\kappa$. Thus, a knot and its mirror image have the same polynomial degree. This says that the polynomial degree cannot detect the chirality of knots.
Certain numerical knot invariants can be inferred from the polynomial degree of a knot and vice-versa. In this connection some known useful results are summarized in the following proposition.
\[thm2\] For a classical tame knot $\kappa:\so\to\st$, we have the following: 1) $c[\kappa]\leq \frac{(d-2)(d-3)}{2}$, 2) $b[\kappa]\leq \frac{d-1}{2}$ and 3) $s[\kappa]\leq \frac{d+1}{2}$, where $c[\kappa],\, b[\kappa],\,s[\kappa]$ and $d$ denote respectively the crossing number, the bridge index, the superbridge index and the polynomial degree of the knot-type $[\kappa]$.
The first part of proposition \[thm2\] can be proved using Bézout’s theorem. The proofs of the second and third parts are trivial. To get an idea about the proofs, one can refer to propositions $12,\,13$ and $14$ in [@do].
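As a quick numerical illustration of proposition \[thm2\], the three bounds can be tabulated for small degrees; since the invariants are integers, we may take floors. This is a sketch of our own (the helper name `bounds` is not from the paper):

```python
# Bounds from proposition [thm2]: for a knot of polynomial degree d,
# c <= (d-2)(d-3)/2, b <= (d-1)/2 and s <= (d+1)/2.

def bounds(d):
    """Upper bounds (crossing, bridge, superbridge) implied by degree d."""
    return ((d - 2) * (d - 3) // 2, (d - 1) // 2, (d + 1) // 2)

for d in range(4, 8):
    c, b, s = bounds(d)
    print(f"d={d}: c<={c}, b<={b}, s<={s}")
```

In particular, $d=5$ allows at most $3$ crossings (the trefoils), while $d=6$ allows at most $6$ crossings and superbridge index at most $3$, which is how $5_1$ (a $4$-superbridge knot) is excluded from degree $6$.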
From the first inequality in proposition \[thm2\], it is clear that in order to represent a knot with a certain number of crossings, the degree of its polynomial representation has a lower bound. Knots with the same number of crossings may have different crossing patterns, i.e. different over and under crossing information. The result below tells us how the degree relates to the nature of the crossings.
\[thm3\] Let $t\mapsto\fgt$ be a regular projection of a long knot $\kappa:\ro\to\rt$, where $f$ and $g$ are real polynomials with $\deg(f)=n$ and $\deg(g)=n+1$. Suppose the crossing data of $\kappa$ is such that there are $r$ changes from over/under crossings to under/over crossings as we move along the knot. Then there exists a polynomial $h$ with degree $d\leq\min\{\hs n+2,r\hs\}$ such that the polynomial map $t\mapsto\fght$ is an embedding which is ambient isotopic to $\kappa$.
Let polynomials $f$ and $g$ be given by $f(t)=a_0 + a_1 t+\cdots + a_n t^n$ and $g(t)={b_0 + b_1 t+\cdots + b_{n+1} t^{n+1}}$. The double points of the curve $t\mapsto\fgt$ can be obtained by finding the real roots of the resultant $\varGamma$ of the polynomials $$\begin{aligned}
F(s, t)&=& a_1 +a_2\hp(s+t)+\cdots + a_n\hp(s^{n-1}+ s^{n-2}t+\cdots+t^{n-1})\hw\mbox{and}\\
G(s, t)&=& b_1 +b_2\hp(s+t)+\cdots + b_{n+1}\hp(s^n+ s^{n-1}t+\cdots+t^n)\hp.\end{aligned}$$ Let $[\hf a, b\hf]$ be an interval that contains all the roots of $\varGamma$. Let us call these roots the crossing points. We divide the interval $[\hf a, b\hf]$ into subintervals $a=a_0<a_1<\cdots<a_r=b$ in such a way that the $a_i$’s are not crossing points and within any subinterval $[\hf a_{i-1}, a_i\hf]$ all the crossing points are either under crossing points or over crossing points. Let $h_1(t)=\Pi_{i=1}^r (t-a_i)$. Clearly, $h_1$ is a polynomial of degree $r$ that has opposite signs at under crossing and over crossing points, and thus $\phi_1=(\hs f,\hs g,\hs h_1\hs)$ represents the knot $\kappa$.
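The construction of $h_1$ can be illustrated numerically: one root is placed between consecutive runs of equal over/under labels, and the resulting product alternates sign across the runs. The crossing parameters and labels in this sketch are hypothetical sample data, not derived from an actual projection:

```python
# Sketch of the h_1 construction: crossing parameter values on the t-axis
# carry over ('O') / under ('U') labels; a root of h_1 between consecutive
# runs of equal labels makes h_1 change sign exactly at the run boundaries.

crossings = [(-2.0, 'U'), (-1.0, 'U'), (0.0, 'O'), (1.0, 'U'), (2.0, 'O')]

# separators a_i: one between each pair of adjacent crossings with different labels
seps = []
for (t0, l0), (t1, l1) in zip(crossings, crossings[1:]):
    if l0 != l1:
        seps.append((t0 + t1) / 2.0)

def h1(t):
    prod = 1.0
    for a in seps:
        prod *= (t - a)
    return prod

signs = ['U' if h1(t) > 0 else 'O' for t, _ in crossings]
labels = [l for _, l in crossings]
flipped = ['U' if x == 'O' else 'O' for x in signs]
# up to a global sign, h_1 realizes the prescribed crossing pattern
assert signs == labels or flipped == labels
```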
For $i\in\{1,2,\ldots,n\}$, let $(s_i, t_i)$ with $s_i<t_i$ and $s_1<s_2<\cdots<s_n$ be pairs of parametric values at which the projection has double points. Let $C_{n+2}\hp t^{n+2}+C_{n+1}\hp t^{n+1}+\cdots+C_1\hp t$ be a polynomial of degree $n+2$, where $C_i$’s are unknowns which we have to find by solving the system $$\begin{aligned}
C_{n+2}\hp( t_1^{n+2}-s_1^{n+2})+C_{n+1}\hp( t_1^{n+1}-s_1^{n+1} )+\cdots+C_1\hp( t_1-s_1 )&=& e_1\\
C_{n+2}\hp( t_2^{n+2}-s_2^{n+2})+C_{n+1}\hp( t_2^{n+1}-s_2^{n+1} )+\cdots+C_1\hp( t_2-s_2 )&=& e_2\\
\vdots\hskip36.3mm\vdots\hskip38mm\vdots\hskip7.3mm & &\hskip1mm\vdots\\
C_{n+2}\hp( t_n^{n+2}-s_n^{n+2})+C_{n+1}\hp( t_n^{n+1}-s_n^{n+1} )+\cdots+C_1\hp( t_n-s_n )&=& e_n\end{aligned}$$ of $n$ linear equations in $n+2$ unknowns. The numbers $e_i$ are arbitrary but fixed non-zero real numbers, positive or negative according as the crossing is an under crossing or an over crossing. The conditions $f(s_i)=f(t_i)$ and $g(s_i)=g(t_i)$ for $i\in\{1,2,\ldots,n\}$ imply that the coefficient matrix $$M=
\left[
{\begin{array}{cccc}
t_1^{n+2}-s_1^{n+2} & t_1^{n+1}-s_1^{n+1} & \cdots & t_1-s_1\\
t_2^{n+2}-s_2^{n+2} & t_2^{n+1}-s_2^{n+1} & \cdots & t_2-s_2\\
\vdots & \vdots & & \vdots\\
t_n^{n+2}-s_n^{n+2} & t_n^{n+1}-s_n^{n+1} & \cdots & t_n-s_n\\
\end{array} }
\right]$$ of the above system has rank $n$. This says that the system of linear equations has infinitely many solutions. Let $C_1=c_1, C_2=c_2,\ldots, C_{n+2}=c_{n+2}$ be any one of the solutions. With this solution, the polynomial $h_2(t)=c_1\hp t+c_2\hp t^2+\cdots+c_{n+2}\hp t^{n+2}$ is such that the embedding $\phi_2=(\hs f,\hs g,\hs h_2\hs)$ represents the knot $\kappa$.
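The underdetermined system for $h_2$ can be solved numerically, for instance by least squares, which returns the minimum-norm solution; since the coefficient matrix has full row rank for generic data, this solution is exact. The double-point pairs and targets below are hypothetical sample data:

```python
import numpy as np

# Sketch of solving the system for h_2 above: n equations in n+2 unknowns,
# with prescribed nonzero values e_i for h_2(t_i) - h_2(s_i).
pairs = [(-1.3, 0.4), (-0.7, 1.1), (0.2, 1.8)]   # (s_i, t_i) with s_i < t_i
e = np.array([1.0, -1.0, 1.0])                    # prescribed crossing values

n = len(pairs)                                    # n equations, n+2 unknowns
powers = list(range(n + 2, 0, -1))                # exponents n+2, ..., 1
M = np.array([[t**k - s**k for k in powers] for s, t in pairs])

# least squares picks one of the infinitely many solutions (minimum norm)
c, *_ = np.linalg.lstsq(M, e, rcond=None)

def h2(t):
    return sum(ck * t**k for ck, k in zip(c, powers))

for (s, t), ei in zip(pairs, e):
    assert abs(h2(t) - h2(s) - ei) < 1e-8
```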
In connection with the polynomial representation of knots, the following questions are of interest:
Given a knot $\kappa$, what is the least degree $d$ such that it has a polynomial representation in the space $\pd$?
Given a positive integer $d$, which knots can be represented in the space $\pd$?
Given a positive integer $d$, estimate the number of path components of the spaces $\pd$ and $\pdt$.
We have tried to answer these questions for $d\leq7$. In general, all these problems become difficult and answering one helps in answering the other two.
\[thm4\] The unknot is the only knot that can be represented as a polynomial knot in the space $\pd$ for $d\leq4$.
Let $\kappa$ be a knot which is represented by a polynomial knot in $\pd$ for $d\leq 4$. By proposition \[thm2\], the crossing number $c[\kappa]$ of the knot-type $[\kappa]$ satisfies the inequality $c[\kappa]\leq \frac{(d-2)(d-3)}{2}\leq \frac{(4-2)(4-3)}{2}=1$. Hence $\kappa$ must be the unknot.
\[thm5\] The unknot, the left hand trefoil and the right hand trefoil are the only knots that can be represented as polynomial knots in the space $\pf$.
Let $\kappa$ be a knot which is represented by a polynomial knot in $\pf$. By proposition \[thm2\], the crossing number $c[\kappa]$ of the knot-type $[\kappa]$ satisfies the inequality $c[\kappa]\leq \frac{(5-2)(5-3)}{2}=3$. Thus, $\kappa$ is either the unknot or one of the trefoil knots.
The figure eight knot has a polynomial representation (figure 3, section \[sec3.4\]) in the space $\pst$. However, since the knot $5_1$ is $4$-superbridge, by proposition \[thm2\] it cannot be represented in degree $6$. Regarding the knot $5_2$, we have the following:
\[thm6\] There exist polynomials $f$ and $g$ of degrees $4$ and $5$ respectively such that the map $t\mapsto\fgt$ represents a regular projection of $5_2$ knot.
Consider a plane curve $C$ given by the parametric equation $t\mapsto(\hp t^4,\hp t^5\hp)$. This curve has an isolated singularity at the origin. For such plane curves, there are two important numbers that remain invariant under any formal isomorphisms of plane curves. The first one is the Milnor number $\mu$ and the other is the $\delta$ invariant (see [@na2]). For a single component plane curve they satisfy the relation $2\delta =\mu$.
In the present case it turns out that the $\delta$ invariant is equal to $\frac{(5-1)(4-1)}{2}=6.$ The $\delta$ invariant of a plane curve which is singular at the origin measures the number of double points that can be created in a neighborhood of the origin. Note that $\delta\geq 5$. Using a result of Daniel Pecker [@dp], we can deform the curve $C$ into a new curve $\tilde{C}$ given by $t\mapsto\abtf$ such that $\tilde{C}$ has $5$ real nodes and $1$ imaginary node. By a continuity argument, we can choose the coefficients $a_i$ and $b_i$ such that the nodes occur in the order they are in the regular projection of the given knot.
In fact, we have found the curve $t\mapsto\big(\hp2(t - 2)(t + 4)(t^2 - 11),\hs t(t^2 - 6)(t^2 - 16)\hp\big)$ which represents a regular projection of the $5_2$ knot, as shown in the figure below:
![Projection of $5_2$ knot](5_2projection)
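The count of five real double points for this explicit curve can be checked by computer algebra: the parameters of the double points are real roots of the resultant of the divided differences of $f$ and $g$, each real node contributing two real roots ($s_i$ and $t_i$). The following SymPy sketch is our own verification, not part of the paper's argument:

```python
import sympy as sp

# Explicit projection of the 5_2 knot quoted above.
s, t = sp.symbols('s t')
f = 2*(t - 2)*(t + 4)*(t**2 - 11)
g = t*(t**2 - 6)*(t**2 - 16)

# Divided differences: F = G = 0 at parameter pairs (s, t), s != t,
# that map to the same point of the plane curve.
F = sp.cancel((f - f.subs(t, s)) / (t - s))
G = sp.cancel((g - g.subs(t, s)) / (t - s))

# Each real node contributes two real roots of the resultant (s_i and t_i);
# the remaining imaginary node contributes a complex-conjugate pair.
res = sp.Poly(sp.resultant(F, G, t), s)
ndouble = len(sp.real_roots(res)) // 2
print("real double points:", ndouble)
```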
Let us consider the following sets: $$\begin{aligned}
U_1&=\big\{\hs\abf\in\ro^{11}\mid\mbox{a map}\hskip1.3mm t\mapsto\big(\hp a_4t^4+ a_3 t^3\\ &\hskip8.2mm+a_2 t^2+a_1t+a_0,\hp b_5t^5+b_4t^4+b_3t^3+b_2t^2+b_1t+b_0\hp\big)\hskip1.5mm\mbox{is a regular}\\ &\hskip9.2mm\mbox{projection of}\hskip1.3mm 5_2\hskip1.3mm \mbox{knot with}\hskip1.3mm 5\hskip1.3mm \mbox{double points}\hf\big\}.\\
U_2&=\big\{\hs\abf\in\ro^{11}\mid\mbox{a map}\hskip1.3mm t\mapsto\big(\hp a_4t^4+ a_3 t^3\\ &\hskip8.2mm+a_2 t^2+a_1t+a_0,\hp b_5t^5+b_4t^4+b_3t^3+b_2t^2+b_1t+b_0\hp\big)\hskip1.5mm\mbox{is a regular}\\ &\hskip9.2mm\mbox{projection of}\hskip1.3mm 5_2\hskip1.3mm \mbox{knot with}\hskip1.3mm 6\hskip1.3mm \mbox{double points}\hf\big\}.\end{aligned}$$ By proposition \[thm6\], it is easy to see that the set $U=U_1\cup U_2$ is nonempty. For any $\abf\in U$, we must have both $a_4$ and $b_5$ nonzero, because otherwise by an application of Bézout’s theorem there would be fewer than five crossings (see [@do], lemma 4) for the curve $t\mapsto\abtf$. For a projection $t\mapsto\fgt$ such that $f$ and $g$ are polynomials of degrees $4$ and $5$ respectively, we would like to find a polynomial $h$ of least possible degree such that $t\mapsto\fght$ represents the knot $5_2$. In this direction, we have the following theorem:
\[thm7\] For a generic choice of a regular projection $t\mapsto\fgt$ with $\deg(f)=4$ and $\deg(g)=5$, there does not exist any polynomial $h$ of degree $6$ such that a polynomial map $t\mapsto\fght$ represents the knot $5_2$.
For $f(t)=\atfr$ and $g(t)=\btf$, let $t\mapsto\fgt$ be a regular projection of the $5_2$ knot. Note that $a_4b_5\neq0$. Suppose $h(t)=\cts$ is a degree $6$ polynomial such that $t\mapsto\fght$ represents the knot $5_2$. By composing this embedding with a suitable affine transformation, we can assume that the coefficients $c_0$, $c_4$ and $c_5$ are zero. Thus, we can take $h(t)=c_6t^6+c_3t^3+c_2t^2+c_1t$. Note that the projection has either $5$ or $6$ double points. So, we consider the following cases:
Case $i)$ If the projection has $5$ crossings:\
For $i\in\{1,2,\ldots,5\}$, let $(s_i, t_i)$ with $s_i<t_i$ and $s_1<s_2<\cdots<s_5$ be the pairs of parametric values at which the crossings occur in the curve $t\mapsto\fgt$. Since we want alternately over and under crossings, $h(t_i)-h(s_i)$ must be positive if $i$ is odd (i.e. the crossing is an under crossing) and negative if $i$ is even (i.e. the crossing is an over crossing). In other words, we have to find the coefficients $c_1,$ $c_2$, $c_3$ and $c_6$ such that for $i\in\{1,2,\ldots,5\}$ and $r_i\in\ro^+$, $h(t_i)-h(s_i)= r_i$ if $i$ is odd and $h(t_i)-h(s_i)= -r_i$ if $i$ is even. This gives us a system of $5$ linear equations in $4$ unknowns as follows: $$\begin{aligned}
c_6(t_1^6-s_1^6)+c_3(t_1^3-s_1^3)+c_2(t_1^2-s_1^2)+c_1(t_1-s_1)&=& r_1\label{eq2.1}\\
c_6(t_2^6-s_2^6)+c_3(t_2^3-s_2^3)+c_2(t_2^2-s_2^2)+c_1(t_2-s_2)&=& -r_2\\
c_6(t_3^6-s_3^6)+c_3(t_3^3-s_3^3)+c_2(t_3^2-s_3^2)+c_1(t_3-s_3)&=& r_3\\
c_6(t_4^6-s_4^6)+c_3(t_4^3-s_4^3)+c_2(t_4^2-s_4^2)+c_1(t_4-s_4)&=& -r_4\\
c_6(t_5^6-s_5^6)+c_3(t_5^3-s_5^3)+c_2(t_5^2-s_5^2)+c_1(t_5-s_5)&=& r_5\label{eq2.5}\end{aligned}$$ The rank of the coefficient matrix $$A=
\left[ {\begin{array}{cccc}
t_1^6-s_1^6 & t_1^3-s_1^3 & t_1^2-s_1^2 & t_1-s_1 \\
t_2^6-s_2^6 & t_2^3-s_2^3 & t_2^2-s_2^2 & t_2-s_2 \\
t_3^6-s_3^6 & t_3^3-s_3^3 & t_3^2-s_3^2 & t_3-s_3 \\
t_4^6-s_4^6 & t_4^3-s_4^3 & t_4^2-s_4^2 & t_4-s_4 \\
t_5^6-s_5^6 & t_5^3-s_5^3 & t_5^2-s_5^2 & t_5-s_5 \\
\end{array} } \right]$$of the system of linear equations \[eq2.1\] - \[eq2.5\] is at most $4$. The system has a solution if and only if the rank of $A$ is equal to the rank of the augmented matrix $\tilde{A}$. In other words, the system of linear equations \[eq2.1\] - \[eq2.5\] has no solution if $\det\hs(\tilde{A})\neq 0$. For $j\in\{1,2,\ldots,5\}$, let $A_j$ be a submatrix of $A$ obtained by deleting the $j^{th}$ row of $A$. It is easy to see that $$\det\hs(\tilde{A})=r_1\hs\det(A_{1})+r_2\hs\det(A_{2})+r_3\hs\det(A_{3}) +r_4\hs\det(A_{4})+r_5\det\hs(A_{5})\hp.$$ Note that for each $j$, $\det\hs(A_j)$ is an algebraic function of $t_i$’s and $s_i$’s which are actually analytic functions of the coefficients $a_k$’s of $f$ and the coefficients $b_k$’s of $g$. Thus $\det\hs(\tilde{A})$ is a non-constant analytic function of $a_k$’s, $b_k$’s and $r_k$’s. Hence the set $$V_1=\{\hf(\hp a_0,\ldots,a_4,b_0,\ldots,b_5, r_1,\ldots,r_5\hp)\in U_1\times(\mathbb{R}^+)^5\mid\det(\tilde{A})\neq0\hf\}$$ is an open and a dense subset of $U_1\times(\mathbb{R}^+)^5$. For any choice of an element in $V_1$, the system of linear equations \[eq2.1\] - \[eq2.5\] has no solution. Therefore, for a generic choice of a regular projection $t\mapsto\fgt$ with $5$ crossings and having $\deg(f)=4$ and $\deg(g)=5$, there does not exist any polynomial $h$ of degree $6$ such that $t\mapsto\fght$ represents the knot $5_2$.
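The rank obstruction in case $i)$ can be illustrated numerically: whenever $\det(\tilde{A})\neq0$, the augmented matrix has rank $5$ while the coefficient matrix has rank at most $4$, so the system is inconsistent. The parameter pairs in this sketch are random, hypothetical data rather than crossings of an actual $5_2$ projection:

```python
import numpy as np

# Sample data standing in for the crossing parameters (s_i, t_i) and the
# positive values r_i with alternating signs on the right-hand side.
rng = np.random.default_rng(0)
st = np.sort(rng.uniform(-2.0, 2.0, size=(5, 2)), axis=1)  # rows (s_i, t_i)
r = rng.uniform(0.5, 1.5, size=5)
rhs = r * np.array([1.0, -1.0, 1.0, -1.0, 1.0])            # alternating signs

powers = [6, 3, 2, 1]            # h(t) = c6 t^6 + c3 t^3 + c2 t^2 + c1 t
A = np.array([[t**k - s**k for k in powers] for s, t in st])
A_aug = np.hstack([A, rhs[:, None]])                       # the matrix A~

# det(A~) != 0 forces rank(A~) = 5 > rank(A) <= 4, so A c = rhs has
# no solution: no degree-6 h realizes the prescribed crossing signs.
if abs(np.linalg.det(A_aug)) > 1e-9:
    assert np.linalg.matrix_rank(A_aug) > np.linalg.matrix_rank(A)
```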
Case $ii)$ If the projection has $6$ crossings:\
For $i\in\{1,2,\ldots,6\}$, let $(s_i, t_i)$ with $s_i<t_i$ and $s_1<s_2<\cdots<s_6$ be the pairs of parametric values at which the crossings occur in the curve $t\mapsto\fgt$. Let $e=(e_1,e_2,\ldots,e_6)$ be a pattern such that it together with the projection $t\mapsto\fgt$ describes the knot $5_2$, where $e_i$ is $1$ or $-1$ according as the $i^{th}$ crossing is an under crossing or an over crossing. Let $U_e$ be the set of elements $\abf$ of $U_2$ such that the projection $t\mapsto\abtf$ together with the pattern $e$ describes the knot $5_2$. We want to find the values of the coefficients $c_1,$ $c_2$, $c_3$ and $c_6$ such that for $i\in\{1,2,\ldots,6\}$ and $r_i\in\ro^+$, $h(t_i)-h(s_i)= e_ir_i$. This gives us a system of $6$ linear equations in $4$ unknowns as follows: $$\begin{aligned}
c_6(t_1^6-s_1^6)+c_3(t_1^3-s_1^3)+c_2(t_1^2-s_1^2)+c_1(t_1-s_1)&=& e_1r_1\label{eq2.6}\\
c_6(t_2^6-s_2^6)+c_3(t_2^3-s_2^3)+c_2(t_2^2-s_2^2)+c_1(t_2-s_2)&=& e_2r_2\\
c_6(t_3^6-s_3^6)+c_3(t_3^3-s_3^3)+c_2(t_3^2-s_3^2)+c_1(t_3-s_3)&=& e_3r_3\\
c_6(t_4^6-s_4^6)+c_3(t_4^3-s_4^3)+c_2(t_4^2-s_4^2)+c_1(t_4-s_4)&=& e_4r_4\\
c_6(t_5^6-s_5^6)+c_3(t_5^3-s_5^3)+c_2(t_5^2-s_5^2)+c_1(t_5-s_5)&=& e_5r_5\\
c_6(t_6^6-s_6^6)+c_3(t_6^3-s_6^3)+c_2(t_6^2-s_6^2)+c_1(t_6-s_6)&=& e_6r_6\label{eq2.11}\end{aligned}$$ The rank of the coefficient matrix $$B=
\left[ {\begin{array}{cccc}
t_1^6-s_1^6 & t_1^3-s_1^3 & t_1^2-s_1^2 & t_1-s_1 \\
t_2^6-s_2^6 & t_2^3-s_2^3 & t_2^2-s_2^2 & t_2-s_2 \\
t_3^6-s_3^6 & t_3^3-s_3^3 & t_3^2-s_3^2 & t_3-s_3 \\
t_4^6-s_4^6 & t_4^3-s_4^3 & t_4^2-s_4^2 & t_4-s_4 \\
t_5^6-s_5^6 & t_5^3-s_5^3 & t_5^2-s_5^2 & t_5-s_5 \\
t_6^6-s_6^6 & t_6^3-s_6^3 & t_6^2-s_6^2 & t_6-s_6 \\
\end{array} } \right]$$ of the system of linear equations \[eq2.6\] - \[eq2.11\] is at most $4$. The system has a solution if and only if the rank of $B$ is equal to the rank of the augmented matrix $\tilde{B_e}$ (where the subscript $e$ denotes the dependence of $\tilde{B_e}$ on the pattern $e$). Thus, the system of linear equations \[eq2.6\] - \[eq2.11\] has no solution if $\tilde{B_e}$ has full rank (i.e., $\mathrm{rank}(\tilde{B_e})=5$).
Note that for each $i$, $t_i$ and $s_i$ are analytic functions of the coefficients $a_k$ of $f$ and the coefficients $b_k$ of $g$. Thus the entries of $\tilde{B_e}$ are non-constant analytic functions of the $a_k$’s, $b_k$’s and $r_k$’s. Hence the set $$V_e=\{\hf(\hp a_0,\ldots,a_4,b_0,\ldots,b_5, r_1,\ldots,r_6\hp)\in U_e\times(\mathbb{R}^+)^6\mid\tilde{B_e}\hskip1.3mm \mbox{has full rank}\hf\}$$ is an open and dense subset of $U_e\times(\mathbb{R}^+)^6$. It is clear that, for any choice of an element in $V_e$, the system of linear equations \[eq2.6\] - \[eq2.11\] has no solution.
Now consider the disjoint union $V_2=\bigsqcup_eV_e$, which is clearly an open and dense subset of the disjoint union $U_3=\bigsqcup_eU_e\times(\mathbb{R}^+)^6$, where both unions are taken over all the patterns $e$. Note that $U_2\times(\mathbb{R}^+)^6\subseteq U_3$. It is easy to see that, for any choice of an element in $V_2$, the corresponding system of linear equations has no solution. Hence, for a generic choice of a regular projection $t\mapsto\fgt$ with $6$ crossings and having $\deg(f)=4$ and $\deg(g)=5$, there does not exist a polynomial $h$ of degree $6$ such that $t\mapsto\fght$ represents the knot $5_2$.
\[coj1\] The knot $5_2$ cannot be realized in the space $\ps$.
It is conjectured that the only $3$-superbridge knots are $3_1$ and $4_1$ [@jw]. Once this is proved, conjecture \[coj1\] will follow trivially. Similarly, we conjecture that the knots $6_1, 6_2$ and $6_3$ cannot be realized in $\ps$.
Spaces of polynomial knots {#sec3}
==========================
The spaces $\pd$ and $\pdt$ {#sec3.1}
---------------------------
For a positive integer $d$, Durfee and O’Shea [@do] have discussed the topology (inherited from $\ro^{3d+3}$) of the space $\kd$ of all polynomial knots with degree $d$ (that is, the degrees of the component polynomials are at most $d$ and at least one of the degrees is exactly $d$). Let $\spk$ denote the set of all polynomial knots. We can write $$\spk=\bigcup_{d\geq1}\kd\hf.$$ This set can be given the inductive limit topology; that is, a set $U\subseteq\spk$ is open in $\spk$ if and only if the set $U\cap\kd$ is open in $\kd$ for all $d\geq1$. Thus, we have the space $\spk$ of all polynomial knots.
\[def3.1\] Two polynomial knots $\phi,\psi\in\spk$ are said to be [polynomially isotopic]{} if there exists a one parameter family $\big\{\hf\Phi_s\in\spk\mid\! s\in\zo\hf\big\}$ of polynomial knots such that $\Phi_0=\phi$ and $\Phi_1=\psi$.
In the definition above, note that the map $\Phi:\zo\times\ro\rightarrow\rt$ defined by $(s,t)\mapsto\Phi_s(t)$ is continuous. It has been proved that if two polynomial knots are topologically equivalent as long knots, then they are polynomially isotopic [@rs1]. Any polynomial isotopy within the space $\kd$ for a fixed $d$ gives rise to a smooth path inside $\kd$. However, two polynomial knots belonging to two different spaces $\kd$ and $\mathcal{K}_{d'}$ do not belong to the same path component of the space $\spk$ even though they may be polynomially isotopic or topologically equivalent. This motivates us to define an equivalence based on knots belonging to the same path component inside a space of polynomial knots with a given condition on the degrees of the component polynomials. Recall that $\pd$ is the space of all polynomial knots $\sigma=\fgh$ with $\fghio$ and $\pdt$ is the space of all polynomial knots $\varsigma=\fghp$ with $\deg(f')=d-2,\,\deg(g')=d-1$ and $\deg(h')=d$.
Two polynomial knots are said to be [path equivalent]{} in $\pd$ if they belong to the same path component of the space $\pd$.
Similarly, the path equivalence can be defined for the spaces $\pdt$ and $\kd$. Also, it can be defined for the space $\spk$ of all polynomial knots. Using advanced techniques of differential topology Durfee and O’Shea gave a proof (see [@do], proposition 9) for the following fact:
\[thm8\] If two polynomial knots are path equivalent in $\kd$, then they are topologically equivalent.
\[cor0\] If two polynomial knots are path equivalent in $\spk$, then they are topologically equivalent.
Since all the sets $\kd$, for $d\geq1$, are open and closed in $\spk$, any path in $\spk$ joining two polynomial knots lies wholly in $\kd$ for some $d\geq1$. Thus, if two polynomial knots are path equivalent in $\spk$, then they are so in $\kd$ for some $d\geq1$ and hence they are topologically equivalent.
\[cor1\] If two polynomial knots are path equivalent in $\pdt$, then they are topologically equivalent.
Let $\phi$ and $\psi$ be polynomial knots belonging to the same path component of the space $\pdt$. Since $\pdt\subset\kd$, $\phi$ and $\psi$ are members of $\kd$ belonging to the same path component of $\kd$. Therefore, by proposition \[thm8\], they are topologically equivalent.
\[thm9\] Let $\phi=\fgh\in\pdt$ be a polynomial representation of a classical tame knot $\kappa$. Then $\phi$ and its mirror image $\psi=\fgmh$ belong to different path components of the space $\pdt$.
Suppose on the contrary that $\phi=\fgh$ and $\psi=\fgmh$ belong to the same path component of $\pdt$, and let $\Phi:\zo\to\pdt$ be a path from $\phi$ to $\psi$. For $s\in\zo$, let $\Phi_s=\Phi(s)$ and let it be given by $$\begin{aligned}
\Phi_s(t)&=&\big(\ho\alpha_{d-2}(s)t^{d-2}+\cdots+\alpha_1(s)t+\alpha_0(s),\hs\beta_{d-1}(s)t^{d-1}+\cdots+\beta_1(s)t+\beta_0(s),\\ & & \hskip3mm\gamma_d(s)t^d+\cdots+\gamma_1(s)t+\gamma_0(s)\ho\big)\end{aligned}$$ for $t\in\ro$. The maps $\alpha_i$’s, $\beta_i$’s and $\gamma_i$’s are continuous. Let $f,\, g$ and $h$ be given by $$\begin{aligned}
f(t)&=&\at\hf,\\ g(t)&=&\bt\hw \mbox{and}\\ h(t)&=&\ct\hf.\end{aligned}$$ Since $\Phi_0=\phi=\fgh$ and $\Phi_1=\psi=\fgmh$, for each $i$ we have $\alpha_i(0)=\alpha_i(1)=a_i,\hs\beta_i(0)=\beta_i(1)=b_i,\hs\gamma_i(0)=c_i$ and $\gamma_i(1)=-c_i$. In particular, $\gamma_d(0)=c_d$ and $\gamma_d(1)=-c_d$. Since $c_d\neq0$ and $\gamma_d$ is a continuous function, by the intermediate value theorem $\gamma_d(s_0)=0$ for some $s_0\in\ozo$. Therefore, the third component of $\Phi_{s_0}$ has degree less than $d$ and thus $\Phi_{s_0}$ does not belong to the space $\pdt$. This contradicts the fact that $\Phi$ is a path in $\pdt$.
\[cor2\] Let $\varphi=\fgh\in\pdt$ ($d\geq3$) be a polynomial representation of a tame knot $\kappa:\so\to\st$. Then we have the following: i) If $\kappa$ is acheiral, then it corresponds to at least eight path components of $\pdt$. ii) If $\kappa$ is cheiral, then each of $\kappa$ and $\kappa^*$ corresponds to at least four path components of the space $\pdt$.
Using the argument as used in the proof of theorem \[thm9\], it is easy to see that $\varphi_1=\fgh,\,\varphi_2=(\hp -f,\hp -g,\hp h\hp),\,\varphi_3=(\hp -f,\hp g,\hp -h\hp),\,\varphi_4=(\hp f,\hp -g,\hp -h\hp),\,\varphi_5=\mifgh,\,\varphi_6=\fmgh,\,\varphi_7=\fgmh$ and $\varphi_8=\minusfgh$ belong to eight distinct path components of $\pdt$. If $\kappa$ is acheiral, then all the knots $\varphi_1, \varphi_2,\dots,\varphi_8$ represent it. If $\kappa$ is cheiral, then the knots $\varphi_1, \varphi_2, \varphi_3$ and $\varphi_4$ represent $\kappa$ and the knots $\varphi_5, \varphi_6, \varphi_7$ and $\varphi_8$ represent the mirror image $\kappa^*$ of $\kappa$.
\[rem6\] For $d\geq3$ and $\phi=\fgh\in\pdt$, there are eight distinct path components of the space $\pdt$, each of which contains exactly one of the knots $\phi_e=(e_1 f,e_2\hp g,e_3\hp h)$ for $e=(e_1,e_2,e_3)$ in $\{-1,1\}^3$. This shows that the total number of path components of the space $\pdt$, for $d\geq3$, is a multiple of eight.
\[rem7\] If $n$ distinct knot-types (up to mirror images) are represented in $\pdt$, then it has at least $8n$ distinct path components.
The spaces $\pd$ and $\pdt$ for $d\leq 4$ {#sec3.2}
------------------------------------------
For a polynomial map $\phi\in\ad$, let us denote its first, second and third components respectively by $f_\phi,\, g_\phi$ and $h_\phi$. Also, for $i=0,1,\ldots, d$, we denote the coefficients of $t^i$ in the polynomials $f_\phi,\, g_\phi$ and $h_\phi$ by $a_{i\phi},\,b_{i\phi}$ and $c_{i\phi}$ respectively. Sometimes we use letters $\varphi,\psi, \tau,\sigma,\varsigma$ and $\omega$ to denote the elements of $\ad$. In such cases, the corresponding components and their coefficients will be denoted using corresponding subscripts. For example, for $\sigma\in\mathcal{A}_4$, its second component will be denoted by $g_\sigma$ and $b_{2\sigma}$ will denote the coefficient of $t^2$ in the polynomial $g_\sigma$.
\[thm10\] The space $\pw$ is open in $\mathcal{A}_2$ and it has exactly four path components.
Note that $\pw=\big\{\hs\phi\in\mathcal{A}_2\mid b_{1\phi}\hp c_{2\phi}\neq0\hs\big\}$. By the identification of the space $\mathcal{A}_2$ with $\ro^6$, it is easy to see that the space $\pw$ is naturally homeomorphic to the open subset $\big\{\hs(\hp a_0,b_0,b_1,c_0,c_1,c_2 \hp) \in\ro^6\mid b_1c_2\neq0 \hs\big\}$ of the Euclidean space $\ro^6$. Therefore, the set $\pw$ is open in $\mathcal{A}_2$ and it has four path components as follows:
$\mathcal{P}_{21}=\big\{\hs\phi\in\mathcal{A}_2\mid b_{1\phi}>0$ & $c_{2\phi}>0\hs\big\}$,
$\mathcal{P}_{22}=\big\{\hs\phi\in\mathcal{A}_2\mid b_{1\phi}>0$ & $c_{2\phi}<0\hs\big\}$,
$\mathcal{P}_{23}=\big\{\hs\phi\in\mathcal{A}_2\mid b_{1\phi}<0$ & $c_{2\phi}>0\hs\big\}$ and
$\mathcal{P}_{24}=\big\{\hs\phi\in\mathcal{A}_2\mid b_{1\phi}<0$ & $c_{2\phi}<0\hs\big\}$.
\[rem8\] It is easy to see that the sets $\pw$ and $\pwt$ are equal and thus proposition \[thm10\] is also true if $\pw$ is replaced by $\pwt$.
\[thm11\] Let $X$ be a topological space and let $\mathfrak{F}$ be an arbitrary covering (with at least two members) of $X$ by its non-empty subsets. Let $U$ and $V$ be two distinct members of the covering $\mathfrak{F}$. Then for $X$ to be path connected, it is enough to satisfy the following conditions: i) For any $u\in U$ and any $v\in V$ there is a path from $u$ to $v$. ii) For any $W\in\mathfrak{F}$ and any $x\in W$ there exists an element $y\in U\cup V$ such that there is a path from $x$ to $y$.
If the cover $\mathfrak{F}$ contains only two non-empty distinct subsets $U$ and $V$, then for $X$ to be path connected it is sufficient to satisfy the first condition of the lemma.
\[thm12\] The space $\pt$ is path connected.
We consider the following sets:
$U_1=\{\ho\varphi\in\mathcal{A}_3\mid\varphi$ has degree sequence $(\hp0,1,2\hp)\hs\}$,
$U_2=\{\ho\varphi\in\mathcal{A}_3\mid\varphi$ has degree sequence $(\hp0,1,3\hp)\hs\}$,
$U_3=\{\ho\varphi\in\mathcal{A}_3\mid\varphi$ has degree sequence $(\hp0,2,3\hp)\hs\}\cap\mathcal{P}_3$ and
$U_4=\{\ho\varphi\in\mathcal{A}_3\mid\varphi$ has degree sequence $(\hp1,2,3\hp)\hs\}=\ptt$.
It is easy to note that these sets are pairwise disjoint and their union is exactly equal to $\mathcal{P}_3$. To prove the theorem, we proceed as follows:
$i)$ Let $\phi\in U_1$ and $\psi\in U_4$ be arbitrary elements. Let $\Phi:\zo\rightarrow\mathcal{A}_3$ be given by $\Phi(s)=\Phi_s$ for $s\in\zo$, where$$\Phi_s(t)=(1-s)\hs\phi(t)+s\hs\psi(t)$$for $t\in\ro$. It is clear that $\Phi_0=\phi$ and $\Phi_1=\psi$. For $s\in\ozo$, $\Phi_s$ has degree sequence $(\hs1,2,3\hs)$ and thus it is an element of $U_4$. This shows that $\Phi$ is a path in $\mathcal{P}_3$ from $\phi$ to $\psi$.
$ii)$ Let $\tau=\fgh$ be an arbitrary element of $U_2\cup U_3$. Let us choose an element $e_2\in\{-1,1\}$ such that $e_2\hp b_{2\tau}\geq0$. Let $\Psi:\zo\rightarrow\mathcal{A}_3$ be given by $\Psi(s)=\Psi_s$ for $s\in\zo$, where $$\Psi_s(t)=\left(\hs (1-s)f(t)+s\hf t,\hs (1-s)g(t)+e_2\hp s\hf t^2,\hs h(t)\hs\right)$$
for $t\in\ro$. Note that $\Psi_0=\tau$ and $\Psi_1=\sigma$, where $\sigma$ is given by $\sigma(t)=\big(\hp t, e_2\hp t^2, h(t)\hp\big)$ for $t\in\ro$. It is easy to see that $\sigma\in U_4$ and $\Psi_s\in U_4$ for all $s\in\ozo$. Therefore, we have a path in $\mathcal{P}_3$ from $\tau$ to $\sigma$.
The first and second parts above verify, respectively, the first and second conditions of lemma \[thm11\]; hence, by that lemma, the space $\mathcal{P}_3$ is path connected.
\[thm17\] The space $\ptt$ has eight path components.
For $e=(e_1,e_2,e_3)$ in $\{-1,1\}^3$, consider the following set:
$\mathcal{\tilde{P}}_{3e}=\big\{\hs\phi\in\mathcal{A}_3\,\mid\, e_1\hp a_{1\phi}>0,\; e_2\hp b_{2\phi}>0\hw \text{and}\hw e_3\hp c_{3\phi}>0\hs\big\}$
$\cong\big\{\hp(\hp a_0,a_1,b_0,b_1,b_2,c_0,c_1,c_2,c_3\hp) \in\ro^9\mid e_1\hp a_1>0,\,e_2\hp b_2>0$ and $e_3\hp c_3>0\hp\big\}$,
where $\cong$ denotes the natural homeomorphism of the spaces under the identification of $\mathcal{A}_3$ with $\ro^9$. It is easy to see that $\ptt=\bigcup_e\mathcal{\tilde{P}}_{3e}$. For any $e\in\{-1,1\}^3$, the set $\mathcal{\tilde{P}}_{3e}$ is path connected. Also, for $e\neq e'$, there is no path in $\ptt$ from an element of the set $\mathcal{\tilde{P}}_{3e}$ to an element of the set $\mathcal{\tilde{P}}_{3e'}$. Thus, the sets $\mathcal{\tilde{P}}_{3e}$, for $e\in\{-1,1\}^3$, are nothing but the path components of the space $\ptt$.
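In computational terms, the path component of a knot in $\ptt$ is determined by the signs of its three leading coefficients. A minimal sketch in Python (the coefficient-list convention and the function name are ours):

```python
def component_signs(f, g, h):
    """Sign triple (e1, e2, e3) labelling the path component in ~P_3.
    f, g, h are coefficient lists in increasing order of degree, so
    f[1], g[2], h[3] are the leading coefficients a_1, b_2, c_3."""
    a1, b2, c3 = f[1], g[2], h[3]
    if a1 == 0 or b2 == 0 or c3 == 0:
        raise ValueError("not in ~P_3: a leading coefficient vanishes")
    sign = lambda v: 1 if v > 0 else -1
    return (sign(a1), sign(b2), sign(c3))

# t -> (2t + 1, 3 - t^2, t^3) lies in the component labelled (1, -1, 1)
print(component_signs([1, 2], [3, 0, -1], [0, 0, 0, 1]))
# → (1, -1, 1)
```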
\[thm13\] A polynomial map $\phi\in\mathcal{A}_4$ given by $t\mapsto(\hs t^2+a\hf t\hs,\hs t^3+b\hf t,\hs t^4+c\hf t\hs)$ is an embedding if and only if $3\hf a^2+4\hf b>0$ or $a^3+2\hf a\hf b+c\neq0$.
Note that, a map $\phi\in\mathcal{A}_4$ given by $t\mapsto(\hs t^2+a\hf t\hs,\hs t^3+b\hf t,\hs t^4+c\hf t\hs)$ is an embedding $\Leftrightarrow$ $\phi(s)\neq\phi(t)$ for all $s,t\in\ro$ with $s\neq t$ and $\phi'(u)\neq0$ for all $u\in\ro$ $\Leftrightarrow$ the equations $$\begin{aligned}
(s+t)+a&=& 0\hf,\label{eq1}\\
(s^2+s\hf t+t^2)+b&=& 0 \label{eq2}\hw\mbox{and}\\
(s^3+s^2\hf t+s\hf t^2+t^3)+c&=& 0\label{eq3}\end{aligned}$$ do not have a common real solution. Substituting \[eq1\] into \[eq2\], we get $$\begin{aligned}
t^2+a\hf t+(a^2+b)&=& 0\hp.\label{eq4}\end{aligned}$$
This quadratic equation has solutions $t=t_1$ and $t=t_2$, where $$\begin{aligned}
t_1=\dfrac{-a-\sqrt{-3\hf a^2-4\hf b}}{2}\hskip5mm \mbox{and} \hskip5mmt_2=\dfrac{-a+\sqrt{-3\hf a^2-4\hf b}}{2}\;.\end{aligned}$$ In order to prove the proposition, it is sufficient to check the following three statements (in fact, they are easy to check):
1\) $3\hf a^2+4\hf b>0$ $\Leftrightarrow$ the equation \[eq4\] has no real solution $\Leftrightarrow$ the equations \[eq1\] and \[eq2\] do not have a common real solution. 2) $3\hf a^2+4\hf b=0$ and $a^3+2\hf a\hf b+c\neq0$ $\Leftrightarrow$ $3\hf a^2+4\hf b=0$ and $a^3-2\hf c\neq0$ $\Leftrightarrow$ $t=-a/2$ is the only solution of the equation \[eq4\] and $a^3-2\hf c\neq0$ $\Leftrightarrow$ $(s,t)=(-a/2,-a/2)$ is the only common real solution of the equations \[eq1\] and \[eq2\], but it is not a solution of the equation \[eq3\].
3\) $3\hf a^2+4\hf b<0$ and $a^3+2\hf a\hf b+c\neq0$ $\Leftrightarrow$ $t=t_1$ and $t=t_2$ are two distinct solutions of the equation \[eq4\] and $a^3+2\hf a\hf b+c\neq0$ $\Leftrightarrow$ $(s,t)=(t_1,t_2)$ and $(s,t)=(t_2,t_1)$ are the only common real solutions of the equations \[eq1\] and \[eq2\], but neither is a solution of the equation \[eq3\].
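The criterion of the proposition can be sanity-checked numerically. The sketch below (sampling window and tolerances are our choices) compares the algebraic condition with a brute-force search for near self-intersections among well-separated parameter values:

```python
import numpy as np

def is_embedding_criterion(a, b, c):
    """Proposition's criterion: t -> (t^2 + a t, t^3 + b t, t^4 + c t)
    is an embedding iff 3a^2 + 4b > 0 or a^3 + 2ab + c != 0."""
    return 3*a**2 + 4*b > 0 or a**3 + 2*a*b + c != 0

def min_pairwise_gap(a, b, c, n=1201, span=3.0):
    """Smallest distance between curve points whose parameters differ by
    more than 0.5; a value near zero signals a self-intersection."""
    t = np.linspace(-span, span, n)
    pts = np.stack([t**2 + a*t, t**3 + b*t, t**4 + c*t], axis=1)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    sep = np.abs(t[:, None] - t[None, :]) > 0.5
    return dist[sep].min()

# (a, b, c) = (0, -1, 0) violates the criterion, and indeed the curve
# passes through the same point (1, 0, 1) at t = 1 and t = -1
print(is_embedding_criterion(0, -1, 0), min_pairwise_gap(0, -1, 0))
# (a, b, c) = (0, 1, 0) satisfies it, and the sampled curve stays apart
print(is_embedding_criterion(0, 1, 0), min_pairwise_gap(0, 1, 0))
```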
\[cor3\] For $e_1, e_2, e_3\in\{\hs-1, 1\hs\}$ and $a,b,c\in\ro$, a polynomial map $\tau\in\mathcal{A}_4$ given by $t\mapsto(\hs e_1\hf t^2+a\hf t\hs,\hs e_2\hf t^3+b\hf t,\hs e_3\hf t^4+c\hf t\hs)$ is an embedding if and only if $3 a^2+4 e_2 b>0$ or $e_1 a^3+2 e_1 e_2\hp a\hp b+ e_3 c\neq0$.
It is easy to note that $\tau$ is an embedding $\Leftrightarrow$ $\phi\in\mathcal{A}_4$ given by $$t\mapsto\left(\hs e_1(e_1\hf t^2+a\hf t)\hs,\hs e_2(e_2\hf t^3+b\hf t),\hs e_3(e_3\hf t^4+c\hf t)\hs\right)$$ is an embedding. The map $\phi$ can be written as $t\mapsto\left(\hs t^2+e_1\hp a\hf t,\hf t^3+e_2\hp b\hf t,\hf t^4+e_3\hp c\hf t\hs\right)$. Thus, using proposition \[thm13\], one can say that $\phi$ is an embedding $\Leftrightarrow$ $3 a^2+4 e_2 b>0$ or $e_1 a^3+2 e_1 e_2\hp a\hp b+ e_3 c\neq0$.
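The sign-twisted criterion of the corollary is equally direct to state in code; the helper below (its name and defaults are ours) illustrates how flipping $e_2$ alone can change the verdict:

```python
def is_embedding_e(a, b, c, e1=1, e2=1, e3=1):
    """Corollary's criterion: t -> (e1 t^2 + a t, e2 t^3 + b t, e3 t^4 + c t)
    is an embedding iff 3 a^2 + 4 e2 b > 0 or
    e1 a^3 + 2 e1 e2 a b + e3 c != 0."""
    return 3*a**2 + 4*e2*b > 0 or e1*a**3 + 2*e1*e2*a*b + e3*c != 0

# (a, b, c) = (0, -1, 0) fails for e2 = 1 but passes for e2 = -1,
# since only the product e2*b enters the first inequality
print(is_embedding_e(0, -1, 0, e2=1), is_embedding_e(0, -1, 0, e2=-1))
# → False True
```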
\[thm14\] For $e_1, e_2, e_3\in\{\hs-1, 1\hs\}$ and $a,b,c\in\ro$, a polynomial knot $\varphi\in\pfrt$ given by $t\mapsto(\hs e_1\hf t^2+a\hf t,\hs e_2\hf t^3+b\hf t,\hs e_3\hf t^4+c\hf t\hs)$ is path equivalent (in $\pfr$) to at least one of the polynomial knots $t\mapsto(\hs0,\hs e_2\hf t,\hs e_3\hf t^2\hs)$ and $t\mapsto(\hs0,\hs -e_2\hf t,\hs -2\hf e_3\hf t^2\hs)$.
By corollary \[cor3\], for $\varphi\in\pfr$ given by $t\mapsto(\hs e_1\hf t^2+a\hf t,\hs e_2\hf t^3+b\hf t,\hs e_3\hf t^4+c\hf t\hs)$, we have $3 a^2+4 e_2 b>0$ or $e_1\hf a^3+2\hf e_1\hf e_2\hf a\hf b+e_3\hf c\neq0$. We now consider the following two cases:
$i)$ If $3 a^2+4 e_2 b>0$: Let $\Phi:\zo\rightarrow\mathcal{A}_4$ be a map which is given by $\Phi(s)=\Phi_s$ for $s\in\zo$, where$$\Phi_s(t)=\left(\hs (1-s)(e_1\hf t^2+a\hf t),\hs(1-s)(e_2\hf t^3+b\hf t)+e_2\hf s\hs t,\hs (1-s)(e_3\hf t^4+c\hf t)+e_3\hs s\hs t^2\hs\right)$$for $t\in\ro$. For $s\in\ozo$, the polynomial map $\Phi_s$ is an embedding $\Leftrightarrow$ the polynomial map $\frac{1}{1-s}\Phi_s\in\mathcal{A}_4$ given by $$t\mapsto\bigg(\hs e_1\hf t^2+a\hf t,\hs e_2\hf t^3+\Big(b+\dfrac{e_2\hf s}{1-s}\Big)t,\hs e_3\hf t^4+\dfrac{e_3\hf s}{1-s}\hf t^2+c\hf t\hs\bigg)$$ is an embedding $\Leftrightarrow$ the polynomial map $\Psi_s\in\mathcal{A}_4$ given by $$t\mapsto\bigg(\hs e_1\hf t^2+a\hf t,\hs e_2\hf t^3+\Big(b+\dfrac{e_2\hf s}{1-s}\Big)t,\hs e_3\hf t^4+\Big( c-\dfrac{e_1\hf e_3\hf a\hf s}{1-s}\Big) t\hs\bigg)$$ is an embedding. For $s\in\ozo$, since $3\hf a^2+4\hf e_2\big(b+\frac{e_2\hf s}{1-s}\big)=3\hf a^2+4\hf e_2\hf b+\frac{4\hf s}{1-s}>0$, so by corollary \[cor3\], the map $\Psi_s$ is an embedding and hence so is the map $\Phi_s$. Note that $\Phi_0=\varphi$ and $\Phi_1=\tau$, where $\tau$ is given by $t\mapsto(\hs0,\hs e_2\hf t,\hs e_3\hf t^2\hs)$. Thus, there is a path in $\pfr$ from $\varphi$ to $\tau$.
$ii)$ If $e_1\hf a^3+2\hf e_1\hf e_2\hf a\hf b+e_3\hf c\neq0$: Let $\varUpsilon:\zo\rightarrow\mathcal{A}_4$ be a map which is given by $\varUpsilon(s)=\varUpsilon_s$ for $s\in\zo$, where $$\varUpsilon_s(t)=\left((1-s)(e_1\hf t^2+a\hf t),\hf(1-s)(e_2\hf t^3+b\hf t)-e_2\hf s\hf t,\hf (1-s)(e_3\hf t^4+c\hf t)-2\hf e_3\hf s\hf t^2\right)$$ for $t\in\ro$. For $s\in\ozo$, the polynomial map $\varUpsilon_s$ is an embedding $\Leftrightarrow$ the polynomial map $\frac{1}{1-s}\varUpsilon_s\in\mathcal{A}_4$ given by $$t\mapsto\bigg(\hs e_1\hf t^2+a\hf t,\hs e_2\hf t^3+\Big(b-\dfrac{e_2\hf s}{1-s}\Big)t,\hs e_3\hf t^4-\dfrac{2\hf e_3\hf s}{1-s}\hf t^2+c\hf t\hs\bigg)$$ is an embedding $\Leftrightarrow$ the polynomial map $\varGamma_s\in\mathcal{A}_4$ given by $$t\mapsto\bigg(\hs e_1\hf t^2+a\hf t,\hs e_2\hf t^3+\Big(b-\dfrac{e_2\hf s}{1-s}\Big)t,\hs e_3\hf t^4+\Big( c+\dfrac{2\hf e_1\hf e_3\hf a\hf s}{1-s}\Big) t\hs\bigg)$$ is an embedding. For $s\in\ozo$, since $$e_1\hf a^3+2\hf e_1\hf e_2\hf a\Big(b-\dfrac{e_2\hf s}{1-s}\Big)+e_3\Big( c+\dfrac{2\hf e_1\hf e_3\hf a\hf s}{1-s}\Big) =e_1\hf a^3+2\hf e_1\hf e_2\hf a\hf b+e_3\hf c\neq0\hp,$$ so by corollary \[cor3\], the map $\varGamma_s$ is an embedding and hence so is the map $\varUpsilon_s$. Note that $\varUpsilon_0=\varphi$ and $\varUpsilon_1=\sigma$, where $\sigma$ is given by $t\mapsto(\hs0,\hs -e_2\hf t,\hs -2\hf e_3\hf t^2\hs)$. This shows that the map $\varUpsilon$ is a path in $\pfr$ from $\varphi$ to $\sigma$.
\[thm15\] For any $\phi\in\pfrt$, there exist $e_1, e_2, e_3\in\{\hs-1, 1\hs\}$ and $a,b,c\in\ro$ such that a polynomial map $\psi$ given by $t\mapsto(\hs e_1\hf t^2+a\hf t\hs,\hs e_2\hf t^3+b\hf t,\hs e_3\hf t^4+c\hf t\hs)$ is an embedding which is path equivalent (in $\pfrt$) to the polynomial knot $\phi$.
We prove this proposition in the following steps:
$i)$ Let $\phi\in\pfrt$ be a polynomial knot and let it be given by $$t\mapsto\big(\hs a_2 t^2+a_1t+a_0,\hs b_3t^3+b_2t^2+b_1t+b_0,\hs c_4t^4+c_3t^3+c_2t^2+c_1t+c_0\hs\big).$$ We take a map $\Phi:\zo\rightarrow\mathcal{A}_4$ which is given by $\Phi(s)=\Phi_s$ for $s\in\zo$, where $$\Phi_s(t)=\big(\hs a_2 t^2+a_1t+(1-s)\hf a_0,\hs b_3t^3+b_2t^2+b_1t+(1-s)\hf b_0,\hs c_4t^4+c_3t^3+c_2t^2+c_1t+(1-s)\hf c_0\hs\big)$$ for $t\in\ro$. It is easy to check that $\Phi_s\in\pfrt$ for all $s\in\zo$. Let $\tau=\fgh$, where $f(t)=a_2 t^2+a_1t,\hs g(t)=b_3t^3+b_2t^2+b_1t$ and $h(t)=c_4t^4+c_3t^3+c_2t^2+c_1t$ for $t\in\ro$. Clearly $\Phi_0=\phi$ and $\Phi_1=\tau$. This gives a path in $\pfrt$ from $\phi$ to $\tau$.
$ii)$ Note that $a_2, b_3$ and $c_4$ all are nonzero. Let $\Psi:\zo\rightarrow\mathcal{A}_4$ be a map which is given by $\Psi(s)=\Psi_s$ for $s\in\zo$, where $$\Psi_s(t)=\Big( \hs f(t),\hs g(t)-\frac{b_2}{a_2}s\hf f(t),\hs h(t)+\frac{b_2c_3-b_3c_2}{a_2b_3}s\hf f(t)-\frac{c_3}{b_3}s\hf g(t)\hs\Big)$$ for $t\in\ro$. Note that $\Psi_s\in\pfrt$ for all $s\in\zo$. Let $\sigma=(\hp f, g_1, h_1\hp)$, where $$\begin{aligned}
g_1(t)&=b_3t^3+b_{11}t=b_3t^3+\Big(b_1-\frac{a_1\hf b_2}{a_2}\Big)\hp t\hw \mbox{and}\\
h_1(t)&=c_4t^4+c_{11}t=c_4t^4+\Big(c_1+\frac{a_1b_2c_3-a_1b_3c_2}{a_2b_3}-\frac{b_1c_3}{b_3}\Big)\hp t\end{aligned}$$ for $t\in\ro$. It is easy to check that $\Psi_0=\tau$ and $\Psi_1=\sigma$. So we have a path in $\pfrt$ from $\tau$ to $\sigma$.
$iii)$ Let $\varUpsilon:\zo\rightarrow\mathcal{A}_4$ be given by $\varUpsilon(s)=\varUpsilon_s$ for $s\in\zo$, where $$\varUpsilon_s(t)=\bigg(\hs\Big( 1-s+\dfrac{s}{|a_2|}\Big)f(t),\hs \Big( 1-s+\dfrac{s}{|b_3|}\Big) g_1(t),\hs\Big( 1-s+\dfrac{s}{|c_4|}\Big) h_1(t)\hs\bigg)$$ for $t\in\ro$. For $s\in\zo$, since $1-s+\frac{s}{|a_2|}>0,\ho 1-s+\frac{s}{|b_3|}>0$ and $1-s+\frac{s}{|c_4|}>0$, so $\varUpsilon_s$ is a polynomial knot in $\pfrt$. Let $\psi=(\hp f_2, g_2, h_2\hp)$, where $$\begin{aligned}
f_2(t)&=& e_1\hf t^2+a\hf t=\dfrac{a_2t^2+a_1t}{|a_2|}\,,\\
g_2(t)&=& e_2\hf t^3+b\hf t=\dfrac{b_3t^3+b_{11}t}{|b_3|}\hw \mbox{and}\\
h_2(t)&=& e_3\hf t^4+c\hf t=\dfrac{c_4t^4+c_{11}t}{|c_4|}\end{aligned}$$ for $t\in\ro$. It is easy to note that $\varUpsilon_0=\sigma$ and $\varUpsilon_1=\psi$. This shows that the map $\varUpsilon$ is a path in $\pfrt$ from $\sigma$ to $\psi$.
The following corollary follows trivially from propositions \[thm14\] and \[thm15\].
\[cor4\] Any polynomial knot $\phi\in\pfrt$ is path equivalent (in $\pfr$) to a polynomial knot $\psi\in\pfr$ having degree sequence $(\hp0,1,2\hp)$.
\[thm16\] The space $\pfr$ is path connected.
Consider the following sets:
$V_1=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp0,1,2\hp)\hf\}$,
$V_2=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp0,1,3\hp)\hf\}$,
$V_3=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp0,1,4\hp)\hf\}$,
$V_4=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp0,2,3\hp)\hf\}\cap\pfr$,
$V_5=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp0,2,4\hp)\hf\}\cap\pfr$,
$V_6=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp0,3,4\hp)\hf\}\cap\pfr$,
$V_7=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp1,2,3\hp)\hf\}$,
$V_8=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp1,2,4\hp)\hf\}$,
$V_9=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp1,3,4\hp)\hf\}$ and
$V_{10}=\{\hf\phi\in\mathcal{A}_4\mid\phi$ has degree sequence $(\hp2,3,4\hp)\hf\}\cap\pfr=\pfrt$.
Note that these sets are pairwise disjoint and their union is exactly equal to $\pfr$. Using an argument similar to the one used in the proof of theorem \[thm12\], one can show that there is a path in $\pfr$ from an arbitrary element of $V_1$ to an arbitrary element of $V_9$. Also, by an argument similar to the one used in the second part of the proof of theorem \[thm12\], it is easy to produce a path from an arbitrary element of $\bigcup_{i=2}^8 V_i$ to an element of $V_9$. Using corollary \[cor4\], one has a path from an arbitrary element of $V_{10}$ to an element of $V_1$. This satisfies both the assumptions of lemma \[thm11\] and hence $\pfr$ is path connected.
We have proved that the spaces $\pt$ and $\pfr$ are path connected, so in general, we would like to conjecture the following:
\[coj2\] The space $\pd$, for $d\geq3$, is path connected.
\[thm18\] For any fixed element $e=(e_1,e_2,e_3)$ in $\{-1,1\}^3$, the space $\mathcal{N}_{4e}=\big\{\hs \phi\in\mathcal{\tilde{P}}_4\mid\; t\xrightarrow{\phi}(\hp e_1t^2+a_\phi\hp t,e_2t^3+b_\phi\hp t,e_3t^4+c_\phi\hp t\hp)$ for some $a_\phi,b_\phi,c_\phi\in\ro\hs\big\}$\
is path connected.
Let $e=(e_1,e_2,e_3)$ be an arbitrary element of the set $\{-1,1\}^3$. By corollary \[cor3\], an element $\phi\in\mathcal{A}_4$ given by $t\mapsto(\hp e_1t^2+a_\phi\hp t,e_2t^3+b_\phi\hp t,e_3t^4+c_\phi\hp t\hp)$ is an embedding if and only if $3a_\phi^2+4e_2b_\phi>0$ or $e_1a_\phi^3+2e_1e_2a_\phi b_\phi+e_3c_\phi\neq0$. Therefore, it is easy to see that the set $\mathcal{N}_{4e}$ is the union of the following sets:
$\hskip20mm \mathcal{N}_{4e}^1=\big\{\hf\phi\in\mathcal{N}_{4e}\mid 3a_\phi^2+4e_2b_\phi>0\hf\big\}$,
$\hskip20mm\mathcal{N}_{4e}^2=\big\{\hf\phi\in\mathcal{N}_{4e}\mid e_1a_\phi^3+2e_1e_2a_\phi b_\phi+e_3c_\phi>0\hf\big\}$ and
$\hskip20mm\mathcal{N}_{4e}^3=\big\{\hf\phi\in\mathcal{N}_{4e}\mid e_1a_\phi^3+2e_1e_2a_\phi b_\phi+e_3c_\phi<0\hf\big\}$.
We show that every element of the sets $\mathcal{N}_{4e}^1$, $\mathcal{N}_{4e}^2$ and $\mathcal{N}_{4e}^3$ is connected by a path in $\mathcal{N}_{4e}$ to a fixed element $\varphi_0\in\mathcal{N}_{4e}^1$ given by $t\mapsto\big(\hp e_1t^2,e_2t^3+e_2t,e_3t^4\hp\big)$.
$i)$ Let $\varphi$ be an arbitrary element of the space $\mathcal{N}_{4e}^1$. Let $F:\zo\to\mathcal{A}_4$ be the map given by $F(s)=F_s$ for $s\in\zo$, where $$F_s(t)=\big(\hp e_1t^2+a_\varphi s\hp t,\,e_2t^3+(b_\varphi s^2+ e_2-e_2s^2)t,\,e_3t^4+c_\varphi s\hp t\hp\big)$$ for $t\in\ro$. For $s\in\ozo$, since $3(a_\varphi s)^2+4e_2(b_\varphi s^2+ e_2-e_2s^2)=s^2(3a_\varphi^2+4e_2b_\varphi)+4(1-s^2)>0$, so the map $F_s$ is an element of the space $\mathcal{N}_{4e}^1$. Also, we have $F_0=\varphi_0$ and $F_1=\varphi$. This shows that $F$ is a path in $\mathcal{N}_{4e}^1$ from $\varphi_0$ to $\varphi$.
$ii)$ Let $\psi$ be an arbitrary element of the space $\mathcal{N}_{4e}^2$. We choose an element $\psi_0\in \mathcal{N}_{4e}^2$ given by $t\mapsto\big(\hp e_1t^2,e_2t^3,e_3t^4+e_3t\hp\big)$. Let a map $G:\zo\to\mathcal{A}_4$ be given by $G(s)=G_s$ for $s\in\zo$, where $$G_s(t)=\big(\hp e_1t^2+a_\psi s\hp t,\,e_2t^3+b_\psi s^2\hp t,\,e_3t^4+(c_\psi\hp s^3+ e_3-e_3s^3)\hp t\hp\big)$$ for $t\in\ro$. For $s\in\ozo$, the map $G_s$ is an element of the space $\mathcal{N}_{4e}^2$, because $e_1(a_\psi s)^3+2e_1e_2(a_\psi s)(b_\psi s^2)+e_3(c_\psi s^3+e_3-e_3s^3)=s^3(e_1a_\psi^3+2e_1e_2a_\psi b_\psi+e_3c_\psi)+1-s^3>0$. Also, since $G_0=\psi_0$ and $G_1=\psi$, so the map $G$ is a path in $\mathcal{N}_{4e}^2$ from $\psi_0$ to $\psi$. Now take a map $\varPhi:\zo\to\mathcal{A}_4$ given by $\varPhi(s)=\varPhi_s$ for $s\in\zo$, where $$\varPhi_s(t)=\big(\hp e_1t^2,e_2t^3+e_2(1-s)t,e_3t^4+e_3 s\hp t\hp\big)$$ for $t\in\ro$. For $s\in\ozo$, the map $\varPhi_s$ is an element of the space $\mathcal{N}_{4e}^1$. Note that $\varPhi_0=\varphi_0$ and $\varPhi_1=\psi_0$. This shows that $\varPhi$ is a path in $\mathcal{N}_{4e}$ from $\varphi_0$ to $\psi_0$. The paths $\varPhi$ and $G$ together give a path in $\mathcal{N}_{4e}$ from $\varphi_0$ to $\psi$.
$iii)$ Let $\sigma$ be an arbitrary element of the space $\mathcal{N}_{4e}^3$. Take an element $\sigma_0\in \mathcal{N}_{4e}^3$ given by $t\mapsto\big(\hp e_1t^2,e_2t^3,e_3t^4-e_3t\hp\big)$. Let $H:\zo\to\mathcal{A}_4$ be a map which is given by $H(s)=H_s$ for $s\in\zo$, where $$H_s(t)=\big(\hp e_1t^2+a_\sigma s\hp t,\,e_2t^3+b_\sigma s^2\hp t,\,e_3t^4+(c_\sigma s^3-e_3+e_3s^3) t\hp\big)$$ for $t\in\ro$. For $s\in\ozo$, since $e_1(a_\sigma s)^3+2e_1e_2(a_\sigma s)(b_\sigma s^2)+e_3(c_\sigma s^3-e_3+e_3s^3)=s^3(e_1a_\sigma^3+2e_1e_2a_\sigma b_\sigma+e_3c_\sigma)-1+s^3<0$, so the map $H_s$ is an element of the space $\mathcal{N}_{4e}^3$. Also $H_0=\sigma_0$ and $H_1=\sigma$, so $H$ is a path in $\mathcal{N}_{4e}^3$ from $\sigma_0$ to $\sigma$. Let us take a map $\varPsi:\zo\to\mathcal{A}_4$ given by $\varPsi(s)=\varPsi_s$ for $s\in\zo$, where $$\varPsi_s(t)=\big(\hp e_1t^2,e_2t^3+e_2(1-s)t,e_3t^4-e_3 s\hp t\hp\big)$$ for $t\in\ro$. It is easy to see that $\varPsi_0=\varphi_0$ and $\varPsi_1=\sigma_0$. For $s\in\ozo$, the map $\varPsi_s$ is an element of the space $\mathcal{N}_{4e}^1$. This shows that $\varPsi$ is a path in $\mathcal{N}_{4e}$ from $\varphi_0$ to $\sigma_0$. The paths $\varPsi$ and $H$ together give a path in $\mathcal{N}_{4e}$ from $\varphi_0$ to $\sigma$.
\[thm19\] The space $\pfrt$ has eight path components.
For an element $e=(e_1,e_2,e_3)$ in $\{-1,1\}^3$, we consider the following set:
$\mathcal{\tilde{P}}_{4e}=\big\{\hs \tau\in\mathcal{\tilde{P}}_4\mid\,e_1a_{2\tau}>0$, $e_2b_{3\tau}>0$ and $e_3c_{4\tau}>0\hs\big\}$.
Note that $\mathcal{\tilde{P}}_4=\bigcup_e\mathcal{\tilde{P}}_{4e}$. Also, for $e\neq e'$, there is no path in $\mathcal{\tilde{P}}_4$ from an element of the set $\mathcal{\tilde{P}}_{4e}$ to an element of the set $\mathcal{\tilde{P}}_{4e'}$. Now it is enough to prove that for each $e\in\{-1,1\}^3$, the space $\mathcal{\tilde{P}}_{4e}$ is path connected.
Let $e=(e_1,e_2,e_3)$ be an arbitrary element in the set $\{-1,1\}^3$. From the proof of proposition \[thm15\], one can see that every element in $\mathcal{\tilde{P}}_{4e}$ is connected by a path in $\mathcal{\tilde{P}}_{4e}$ to some element in $\mathcal{N}_{4e}$. Since, by proposition \[thm18\], the space $\mathcal{N}_{4e}$ is path connected, the space $\mathcal{\tilde{P}}_{4e}$ is also path connected.
The space $\pft$ {#sec3.3}
-----------------
From proposition \[thm5\], we can realize only the unknot $0_1$, the left hand trefoil $3_1$ and the right hand trefoil $3_1^*$ in degree $5$. In fact, Shastri [@ars] gave a realization of the trefoil knot in degree $5$.
A [*Mathematica*]{} plot of Shastri's trefoil $t\mapsto\big(\hp t^3-3\hp t,\hp t^4-4\hp t^2,\hp t^5-10\hp t\hp\big)$ and its mirror image $t\mapsto\big(\hp t^3-3\hp t,\hp t^4-4\hp t^2,\hp -(t^5-10\hp t)\hp\big)$ is shown in the figure below:
![Representations of $3_1$ and $3_1^*$](trefoilknots.png)
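For readers without [*Mathematica*]{}, the same two curves are easy to sample in Python; the parameter window below is our choice, and any window containing all three crossings works:

```python
import numpy as np

def shastri_trefoil(t, mirror=False):
    """Shastri's degree-5 trefoil t -> (t^3 - 3t, t^4 - 4t^2, t^5 - 10t);
    negating the last coordinate gives its mirror image."""
    t = np.asarray(t, dtype=float)
    z = t**5 - 10*t
    return np.stack([t**3 - 3*t, t**4 - 4*t**2, -z if mirror else z], axis=-1)

ts = np.linspace(-2.5, 2.5, 600)           # window containing all crossings
curve = shastri_trefoil(ts)                # 600 x 3 points on 3_1
mirror = shastri_trefoil(ts, mirror=True)  # points on the mirror 3_1^*
print(shastri_trefoil(1.0))                # → [-2. -3. -9.]
```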
The polynomial degree of the trefoil knot is $5$. The degrees $3$ and $4$ of the first and second components are minimal in the sense that there is no polynomial representation of the trefoil knot belonging to the space $\pf\setminus\pft$. By corollary \[cor2\], the right hand trefoil and the left hand trefoil each corresponds to at least $4$ path components of the space $\pft$. For example, the knots\
[$t\mapsto\big(\hp t^3-3\hp t,\hp t^4-4\hp t^2,\hp t^5-10\hp t\hp\big)$, $t\mapsto\big(-(t^3-3\hp t),\hp-(t^4-4\hp t^2),\hp t^5-10\hp t\hp\big)$,\
$t\mapsto\big(-(t^3-3\hp t),\hp t^4-4\hp t^2,\hp-(t^5-10\hp t)\hp\big)$ and $t\mapsto\big(\hp t^3-3\hp t,\hp-(t^4-4\hp t^2),\hp-(t^5-10\hp t)\hp\big)$]{}\
represent the same trefoil, but they lie in different path components of the space $\pft$. Also, by the same corollary, the unknot corresponds to at least $8$ path components of the space $\pft$. Thus, the space $\pft$ has at least $16$ path components corresponding to the knots $0_1, 3_1$ and $3_1^*$. We summarize the details in the following table:
Sr. No. Knot type Polynomial degree of a knot type Number of path components of $\pft$ corresponding to a knot type
----------- ------------- ---------------------------------- ------------------------------------------------------------------
$1.$ $0_1$ 1 at least 8
$2.$ $3_1$ 5 at least 4
$3.$ $3_1^*$ 5 at least 4
at least 16
The space $\pst$ {#sec3.4}
-----------------
The knots which have a polynomial representation in degree $5$ naturally have a representation in degree $6$ as well. By proposition \[thm2\], for a knot $\kappa$ having a polynomial representation in degree $6$, the minimal crossing number $c[\kappa]$ must be less than or equal to $6$. Since the knots $5_1, 5_1^*, 3_1\#3_1, 3_1^*\#3_1^*$ and $3_1\#3_1^*$ are $4$-superbridge, they cannot be represented in degree $6$ (see proposition \[thm2\]). Also, by theorem \[thm7\], it is almost impossible to represent the knots $5_2$ and $5_2^*$ in the space $\pst$. The same is true for the knots $6_1, 6_1^*, 6_2, 6_2^*$ and $6_3$. However, we can represent the figure-eight knot in the space $\pst$. In fact, we have a polynomial representation $t\mapsto\fght$ of the figure-eight knot ($4_1$) with degree sequence $(4, 5, 6)$, where[$$\begin{aligned}
f(t)&=& (-4.8 + t)\ho (-0.3 + t)\ho (3.6 + t)\ho (10 + t)\hp,\\
g(t)&=& (-4.8 + t)\ho (-3.3 + t)\ho (-0.3 + t)\ho (2.3 + t)\ho (4.6 + t)\hw\mbox{and}\\
h(t)&=& 0.5\ho t\ho (-0.19 + t)\ho (21.22 - 9.19\ho t + t^2)\ho (17.78 + 8.42\ho t + t^2)\hp.\end{aligned}$$]{} A [*Mathematica*]{} plot of this representation is shown in the following figure:
\[fig3\]
![Representation of $4_1$](4_1.png)
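The degree sequence $(4,5,6)$ and the regularity of this parametrization can be verified numerically; the sketch below (sampling window ours) expands the components from the factored forms given above:

```python
import numpy as np

# component polynomials of the figure-eight representation, expanded from
# their factored forms (np.poly builds a monic polynomial from its roots)
f = np.poly([4.8, 0.3, -3.6, -10.0])
g = np.poly([4.8, 3.3, 0.3, -2.3, -4.6])
h = 0.5 * np.convolve(np.poly([0.0, 0.19]),
                      np.convolve([1, -9.19, 21.22], [1, 8.42, 17.78]))

print([len(p) - 1 for p in (f, g, h)])   # degree sequence: [4, 5, 6]

# regularity check on a window containing all the roots: the squared
# speed |phi'(t)|^2 never vanishes on the sampled grid
fp, gp, hp = (np.polyder(p) for p in (f, g, h))
ts = np.linspace(-12.0, 7.0, 4001)
speed2 = sum(np.polyval(p, ts)**2 for p in (fp, gp, hp))
print(speed2.min() > 0)                  # True
```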
By proposition \[thm5\], it follows that the polynomial degree of the figure-eight knot is $6$. Note that in this polynomial representation, the degrees $4$ and $5$ of the first and second components are minimal in the sense that there is no polynomial representation of the figure-eight knot belonging to the space $\ps\setminus\pst$. By corollary \[cor2\], the space $\pst$ has at least $8$ path components corresponding to the figure-eight knot. Also, the knots $0_1, 3_1$ and $3_1^*$ can be realized in $\pst$ (since they have representations in $\pft$). The unknot $0_1$ corresponds to at least $8$ path components of the space $\pst$. The right hand trefoil $3_1$ and the left hand trefoil $3_1^*$ each corresponds to at least $4$ path components of the space $\pst$. Hence the space $\pst$ has at least $24$ path components. We summarize the details in the table below:
Sr. No. Knot type Polynomial degree of a knot type Number of path components of $\pst$ corresponding to a knot type
----------- ------------- ---------------------------------- ------------------------------------------------------------------
$1.$ $0_1$ 1 at least 8
$2.$ $3_1$ 5 at least 4
$3.$ $3_1^*$ 5 at least 4
$4.$ $4_1$ 6 at least 8
at least 24
The space $\psvt$ {#sec3.5}
------------------
By proposition \[thm2\], for a knot having a polynomial representation in degree $7$, the minimal crossing number must be less than or equal to $10$. In fact, we have produced polynomial representations of the knots $5_1, 5_1^*, 5_2, 5_2^*, 6_1, 6_1^*, 6_2, 6_2^*, 6_3,$ $3_1\#3_1, 3_1^*\#3_1^*, 3_1\#3_1^*, 8_{19}$ and $8_{19}^*$ in the space $\psvt$.

1\) A polynomial representation $t\mapsto\uvwt$ of the knot $5_1$ with degree sequence $(5, 6, 7)$ is given by[ $$\begin{aligned}
u(t)&=& 0.00001\,t^5 + 4\ho (-24.01 + t^2)\ho (-4 + t^2)\hp,\\
v(t)&=& 0.00001\,t^6 + t\ho (-30.25 + t^2)\ho (-12.25 + t^2)\hw\mbox{and}\\
w(t)&=& -\ho 0.1\ho t\ho (-26.8328 + t^2)\ho (-13.6702 + t^2)\ho (0.1135 + t^2)\end{aligned}$$]{}
![Representation of $5_1$](5_1.png)
2\) A polynomial representation $t\mapsto\xyzt$ of the knot $5_2$ with degree sequence $(5, 6, 7)$ is given by[ $$\begin{aligned}
x(t)&=& 0.00001\,t^5 + 20\ho(-17 + t)\ho(-10 + t)\ho(15 + t)\ho(21 + t)\hp,\\
y(t)&=& 0.00001\,t^6 + t\ho (-400 + t^2)\ho (-121 + t^2)\hw\mbox{and}\\
z(t)&=& -\ho 0.005\ho t\ho (-20.1133216 + t)\ho (0.0107598 - 0.0343124 \ho t + t^2)\\
& & (12.2430449 + t)\ho (20.5785825 + t)\ho (-14.260128 + t)\hp.\end{aligned}$$]{}
![Representation of $5_2$](5_2.png)
3\) A polynomial representation $t\mapsto\fght$ of the knot $6_1$ with degree sequence $(5, 6, 7)$ is given by[ $$\begin{aligned}
f(t)&=& 60\ho (-43.4 + t)\ho (-28 + t)\ho (5 + t)\ho (31.4 + t)\ho (47.6 + t)\hp,\\
g(t)&=& (-49 + t)\ho (-38 + t)\ho (-8 + t)\ho (-6 + t)\ho (28 + t)\ho (43.6 + t)\hw\mbox{and}\\
h(t)&=& -\ho 0.07\ho (-45.995024874 + t)\ho (5.231021635 + t) \ho (758.763745443 - 54.4650519227\ho t + t^2)\\ & & (19.036560084 + t)\ho (2059.948386689 + 90.4819595699\ho t + t^2)\hp.\end{aligned}$$]{}
![Representation of $6_1$](6_1.png)
4\) A polynomial representation $t\mapsto\uvwt$ of the knot $6_2$ with degree sequence $(5, 6, 7)$ is given by[ $$\begin{aligned}
u(t)&=& 4\ho (-39 + t)\ho (-5 + t)\ho (35 + t)\ho (-625 + t^2)\hp,\\
v(t)&=& 0.1\ho (-39 + t)\ho (-30 + t)\ho (-10 + t)\ho (20 + t)\ho (25 + t)\ho (41 + t)\hw\mbox{and}\\
w(t)&=& 0.005\ho t\ho (-39.8753791 + t)\ho (-27.4156408 + t)\ho (28.436878 + t)\\ && (37.25572585 + t)\ho (0.002423881 - 0.005429486\ho t + t^2)\hp.\end{aligned}$$]{}
![Representation of $6_2$](6_2.png)
5\) A polynomial representation $t\mapsto\xyzt$ of the knot $6_3$ with degree sequence $(5, 6, 7)$ is given by[ $$\begin{aligned}
x(t)&=& 15\ho (-29 + t)\ho (-20 + t)\ho (10 + t)\ho (30 + t)^2\hp,\\
y(t)&=& (-32 + t)\ho (-6 + t)\ho (4 + t)\ho (30 + t)\ho (-400 + t^2)\hw\mbox{and}\\
z(t)&=& -\ho 0.06\ho (376.737563885 - 37.8892469397\ho t + t^2)\ho (144.275534095 + 21.404400212\ho t + t^2)\\ & & (-33.329044815 + t)\ho (955.985733648 + 61.56649851\ho t + t^2)\hp. \end{aligned}$$]{}
![Representation of $6_3$](6_3.png)
6\) A polynomial representation $t\mapsto\fght$ of $3_1\#3_1$ knot with degree sequence $(5, 6, 7)$ is given by[ $$\begin{aligned}
f(t)&=& 5\ho t\ho (77.3 - 17.5\ho t + t^2)\ho (77.3 + 17.5\ho t + t^2)\hp,\\
g(t)&=& (-102.01 + t^2)\ho (-53.29 + t^2)\ho (-4.84 + t^2)\hw\mbox{and}\\
h(t)&=& -\ho 0.15\ho t\ho (-99.695462027 + t^2)\ho (-68.11720396 + t^2)\ho (0.025367747 + t^2)\hp.\end{aligned}$$]{}
![Representation of $3_1\#3_1$](3_1+3_1.png)
7\) A polynomial representation $t\mapsto\uvwt$ of $3_1\#3_1^*$ knot with degree sequence $(5, 6, 7)$ is given by[$$\begin{aligned}
u(t)&=& 30\ho (-32.5 + t)\ho (-21.3 + t)\ho (-3.3 + t)\ho (16.2 + t)\ho (28 + t)\hp,\\
v(t)&=& (-34 + t)\ho (-23 + t)\ho (-6.8 + t)\ho (12 + t)\ho (21.7 + t)\ho (33.1 + t)\hw\mbox{and}\\
w(t)&=& -\ho 0.03\ho t\ho (-32.807367 + t)\ho (-24.209735 + t)\ho (15.257278 + t)\\ & & (28.289226 + t)\ho (0.0043718 - 0.0082068\ho t + t^2)\hp.\end{aligned}$$]{}
![Representation of $3_1\#3_1^*$](3_1-3_1.png)
8\) A polynomial representation $t\mapsto\xyzt$ of the knot $8_{19}$ with degree sequence $(5, 6, 7)$ is given by[$$\begin{aligned}
x(t)&=& t^5 - 5.5\ho t^3 + 4.5\ho t\hp,\\
y(t)&=& t^6 - 7.35\ho t^4 + 14\ho t^2\hw\mbox{and}\\
z(t)&=& t^7 - 8.13297\ho t^5 + 18.5762\ho t^3 - 10.4337\ho t\hp.\end{aligned}$$]{}
![Representation of $8_{19}$](8_19.png)
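As a quick consistency check on this $8_{19}$ representation, $x$ and $z$ contain only odd powers while $y$ contains only even ones, so the curve is invariant under the rotation $(x,y,z)\mapsto(-x,y,-z)$, matching the symmetry one expects of the $(3,4)$-torus knot $8_{19}$. A short numerical verification (sampling window ours):

```python
import numpy as np

# components of the 8_19 representation above
x = lambda t: t**5 - 5.5*t**3 + 4.5*t
y = lambda t: t**6 - 7.35*t**4 + 14*t**2
z = lambda t: t**7 - 8.13297*t**5 + 18.5762*t**3 - 10.4337*t

t = np.linspace(-2.4, 2.4, 500)
# x and z are odd functions and y is even, so t -> -t realizes the
# rotation (x, y, z) -> (-x, y, -z) as a symmetry of the curve
print(np.allclose(x(-t), -x(t)), np.allclose(y(-t), y(t)),
      np.allclose(z(-t), -z(t)))
# → True True True
```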
Each of the knots $5_1, 5_1^*, 3_1\#3_1, 3_1^*\#3_1^*, 3_1\#3_1^*,$ $8_{19}$ and $8_{19}^*$ is $4$-superbridge, so by proposition \[thm2\], none of them can be represented by a polynomial knot of degree less than $7$. In other words, each has polynomial degree $7$. However, the polynomial degree of the knots $5_2, 5_2^*, 6_1, 6_1^*, 6_2, 6_2^*$ and $6_3$ is either $6$ or $7$. For the polynomial representation of $8_{19}$, the degrees $5$ and $6$ of the first and second components are minimal in the sense that there is no representation of the knot $8_{19}$ belonging to the space $\psv\setminus\psvt$. The space $\psvt$ has at least $88$ path components corresponding to the knots $0_1, 3_1, 3_1^*,4_1, 5_1, 5_1^*, 5_2, 5_2^*,6_1, 6_1^*, 6_2, 6_2^*, 6_3, 3_1\#3_1, 3_1^*\#3_1^*, 3_1\#3_1^*, 8_{19}$ and $8_{19}^*$. A table estimating the number of path components of the space $\psvt$ is given below:
Sr. No. Knot type Polynomial degree of a knot type Number of path components of $\psvt$ corresponding to a knot type
----------- ---------------- ---------------------------------- -------------------------------------------------------------------
$1.$ $0_1$ 1 at least 8
$2.$ $3_1$ 5 at least 4
$3.$ $3_1^*$ 5 at least 4
$4.$ $4_1$ 6 at least 8
$5.$ $5_1$ 7 at least 4
$6.$ $5_1^*$ 7 at least 4
$7.$ $5_2$ 6 or 7 at least 4
$8.$ $5_2^*$ 6 or 7 at least 4
$9.$ $6_1$ 6 or 7 at least 4
$10.$ $6_1^*$ 6 or 7 at least 4
$11.$ $6_2$ 6 or 7 at least 4
$12.$ $6_2^*$ 6 or 7 at least 4
$13.$ $6_3$ 6 or 7 at least 8
$14.$ $3_1\#3_1$ 7 at least 4
$15.$ $3_1^*\#3_1^*$ 7 at least 4
$16.$ $3_1\#3_1^*$ 7 at least 8
$17.$ $8_{19}$ 7 at least 4
$18.$ $8_{19}^*$ 7 at least 4
at least 88
Conclusion {#sec4}
==========
We have seen that the space $\pfr$ is path connected, whereas the space $\pfrt$ has eight path components. Thus, it makes a difference whether we consider a space with fixed degrees of the component polynomials or with a flexible range of degrees. We also see that the space $\spk$ of all polynomial knots can be stratified in two different ways, namely $$\spk=\bigcup _{d\geq 1} K_d=\bigcup _{d\geq 2} O_d,$$ where $\od$ is the space of all polynomial knots $t\mapsto (f(t),g(t),h(t))$ with $\deg(f)\leq d-2$, $\deg(g)\leq d-1$ and $\deg(h)\leq d.$ We can endow $\spk$ with the inductive limit topology arising from either of these stratifications. It should be observed that the numbers of path components of the resulting spaces are different. For a fixed $d$, there are many more interesting spaces of polynomial knots obtained by imposing conditions on the degrees of the component polynomials, each carrying an interesting topology. One may try to find a suitable topology on $\spk$ such that its path components correspond precisely to knot types. More generally, one might study the spaces $\mathcal{K}_{p,q,r}$ of all polynomial knots $t\mapsto (f(t),g(t),h(t))$ with $\deg(f)=p$, $\deg(g)=q$ and $\deg(h)=r$, where $p,q$ and $r$ are given positive integers. We would like to explore whether the number of path components of $\mathcal{K}_{p,q,r}$ corresponding to a knot type is affected when $(p,q,r)$ is the minimal degree sequence for that knot type.
[99]{}
A. R. Shastri, Polynomial representations of knots, Tôhoku Math. J. (2), Vol. 44, No. 1 (1992), 11-17.
Alan Durfee and Donal O’Shea, Polynomial knots, http://arxiv.org/pdf/math/0612803v1.pdf
C. B. Jeon and G. T. Jin, A computation of superbridge index of knots, Journal of Knot Theory and Its Ramifications, Vol. 11, No. 3 (2002), 461-473.
C. B. Jeon and G. T. Jin, There are only finitely many 3-superbridge knots, Journal of Knot Theory and Its Ramifications, Vol. 10, No. 2 (2001), 331-343.
Colin Adams and 2007 SMALL Research Group, Superbridge number of knots, Preprint, 2007.
D. Pecker, Sur le théorème local de Harnack, C. R. Acad. Sci. Paris, Vol. 326, Series 1, 1998, 573-576.
Donovan McFeron, The minimal degree sequence of the polynomial figure eight knot (REU 2002).
Jacob Wagner, Geometric degree of $2$-bridge knots, http://library.williams.edu/theses/pdf.php?id=481
Nicolaas H. Kuiper, A new knot invariant, Mathematische Annalen, Vol. 278 (1987), 193-209.
Norbert A’Campo, Le groupe de monodromie du déploiement des singularités isolées de courbes planes I, Mathematische Annalen, Vol. 213 (1975), 1-32.
Norbert A’Campo, Singularities and related knots, University of Tokyo, Notes by William Gibson and Masaharu Ishikawa.
P. Madeti and R. Mishra, Minimal degree sequence for 2-bridge knots, Fundamenta Mathematicae, 190 (2006) 191-210.
P. Madeti and R. Mishra, Minimal degree sequence for torus knots of type $(p, 2p-1)$, Journal of Knot Theory and Its Ramifications, Vol. 15, No. 9 (2006), 1141-1151.
P. Madeti and R. Mishra, Minimal degree sequence for torus knots of type $(p, q)$, Journal of Knot Theory and Its Ramifications, Vol. 18, No. 4 (2009), 485-491.
P. Madeti and R. Mishra, Polynomial representation of long knots, International Journal of Mathematical Analysis, Vol. 3, No. 7 (2009), 325-337.
Peter Kim, Lee Stemkoski and C. Yuen, Polynomial knots of degree five, MIT Undergraduate Journal of Mathematics, Vol. 3 (2001).
R. Mishra, Minimal degree sequence for torus knots, Journal of Knot Theory and Its Ramifications, Vol. 9, No. 6 (2000), 759-769.
R. Mishra, Polynomial representation of strongly invertible knots and strongly negative amphicheiral knots, Osaka Journal of Mathematics, Vol. 43, No. 3 (2006), 625-639.
R. Mishra, Polynomial representation of torus knots of type [$(p,q)$]{}, Journal of Knot Theory and Its Ramifications, Vol. 8, No. 5 (1999), 667-700.
R. Shukla, On polynomial isotopy of knot-types, Proc. Indian Acad. Sci. (Math. Sci.), Vol. 104, No. 3 (1994), 543-548.
R. Shukla and A. Ranjan, On polynomial representation of torus knots, Journal of Knot Theory and Its Ramifications, Vol. 5, No. 2 (1996), 279-294.
Riccardo Benedetti and Jean-Jacques Risler, Real algebraic and semi-algebraic sets, Hermann Éditeurs Des Sciences Et Des Arts, 1990.
S. S. Abhyankar, On the semigroup of meromorphic curves, Proceedings of the International Symposium of Algebraic Geometry-Part I, Kyoto, 1977, 240-414.
V. A. Vassiliev, Cohomology of knot spaces, Theory of singularities and its applications (V. I. Arnold, ed.), Advances In Soviet Maths, Vol. 1, 1990, 23-69 (AMS, Providence, RI).
V. A. Vassiliev, On spaces of polynomial knots, Sbornik: Mathematics, Vol. 187, No. 2 (1996), 193-214.
[^1]: A tame polynomial automorphism is a composition of orientation preserving affine transformations and maps which add a multiple of a positive power of one row to another row.
---
abstract: 'In recent works by Isett [@Isett], and later by Buckmaster, De Lellis, Isett and Székelyhidi Jr. [@BDIS], iterative schemes were presented for constructing solutions belonging to the Hölder class $C^{\sfrac15-{\varepsilon}}$ of the 3D incompressible Euler equations which do not conserve the total kinetic energy. The cited work is partially motivated by a conjecture of Lars Onsager in 1949 relating to the existence of $C^{\sfrac13-{\varepsilon}}$ solutions to the Euler equations which dissipate energy. In this note we show how the latter scheme can be adapted in order to prove the existence of non-trivial Hölder continuous solutions which for almost every time belong to the critical Onsager Hölder regularity $C^{\sfrac13-{\varepsilon}}$ and have compact temporal support.'
address: 'Institut für Mathematik, Universität Leipzig, D-04103 Leipzig'
author:
- Tristan Buckmaster
nocite: '[@bds2]'
title: 'Onsager’s conjecture almost everywhere in time'
---
Introduction
============
In what follows ${\ensuremath{\mathbb{T}}}^3$ denotes the $3$-dimensional torus, i.e. ${\ensuremath{\mathbb{T}}}^3 = {\mathbb S}^1\times {\mathbb S}^1 \times
{\mathbb S}^1$. Formally, we say $(v,p)$ solves the *incompressible Euler equations* if $$\label{eulereq}
\left\{\begin{array}{l}
\partial_t v+{\ensuremath{\mathrm{div\,}}}v\otimes v +\nabla p =0\\
{\ensuremath{\mathrm{div\,}}}v = 0
\end{array}\right..$$
Suppose $v$ is such a solution, then we define its *kinetic energy* as $$E(t):=\frac{1}{2}\int_{{\ensuremath{\mathbb{T}}}^3} {\left|v(x,t)\right|}^2~dx.$$ A simple calculation applying integration by parts shows that the kinetic energy of any classical solution is in fact conserved in time. This formal calculation does not however hold for distributional solutions to Euler (cf. [@Scheffer93; @Shnirelmandecrease; @DS1; @DS2; @Wiedemann; @DSsurvey]).
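Explicitly, for a classical solution the computation reads (integrating by parts over ${\ensuremath{\mathbb{T}}}^3$ and using ${\ensuremath{\mathrm{div\,}}}v=0$ twice): $$\frac{d}{dt}E(t)=\int_{{\ensuremath{\mathbb{T}}}^3}v\cdot\partial_t v\,dx=-\int_{{\ensuremath{\mathbb{T}}}^3}v\cdot\big({\ensuremath{\mathrm{div\,}}}(v\otimes v)+\nabla p\big)\,dx=\int_{{\ensuremath{\mathbb{T}}}^3}\Big(v\cdot\nabla\frac{|v|^2}{2}+p\,{\ensuremath{\mathrm{div\,}}}v\Big)\,dx=-\int_{{\ensuremath{\mathbb{T}}}^3}\frac{|v|^2}{2}\,{\ensuremath{\mathrm{div\,}}}v\,dx=0.$$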
In fact in the context of 3-dimensional turbulence, flows *dissipating* energy in time have long been considered. A key postulate of Kolmogorov’s K41 theory [@Kolmogorov] is that for homogeneous, isotropic turbulence, the dissipation rate is non-vanishing in the inviscid limit. In particular, defining the *structure functions* for homogeneous, isotropic turbulence $$S_p(\ell):={\left\langle\left[(v(x+\hat \ell)-v(x))\cdot\frac {\hat \ell}{\ell}\right]^p\right\rangle},$$ where $\hat \ell$ denotes a spatial vector of length $\ell$, Kolmogorov’s famous four-fifths law can be stated as $$\label{e:four-fifths}
S_3(\ell)=-\frac45{\varepsilon}_d \ell,$$ where here ${\varepsilon}_d$ denotes the mean energy dissipation per unit mass. More generally, Kolmogorov’s scaling laws can be stated as $$\label{e:scaling_laws}
S_p(\ell)= C_p {\varepsilon}_d^{\sfrac p3} \ell^{\sfrac p3},$$ for any positive integer $p$.
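In particular, taking $p=3$ in the scaling laws fixes the constant and recovers the four-fifths law, $$S_3(\ell)=C_3\,{\varepsilon}_d\,\ell\qquad\text{with}\qquad C_3=-\frac45,$$ while $p=2$ gives the classical two-thirds law $S_2(\ell)=C_2\,{\varepsilon}_d^{\sfrac23}\,\ell^{\sfrac23}$.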
A well known consequence of the above scaling laws is the Kolmogorov spectrum, which postulates a scaling relation on the ‘energy spectrum’ of a turbulent flow (cf. [@FrischBook; @EyinkSreenivasan]). It was this observation that provided motivation for Onsager to conjecture in his famous note [@Onsager] on statistical hydrodynamics, the following dichotomy:
1. Any weak solution $v$ belonging to the Hölder space $C^\theta$ for $\theta>\frac{1}{3}$ conserves the energy.
2. For any $\theta<\frac{1}{3}$ there exist weak solutions $v\in C^\theta$ which do not conserve the energy.
Part (a) of this conjecture has since been resolved: it was first considered by Eyink in [@Eyink] following Onsager’s original calculations and later proven by Constantin, E and Titi in [@ConstantinETiti]. Subsequently, this latter result was strengthened by showing that kinetic energy is conserved under weakened assumptions on $v$ (in terms of Besov spaces) [@RobertDuchon; @CCFS2007].
Part (b) remains an open conjecture and is the subject of this note. The first constructions of non-conservative Hölder-continuous ($C^{\sfrac{1}{10}-{\varepsilon}}$) weak solutions appeared in work of De Lellis and Székelyhidi Jr. [@DS4], which itself was based on their earlier seminal work [@DS3] where continuous weak solutions were constructed. Furthermore, it was shown in the mentioned work that such solutions can be constructed obeying any prescribed smooth non-vanishing energy profile. In recent work [@Isett], P. Isett introduced a number of new ideas in order to construct non-trivial $1/5-{\varepsilon}$ Hölder-continuous weak solutions with compact temporal support. This construction was later improved by Buckmaster, De Lellis and Székelyhidi Jr. [@BDIS], following more closely the earlier work [@DS3; @DS4], in order to construct $1/5-{\varepsilon}$ Hölder-continuous weak solutions obeying a given energy profile.
In this note we give a proof of the following theorem.
\[t:main\] There exists a non-trivial continuous vector field $v\in C^{\sfrac{1}{5}-{\varepsilon}} ( {\ensuremath{\mathbb{T}}}^3 \times (-1, 1), {\ensuremath{\mathbb{R}}}^3)$ with compact support in time and a continuous scalar field $p\in C^{\sfrac{2}{5}-2{\varepsilon}} ({\ensuremath{\mathbb{T}}}^3\times (-1,1))$ with the following properties:
- The pair $(v,p)$ solves the incompressible Euler equations in the sense of distributions.
- There exists a set $\Omega\subset (-1,1)$ of Hausdorff dimension strictly less than $1$ such that if $t\notin\Omega$ then $v(\cdot,t)$ is Hölder $C^{1/3-{\varepsilon}}$ continuous and $p$ is Hölder $C^{2/3-2{\varepsilon}}$ continuous.[^1]
[**Relation to intermittency.**]{} The theory of intermittency is born of an effort to explain the experimental and numerical evidence (e.g. [@benzi1993extended]) of measurable discrepancies from the scaling laws (cf. [@frisch1980fully]). In this direction, Mandelbrot conjectured [@mandelbrot1976turbulence] that at the inviscid limit, turbulence concentrates (*in space*) on a fractal set of Hausdorff dimension strictly less than 3.
It is interesting to note that the solutions constructed in order to prove the above theorem have a fractal structure in *time*: namely, the set of times for which $v$ is *not* Hölder $C^{1/3-{\varepsilon}}$ continuous is contained in a Cantor-like set with Hausdorff dimension strictly less than 1. Since the phenomenon observed does not relate to the structure functions from which intermittency was originally postulated, it would clearly be far-fetched to label it intermittency. Nevertheless, it is the opinion of the author that the parallels to the notion of intermittency remain of interest.
Euler-Reynolds system and the convex integration scheme {#s:setup}
-------------------------------------------------------
In order to prove Theorem \[t:main\] we construct an iteration scheme in the style of [@BDIS], which is itself based on the schemes presented in [@DS3; @DS4]. At each step $q\in {\ensuremath{\mathbb{N}}}$ we construct a triple $(v_q, p_q, \mathring{R}_q)$ solving the Euler-Reynolds system (see Definition 2.1 in [@DS3]): $$\label{e:euler_reynolds}
\left\{\begin{array}{l}
\partial_t v_q + {\ensuremath{\mathrm{div\,}}}(v_q\otimes v_q) + \nabla p_q ={\ensuremath{\mathrm{div\,}}}\mathring{R}_q\\ \\
{\ensuremath{\mathrm{div\,}}}v_q = 0\, .
\end{array}\right.$$
The initial triple $(v_0,p_0,\mathring{R}_0)$ will be non-trivial with compact support in time; all triples thereafter will be defined inductively as perturbations of the preceding triples. The perturbation $$w_q:=v_{q}-v_{q-1},$$ will be composed of weakly interacting perturbed Beltrami flows (see Section \[s:prelim\]) oscillating at *frequency* $\lambda_q$, defined in such a way as to correct for the previous Reynolds error $\mathring{R}_{q-1}$.
In order to ensure convergence of the sequence $v_q$ to a continuous weak $C^{\sfrac15-{\varepsilon}}$ solution of Euler, we will require the following estimates to be satisfied $$\begin{aligned}
\|w_{q}\|_0 + \frac{1}{\lambda_q}\|\partial_t w_{q}\|_0 + \frac{1}{\lambda_q}\|w_{q}\|_1 &\leq \lambda_q^{-\sfrac15+{\varepsilon}_0} \label{e:v_iter}\\
{\left\|p_q-p_{q-1}\right\|}_0+ \frac{1}{\lambda_q}{\left\|\partial_t(p_q-p_{q-1})\right\|}_0+ \frac{1}{\lambda_q^2}{\left\|p_q-p_{q-1}\right\|}_2 &\leq \lambda_q^{-\sfrac25+2{\varepsilon}_0} \label{e:p_iter}\\
{\left\|\mathring R_q\right\|}_0+\frac{1}{\lambda_q}{\left\|\mathring R_q\right\|}_1&\leq \lambda_{q+1}^{-\sfrac25+2{\varepsilon}_0} \label{e:R_iter}\end{aligned}$$ for some ${\varepsilon}_0>0$ strictly smaller than ${\varepsilon}$. Here and throughout the article, ${\left\|\cdot\right\|}_\beta$ for $\beta=m+\kappa$, $\beta\in {\ensuremath{\mathbb{N}}}$ and $\kappa\in [0,1)$ will denote the usual *spatial* Hölder $C^{m,\kappa}$ norm. As a minor point of deviation from [@BDIS], we keep track of second order spatial derivative estimates of $p_q-p_{q-1}$, whereas in [@BDIS] first order estimates – which in the present work are implicit by interpolation – were sufficient. These second order estimates will be used in order to obtain slightly improved bounds on the Reynolds stress (see Section \[s:reynolds\]).
It is perhaps worth noting that aside from the second order estimate on $p_q-p_{q-1}$, up to a constant multiple, the above estimates are consistent with the estimates given in [@BDIS].[^2]
In order to ensure that our sequence converges to a non-trivial solution, we will impose the additional requirement that $$\label{e:nontrivial_req}
\sum_{q=1}^{\infty} {\left\|w_q\right\|}_0 < \frac{1}{2}{\left\|v_0\right\|}_0,$$ for times $t\in [-\sfrac18,\sfrac18]$.
The principle new idea of this work is that in addition to the estimates given above, we will keep track of sharper, time localized estimates. As a consequence of these sharper estimates, it can be shown that for any given time $t\in(-1,1)$ outside a prescribed set $\Omega$ of Hausdorff dimension strictly less than $1$, there exists a $N=N(t)$ such that $$\begin{aligned}
\|w_{q}\|_0 + \frac{1}{\lambda_q}\|\partial_t w_{q}\|_0 + \frac{1}{\lambda_q}\|w_{q}\|_1 &\leq \lambda_q^{-\sfrac13+{\varepsilon}_0}\label{e:sharp1} \\
{\left\|p_q-p_{q-1}\right\|}_0+ \frac{1}{\lambda_q}{\left\|\partial_t(p_q-p_{q-1})\right\|}_0+ \frac{1}{\lambda_q^2}{\left\|p_q-p_{q-1}\right\|}_2 &\leq \lambda_q^{-\sfrac23+2{\varepsilon}_0} \\
{\left\|\mathring R_q\right\|}_0+\frac{1}{\lambda_q}{\left\|\mathring R_q\right\|}_1&\leq \lambda_{q+1}^{-\sfrac23+2{\varepsilon}_0}, \label{e:sharp3}\footnotemark\end{aligned}$$ for every $q\geq N$.
The main iteration proposition and the proof of Theorem \[t:main\]
------------------------------------------------------------------
\[p:iterate\] For every small ${\varepsilon}_0>0$, there exists an $\alpha>1$, $d<1$ and a sequence of parameters $\lambda_0,\lambda_1,\dots$ satisfying $\sfrac12 \lambda_0^{\alpha^q}<\lambda_q<2\lambda_0^{\alpha^q}$ such that the following holds. A sequence of triples $(v_q, p_q, \mathring{R}_q)$ can be constructed with temporal support confined to $[-\sfrac12,\sfrac12]$ solving and satisfying the estimates (\[e:v\_iter\]-\[e:nontrivial\_req\]). Moreover, for any $\delta>0$, there exists an integer $M$ such that if $\Xi^M$ denotes the set of times $t$ such that there exists a $q\geq M$ satisfying either $$\begin{split}\label{e:onsager_est}
\|w_{q}\|_0 + \frac{1}{\lambda_q}\|w_{q}\|_1 &> \lambda_q^{-\sfrac13+{\varepsilon}_0},\ \text{ or}\\
{\left\|p_q-p_{q-1}\right\|}_0+ \frac{1}{\lambda_q}{\left\|p_q-p_{q-1}\right\|}_1 &> \lambda_q^{-\sfrac23+2{\varepsilon}_0},
\end{split}$$ then there exists a cover of $\Xi^M$ consisting of a sequence of balls of radius $r_i$ such that $$\label{e:real_Haus}
\sum r_i^d < \delta.$$
Fix ${\varepsilon}_0=\sfrac{{\varepsilon}}{2}$ and let $(v_q, p_q,\mathring{R}_q)$ be a sequence as in Proposition \[p:iterate\]. It then follows easily that $(v_q, p_q)$ converges uniformly to a pair of continuous functions $(v,p)$ satisfying , having compact temporal support. Moreover, by interpolating the inequalities and we obtain that $v_q$ converges in $C^{\sfrac15-{\varepsilon}}$ and $p_q$ in $C^{\sfrac25-2{\varepsilon}}$.
In order to prove (ii) we first fix $\delta>0$ and let $M$ and $\Xi^M$ be as in Proposition \[p:iterate\]. Hence by assumption if $t\notin \Xi^M$ $$\begin{split}
\|w_{q}\|_0 + \frac{1}{\lambda_q}\|w_{q}\|_1 &\leq \lambda_q^{-\sfrac13+{\varepsilon}_0}\\
{\left\|p_q-p_{q-1}\right\|}_0+ \frac{1}{\lambda_q}{\left\|p_q-p_{q-1}\right\|}_1 &\leq \lambda_q^{-\sfrac23+2{\varepsilon}_0},
\end{split}$$ for all $q\geq M$. Thus interpolating the inequalities above we obtain that $v-v_M$ is bounded in $C^{\sfrac13-{\varepsilon}}$ and $p-p_M$ in $C^{\sfrac23-2{\varepsilon}}$. By and , the pair $(v_M,p_M)$ is bounded in $C^1$, and thus it follows that $v$ and $p$ are bounded in $C^{\sfrac13-{\varepsilon}}$ and $C^{\sfrac23-2{\varepsilon}}$ respectively. Letting $\delta$ tend to zero we obtain our claim.
Plan of the paper
-----------------
After recalling in Section \[s:prelim\] some preliminary notation from the paper [@DS3], in Section \[s:const\_triple\] we give the precise definition of the sequence of triples $(v_{q}, p_{q}, \mathring{R}_{q})$. In Section \[s:ordering\] we list a number of inequalities that we will require of the various parameters of our scheme. Sections \[s:perturbation\_estimates\] and \[s:reynolds\] focus on estimating, respectively, $w_{q+1} =v_{q+1}-v_q$ and $\mathring{R}_{q+1}$. These estimates are then collected in Section \[s:conclusion\], where Proposition \[p:iterate\] is finally proved. Throughout the entire article we rely heavily on the arguments of [@BDIS] – in some sense the scheme presented here is a simple variant of that given in [@BDIS] – and as such the present paper is intentionally structured in a similar manner to [@BDIS] in order to aid comparison.
Acknowledgments
---------------
I wish to thank Camillo De Lellis and László Székelyhidi Jr. for the enlightening discussions I had with them both. I would also like to thank Antoine Choffrut, Camillo De Lellis, Charles Doering and László Székelyhidi Jr. for their helpful comments and corrections regarding the manuscript. In addition, I would like to express my gratitude to the anonymous referee for his/her detailed comments and corrections.
This work is supported as part of the ERC Grant Agreement No. 277993.
Preliminaries {#s:prelim}
=============
Throughout this paper we denote the $3\times 3$ identity matrix by ${\ensuremath{\mathrm{Id}}}$. In this section we state a number of results found in [@DS3] which are fundamental to the present scheme as well as to its predecessors [@DS3; @DS4; @BDIS].
Geometric preliminaries
-----------------------
The following two results form the cornerstone of the construction of the highly oscillating flows required by our scheme.
\[p:Beltrami\] Let $\bar\lambda\geq 1$ and let $A_k\in{\ensuremath{\mathbb{R}}}^3$ be such that $$A_k\cdot k=0,\,|A_k|=\tfrac{1}{\sqrt{2}},\,A_{-k}=A_k$$ for $k\in{\ensuremath{\mathbb{Z}}}^3$ with $|k|=\bar\lambda$. Furthermore, let $$B_k=A_k+i\frac{k}{|k|}\times A_k\in{\ensuremath{\mathbb{C}}}^3.$$ For any choice of $a_k\in{\ensuremath{\mathbb{C}}}$ with $\overline{a_k} = a_{-k}$ the vector field $$\label{e:Beltrami}
W(\xi)=\sum_{|k|=\bar\lambda}a_kB_ke^{ik\cdot \xi}$$ is real-valued, divergence-free and satisfies $$\label{e:Bequation}
{\ensuremath{\mathrm{div\,}}}(W\otimes W)=\nabla\frac{|W|^2}{2}.$$ Furthermore $$\label{e:av_of_Bel}
\langle W\otimes W\rangle= \fint_{{\ensuremath{\mathbb{T}}}^3} W\otimes W\,d\xi = \frac{1}{2} \sum_{|k|=\bar\lambda} |a_k|^2 \left( {\ensuremath{\mathrm{Id}}}- \frac{k}{|k|}\otimes\frac{k}{|k|}\right)\, .$$
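To illustrate the proposition concretely (the following example is ours, not taken from the cited works), take $\bar\lambda=1$, $k_1=(1,0,0)$ with $A_{k_1}=(0,\tfrac{1}{\sqrt2},0)$, $k_2=(0,1,0)$ with $A_{k_2}=(0,0,\tfrac{1}{\sqrt2})$, and $a_{\pm k_1}=a_{\pm k_2}=1$. The resulting real field is $W(x,y)=\sqrt2\,(-\sin y,\,\cos x,\,\cos y-\sin x)$, which is divergence-free and satisfies $\nabla\times W=\bar\lambda W$, so that the identity ${\ensuremath{\mathrm{div\,}}}(W\otimes W)=\nabla\frac{|W|^2}{2}$ follows from ${\ensuremath{\mathrm{div\,}}}(W\otimes W)=(W\cdot\nabla)W=\nabla\frac{|W|^2}{2}-W\times(\nabla\times W)$. A short finite-difference check of these two properties:

```python
import math

s2 = math.sqrt(2.0)

def W(x, y):
    """Two superposed |k| = 1 Beltrami modes; the field is z-independent."""
    return (-s2 * math.sin(y), s2 * math.cos(x), s2 * (math.cos(y) - math.sin(x)))

h = 1e-6  # central-difference step

def partials(comp, x, y):
    """(d/dx, d/dy) of the comp-th component of W by central differences."""
    dx = (W(x + h, y)[comp] - W(x - h, y)[comp]) / (2 * h)
    dy = (W(x, y + h)[comp] - W(x, y - h)[comp]) / (2 * h)
    return dx, dy

def residuals(x, y):
    dWx_dx, dWx_dy = partials(0, x, y)
    dWy_dx, dWy_dy = partials(1, x, y)
    dWz_dx, dWz_dy = partials(2, x, y)
    div = dWx_dx + dWy_dy                      # the d/dz terms vanish
    curl = (dWz_dy, -dWz_dx, dWy_dx - dWx_dy)  # z-derivatives dropped
    return abs(div), max(abs(c - w) for c, w in zip(curl, W(x, y)))

samples = [residuals(0.37 * i, 0.53 * j) for i in range(5) for j in range(5)]
div_err = max(s[0] for s in samples)
curl_err = max(s[1] for s in samples)  # checks curl W = W, the Beltrami property
```

The choice $A_{-k}=A_k$ real (with real coefficients $a_k$) makes $W$ manifestly real-valued.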
\[l:split\] For every $N\in{\ensuremath{\mathbb{N}}}$ we can choose $r_0>0$ and $\bar{\lambda} > 1$ with the following property. There exist pairwise disjoint subsets $$\Lambda_j\subset\{k\in {\ensuremath{\mathbb{Z}}}^3:\,|k|=\bar{\lambda}\} \qquad j\in \{1, \ldots, N\}$$ and smooth positive functions $$\gamma^{(j)}_k\in C^{\infty}\left(B_{r_0} ({\ensuremath{\mathrm{Id}}})\right) \qquad j\in \{1,\dots, N\}, ~k\in\Lambda_j,~\footnotemark$$ such that
- $k\in \Lambda_j$ implies $-k\in \Lambda_j$ and $\gamma^{(j)}_k = \gamma^{(j)}_{-k}$;
- For each $R\in B_{r_0} ({\ensuremath{\mathrm{Id}}})$ we have the identity $$\label{e:split}
R = \frac{1}{2} \sum_{k\in\Lambda_j} \left(\gamma^{(j)}_k(R)\right)^2 \left({\ensuremath{\mathrm{Id}}}- \frac{k}{|k|}\otimes \frac{k}{|k|}\right)
\qquad \forall R\in B_{r_0}({\ensuremath{\mathrm{Id}}})\, .$$
The operator $\mathcal{R}$
--------------------------
The following operator will be used to deal with the Reynolds stresses arising from our iteration scheme.
\[d:reyn\_op\] Let $v\in C^\infty ({\ensuremath{\mathbb{T}}}^3, {\ensuremath{\mathbb{R}}}^3)$ be a smooth vector field. We then define ${\ensuremath{\mathcal{R}}}v$ to be the matrix-valued periodic function $${\ensuremath{\mathcal{R}}}v:=\frac{1}{4}\left(\nabla{\ensuremath{\mathcal{P}}}u+(\nabla{\ensuremath{\mathcal{P}}}u)^T\right)+\frac{3}{4}\left(\nabla u+(\nabla u)^T\right)-\frac{1}{2}({\ensuremath{\mathrm{div\,}}}u) {\ensuremath{\mathrm{Id}}},$$ where $u\in C^{\infty}({\ensuremath{\mathbb{T}}}^3,{\ensuremath{\mathbb{R}}}^3)$ is the solution of $$\Delta u=v-\fint_{{\ensuremath{\mathbb{T}}}^3}v\textrm{ in }{\ensuremath{\mathbb{T}}}^3$$ with $\fint_{{\ensuremath{\mathbb{T}}}^3} u=0$ and ${\ensuremath{\mathcal{P}}}$ is the Leray projection onto divergence-free fields with zero average.
\[l:reyn\] For any $v\in C^\infty ({\ensuremath{\mathbb{T}}}^3, {\ensuremath{\mathbb{R}}}^3)$ we have
- ${\ensuremath{\mathcal{R}}}v(x)$ is a symmetric trace-free matrix for each $x\in {\ensuremath{\mathbb{T}}}^3$;
- ${\ensuremath{\mathrm{div\,}}}{\ensuremath{\mathcal{R}}}v = v-\fint_{{\ensuremath{\mathbb{T}}}^3}v$.
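For the reader’s convenience, (ii) can be verified directly. Since $\Delta u=v-\fint_{{\ensuremath{\mathbb{T}}}^3}v$, one has ${\ensuremath{\mathrm{div\,}}}\nabla u=\Delta u$, ${\ensuremath{\mathrm{div\,}}}(\nabla u)^T=\nabla{\ensuremath{\mathrm{div\,}}}u$, ${\ensuremath{\mathrm{div\,}}}\nabla{\ensuremath{\mathcal{P}}}u=\Delta{\ensuremath{\mathcal{P}}}u={\ensuremath{\mathcal{P}}}v$ and ${\ensuremath{\mathrm{div\,}}}(\nabla{\ensuremath{\mathcal{P}}}u)^T=\nabla{\ensuremath{\mathrm{div\,}}}{\ensuremath{\mathcal{P}}}u=0$, so that $${\ensuremath{\mathrm{div\,}}}{\ensuremath{\mathcal{R}}}v=\frac14{\ensuremath{\mathcal{P}}}v+\frac34\Big(v-\fint_{{\ensuremath{\mathbb{T}}}^3}v\Big)+\frac14\nabla{\ensuremath{\mathrm{div\,}}}u=v-\fint_{{\ensuremath{\mathbb{T}}}^3}v,$$ where the last equality uses ${\ensuremath{\mathcal{P}}}v=v-\fint_{{\ensuremath{\mathbb{T}}}^3}v-\nabla{\ensuremath{\mathrm{div\,}}}u$.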
Schauder and commutator estimates on $\mathcal R$
-------------------------------------------------
We recall the following Schauder estimates (Proposition G.1 (ii), Appendix G of [@BDIS]) and commutator estimates (Proposition H.1, Appendix H of [@BDIS]).
\[p:stat\_phase\] Let $k\in{\ensuremath{\mathbb{Z}}}^3\setminus\{0\}$ be fixed. For a smooth vector field $a\in C^{\infty}({\ensuremath{\mathbb{T}}}^3;{\ensuremath{\mathbb{R}}}^3)$ let $F(x):=a(x)e^{i\lambda k\cdot x}$. Then we have $$\label{e:R(F)}
\|{\ensuremath{\mathcal{R}}}(F)\|_{\alpha}\leq \frac{C}{\lambda^{1-\alpha}}\|a\|_0+\frac{C}{\lambda^{m-\alpha}}[a]_m+\frac{C}{\lambda^m}[a]_{m+\alpha},$$ for $m=0,1,2,\dots$ and $\alpha\in (0,1)$, where $C=C(\alpha,m)$.
\[p:commutator\] Let $k\in{\ensuremath{\mathbb{Z}}}^3\setminus\{0\}$ be fixed. For any smooth vector field $a\in C^\infty ({\ensuremath{\mathbb{T}}}^3;{\ensuremath{\mathbb{R}}}^3)$ and any smooth function $b$, if we set $F(x):=a(x)e^{i\lambda k\cdot x}$, we then have $$\|[b, \mathcal{R}] (F)\|_\alpha \leq C\lambda^{\alpha-2} \|a\|_0\|b\|_1
+ C \lambda^{\alpha-m} \left(\|a\|_{m-1+\alpha} \|b\|_{1+\alpha} + \|a\|_{\alpha} \|b\|_{m+\alpha}\right)\label{e:main_est_commutator}$$ for $m=0,1,2,\dots$ and $\alpha\in (0,1)$, where $C=C(\alpha,m)$.
The construction of the triples $(v_q,p_q,\mathring R_q)$ {#s:const_triple}
=========================================================
The initial triple $(v_0,p_0,\mathring R_0)$
--------------------------------------------
Let $\chi_0$ be a smooth non-negative function, compactly supported on the interval $[-\sfrac14,\sfrac14]$, bounded above by $1$ and identically equal to $1$ on $[-\sfrac18,\sfrac18]$. We now set our initial velocity to be the divergence-free vector field $$v_0 (t,x) := \frac12\lambda_0^{-\frac15+{\varepsilon}_0}\chi_0(t)(\cos(\lambda_0 x_3),\sin(\lambda_0 x_3),0),$$ where here we use the notation $x=(x_1,x_2,x_3)$. The initial pressure $p_0$ is then defined to be identically zero. Finally if we set $$\mathring R_0 = \frac12\lambda_0^{-\frac65+{\varepsilon}_0} \chi'_0(t)
\begin{pmatrix}
0 & 0 & \sin(\lambda_0 x_3) \\
0 & 0 & -\cos(\lambda_0 x_3) \\
\sin(\lambda_0 x_3) & -\cos(\lambda_0 x_3) & 0
\end{pmatrix},$$ we obtain $$\partial_t v_0+{\ensuremath{\mathrm{div\,}}}(v_0\otimes v_0)+\nabla p_0= {\ensuremath{\mathrm{div\,}}}\mathring R_0.$$ Hence the triple $(v_0,p_0,\mathring R_0)$ is a solution to the Euler-Reynolds system . Furthermore, it follows immediately that $${\left\|\mathring R_0\right\|}_0+\frac{1}{\lambda_0}{\left\|\mathring R_0\right\|}_1\leq C \lambda_0^{-\sfrac65+{\varepsilon}_0}.$$ Thus if $\lambda_0$ is sufficiently large we obtain (\[e:v\_iter\]-\[e:R\_iter\]) for $q=0$.
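Indeed, since $v_0$ depends only on $x_3$ and has vanishing third component, ${\ensuremath{\mathrm{div\,}}}(v_0\otimes v_0)=(v_0\cdot\nabla)v_0=0$, so with $p_0\equiv 0$ the system reduces to $\partial_t v_0={\ensuremath{\mathrm{div\,}}}\mathring R_0$, which can be read off row by row: for the first component, $$\partial_{x_3}\Big(\tfrac12\lambda_0^{-\sfrac65+{\varepsilon}_0}\chi_0'(t)\sin(\lambda_0 x_3)\Big)=\tfrac12\lambda_0^{-\sfrac15+{\varepsilon}_0}\chi_0'(t)\cos(\lambda_0 x_3)=\partial_t v_0^{(1)},$$ similarly for the second, while the divergence of the third row involves only $x_1$- and $x_2$-derivatives of functions of $x_3$ and hence vanishes, matching $\partial_t v_0^{(3)}=0$.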
The choice of initial triple $(v_0,p_0,\mathring R_0)$ is not of any great importance: any choice satisfying the conditions set out in Section \[s:setup\] and such that ${\left|v_0\right|}\approx \lambda_0^{-\sfrac15+{\varepsilon}_0}$ for times $t\in[-\sfrac18, \sfrac18]$ should suffice.
The inductive step {#s:perturbations}
------------------
The procedure of constructing $(v_{q+1}, p_{q+1}, \mathring{R}_{q+1})$ in terms of $(v_q, p_q, \mathring{R}_q)$ follows in the same spirit as that of the scheme outlined in [@BDIS] with a few minor modifications in order to satisfy the specific requirements of Proposition \[p:iterate\].
We will assume that $\lambda_0$ is chosen large enough such that $$\label{e:summabilities}
\sum_{j< q} \lambda_j^{\sfrac23} \leq \lambda_q^{\sfrac23}\, \quad \text {and } \sum_{j=1}^{\infty} \lambda_j^{-\sfrac15+{\varepsilon}_0} \leq \frac{\lambda_0^{-\sfrac15+{\varepsilon}_0} }{4}\leq \frac18.$$ Notice as a direct consequence follows from and the definition of $v_0$.
We fix a symmetric non-negative convolution kernel $\psi$ with support confined to $[-1,1]$.
With a slight abuse of notation, we will use $(v,p,\mathring{R})$ for $(v_q, p_q, \mathring{R}_q)$ and $(v_1,p_1, \mathring{R}_1)$ for $(v_{q+1}, p_{q+1}, \mathring{R}_{q+1})$.
As was done in [@BDIS], we discretize time into intervals of size $\mu^{-1}$ for some large parameter $\mu$ to be chosen later.
The choice of cut-off functions $\chi=\chi^{(q+1)}$ used in this article will differ slightly from that described in [@BDIS]. Specifically, we define $\chi$ to be a smooth function satisfying the following conditions, for a small parameter ${\varepsilon}_1>0$ to be chosen later:
- The support of $\chi$ is contained in $\left(-\frac12-\frac{\lambda_{q+1}^{-{\varepsilon}_1}}4, \frac12+\frac{\lambda_{q+1}^{-{\varepsilon}_1}}4\right)$.
- In the range $\left(-\frac12+\frac{\lambda_{q+1}^{-{\varepsilon}_1}}4, \frac12-\frac{\lambda_{q+1}^{-{\varepsilon}_1}}4\right)$ we have $\chi\equiv 1$.
- The sequence $\{\chi^2 (x-l)\}_{l\in{\ensuremath{\mathbb{Z}}}}$ forms a partition of unity of ${\ensuremath{\mathbb{R}}}$, i.e.$$\sum_{l\in {\ensuremath{\mathbb{Z}}}} \chi^2 (x-l) = 1.$$
- For $N\geq 0$ we have the estimates $${\left|\partial^N_x \chi\right|}\leq C \lambda_{q+1}^{N{\varepsilon}_1},$$ where the constant $C$ depends only on $N$ – in particular it is independent of $q$.
In [@BDIS], $\chi$ was simply chosen to be a $C_c^{\infty} (-\frac34, \frac34)$ function, independent of the iteration $q$, satisfying the third condition. Having defined $\chi$, we adopt the notation $\chi_l(t):=\chi(\mu t-l)$. The fundamental difference from the choice of $\chi$ in [@BDIS] is the extra factor $\lambda_{q+1}^{-{\varepsilon}_1}$ appearing in the definition. A consequence of this modification is that the Lebesgue measure of the set $$\bigcap_{q=1}^\infty \bigcup_{q'=q}^\infty \bigcup_l \operatorname{support}(\chi_{q', l}')$$ is zero. We will see that this provides a key ingredient in proving a.e.-in-time $C^{\sfrac13-{\varepsilon}}$ convergence of the sequence $v_q$.
For each $l$ define the amplitude function $$\rho_l=2r_0^{-1} {\left\|\mathring{R}(\cdot,l\mu^{-1}) \right\|}_0.$$ The function $\rho_l$ will play a similar role to the $\rho_l$ found in [@BDIS]: the comparatively simpler definition above reflects the fact that we are only interested in correcting for the Reynolds error and are not attempting to construct a solution to Euler with a prescribed energy as was done in [@BDIS]. In particular, up to a constant multiple, the amplitude function $\rho_{l}$ defined here provides a lower bound for the amplitude defined in [@BDIS], and moreover is potentially significantly smaller.
Keeping in mind the new choices of $\rho_l$ and $\chi_l$, the construction of $(v_{1}, p_{1}, \mathring{R}_{1})$ proceeds in exactly the same manner as that described in [@BDIS], with the minor exception that the mollification parameter $\ell$ will be chosen explicitly to be
$$\label{e:ell_def}
\ell=\lambda_{q+1}^{-1+{\varepsilon}_1}.$$
In particular assuming $\frac{\alpha-1}2>{\varepsilon}_1$ and $\lambda_0$ is chosen sufficiently large, we have $$\label{e:ell_lambda}
\frac1{\lambda_{q}}\leq \ell \leq \frac1{\lambda_{q+1}}.$$ For comparison, the choice of $\ell$ taken in [@BDIS] was $\ell := \delta_{q+1}^{-\sfrac{1}{8}}\delta_{q}^{\sfrac{1}{8}}\lambda_{q}^{-\sfrac{1}{4}}\lambda_{q+1}^{-\sfrac{3}{4}}$. The parameter ${\varepsilon}_1$ may be taken arbitrarily small, and consequently, the choice of $\ell$ taken here will be significantly smaller than that taken in [@BDIS].
For completeness we recall the remaining steps required to construct the triple $(v_{1}, p_{1}, \mathring{R}_{1})$.
Having set $$R_l(x):= \rho_l {\ensuremath{\mathrm{Id}}}- \mathring{R} (x,l\mu^{-1})\,$$ and $v_{\ell}=v*\psi_{\ell}$, we define $R_{\ell,l}$ to be the unique solution to the following transport equation $$\left\{\begin{array}{l}
\partial_t R_{\ell,l} + v_{\ell}\cdot \nabla R_{\ell,l} =0\\ \\
R_{\ell,l}(\frac l{\mu},\cdot)=R_l*\psi_{\ell}\, .
\end{array}\right.$$
For every integer $l\in [-\mu, \mu]$, we let $\Phi_l: {\ensuremath{\mathbb{R}}}^3\times (-1,1)\to {\ensuremath{\mathbb{R}}}^3$ be the solution of $$\left\{\begin{array}{l}
\partial_t \Phi_l + v_{\ell}\cdot \nabla \Phi_l =0\\ \\
\Phi_l (x,l \mu^{-1})=x.
\end{array}\right.$$
Applying Lemma \[l:split\] with $N=2$, we denote by $\Lambda^e$ and $\Lambda^o$ the corresponding families of frequencies in ${\ensuremath{\mathbb{Z}}}^3$ and set $\Lambda := \Lambda^o \cup \Lambda^e$. For each $k\in \Lambda$ and each $l\in {\ensuremath{\mathbb{Z}}}\cap[0,\mu]$ we then define $$\begin{aligned}
a_{kl}(x,t)&:=\sqrt{\rho_l}\gamma_k \left(\frac{R_{\ell,l}(x,t)}{\rho_l}\right),\\
w_{kl}(x,t)& := a_{kl}(x,t)\,B_ke^{i\lambda_{q+1}k\cdot \Phi_l(x,t)}.\end{aligned}$$ The perturbation $w=v_1-v$ is then defined as the sum of a “principal part” and a “corrector". The “principal part” being the map $$\begin{aligned}
w_o (x,t) := \sum_{\textrm{$l$ odd}, k\in \Lambda^o} \chi_l(t)w_{kl} (x,t) +
\sum_{\textrm{$l$ even}, k\in \Lambda^e} \chi_l(t)w_{kl} (x,t)\, .\end{aligned}$$ The “corrector" $w_c$ is then defined in such a way that the sum $w= w_o+w_c$ is divergence free: $$w_c
=\sum_{kl}\chi_l\Bigl(\frac{i}{\lambda_{q+1}}\nabla a_{kl}-a_{kl}(D\Phi_{l}-{\ensuremath{\mathrm{Id}}})k\Bigr)\times\frac{k\times B_k}{|k|^2}e^{i\lambda_{q+1}k\cdot\Phi_l}.$$
The new pressure is defined as $$p_1=p-\frac{|w_o|^2}{2} - \frac{1}{3} |w_c|^2 - \frac{2}{3} \langle w_o, w_c\rangle - \frac{2}{3} \langle v-v_\ell, w\rangle \,.$$ Finally we set $\mathring{R}_1= R^0+R^1+R^2+R^3+R^4+R^5$, where $$\begin{aligned}
R^0 &= \mathcal R \left(\partial_tw+v_\ell\cdot \nabla w+w\cdot\nabla v_\ell\right)\label{e:R^0_def}\\
R^1 &=\mathcal R {\ensuremath{\mathrm{div\,}}}\Big(w_o \otimes w_o- \sum_l \chi_l^2 R_{\ell, l}
-\textstyle{\frac{|w_o|^2}{2}}{\ensuremath{\mathrm{Id}}}\Big)\label{e:R^1_def}\\
R^2 &=w_o\otimes w_c+w_c\otimes w_o+w_c\otimes w_c - \textstyle{\frac{|w_c|^2 + 2\langle w_o, w_c\rangle}{3}} {\rm Id}\label{e:R^2_def}\\
R^3 &= w\otimes (v - v_\ell) + (v-v_\ell)\otimes w
- \textstyle{\frac{2 \langle (v-v_{\ell}), w\rangle}{3}} {\ensuremath{\mathrm{Id}}}\label{e:R^3_def}\\
R^4&=\mathring R- \mathring{R}* \psi_\ell \label{e:R^4_def}\\
R^5&=\sum_l \chi_l^2 (\mathring{R}_{\ell, l} + \mathring{R}*\psi_\ell)\label{e:R^5_def}\, .\end{aligned}$$
Compact support in time
-----------------------
By construction, for each integer $j$, if the triple $(v_j,p_j,\mathring R_j)$ is supported in the time interval $[T,T']$ then $(v_{j+1},p_{j+1},\mathring R_{j+1})$ is supported in the time interval $[T-\mu_{j+1}^{-1}, T'+\mu_{j+1}^{-1}]$. Therefore, since $(v_0,p_0,\mathring R_0)$ is supported in the time interval $[-\sfrac14,\sfrac14]$, it follows by induction that if we assume $$\label{e:mu_lower}
\mu_j\geq 2^{j+2}$$ then the triple $(v_q,p_q,\mathring R_q)$ is supported in the time interval $$\big[-\sfrac14-\sum_{j=1}^{q} \mu_j^{-1},\sfrac14+\sum_{j=1}^{q} \mu_j^{-1}\big]\subset[-\sfrac12, \sfrac12].$$
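For the reader's convenience, the containment above reduces to a one-line geometric series bound under \eqref{e:mu_lower}:

```latex
% Assuming \mu_j \geq 2^{j+2} as in \eqref{e:mu_lower}:
\sum_{j=1}^{q} \mu_j^{-1}
  \;\leq\; \sum_{j=1}^{\infty} 2^{-(j+2)}
  \;=\; \frac14,
\qquad\text{hence}\qquad
\Big[-\tfrac14-\sum_{j=1}^{q}\mu_j^{-1},\;\tfrac14+\sum_{j=1}^{q}\mu_j^{-1}\Big]
  \subset \Big[-\tfrac12,\tfrac12\Big].
```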
Ordering of parameters {#s:ordering}
======================
To aid comparison with the arguments of [@BDIS], we introduce a sequence of *strictly decreasing* parameters $\delta_q<1$. In Section \[s:conclusion\] we will provide an explicit definition of $\delta_q$, but for now we restrict ourselves to specifying a number of inequalities that $\delta_q$ will need to satisfy. Analogously to [@BDIS] we will assume the following estimates $$\begin{aligned}
\frac{1}{\lambda_q}{\left\|v_q\right\|}_1&\leq \delta_q^{\sfrac12}\label{e:delta_cond_first}\\
\frac{1}{\lambda_q}{\left\|p_q\right\|}_1+\frac{1}{\lambda_q^2}{\left\|p_q\right\|}_2&\leq \delta_q\label{e:delta_cond_second}\\
{\left\|\mathring R_{q}\right\|}_0+\frac{1}{\lambda_{q}}{\left\|\mathring R_{q}\right\|}_1&\leq \frac{1}{C_0} \delta_{q+1}\label{e:iter_rey}\\
{\left\|(\partial_t+v_q\cdot\nabla) \mathring R_q\right\|}_0&\leq \delta_{q+1}\delta_q^{\sfrac12}\lambda_q,\label{e:delta_cond_last}\end{aligned}$$ where $C_0>1$ is a large number to be specified in the next section.
Furthermore, we will assume in addition that the following parameter inequalities are satisfied $$\label{e:conditions_lambdamu_2}
\begin{split}\sum_{j< q} \delta_j \lambda_j \leq \delta_q \lambda_q,\qquad \frac{\delta_{q}^{\sfrac{1}{2}}\lambda_q\ell}{\delta_{q+1}^{\sfrac{1}{2}}}\leq1, \\
\frac{\delta_q^{\sfrac12}\lambda_q}{\mu}\leq \lambda_{q+1}^{-{\varepsilon}_1}\quad\mbox{and}\quad
\frac{1}{\lambda_{q+1}}\leq \frac{\delta_{q+1}^{\sfrac12}}{\mu}.\end{split}$$
The sequence $\delta_q$ will be applied in the context of proving $\sfrac15-{\varepsilon}$ convergence of the velocities $v_q$; note however that, unlike in [@BDIS], the sequence does not appear explicitly in the definition of the triples $(v_q,p_q,\mathring R_q)$.
In order to prove a.e. time $\sfrac13-{\varepsilon}$ convergence, we will require estimates localized in time. To this end, we fix a time $t_0\in(-1,1)$ and set $l_{q+1}$ to be the unique integer such that $\mu t_0\in [-\sfrac12+l_{q+1},\sfrac12+l_{q+1})$. We now introduce a new sequence of strictly decreasing parameters $\delta_{q,t_0}\leq \delta_q$ such that for a given time $t$ satisfying ${\left|\mu t-l_{q+1}\right|}\leq 1$ we have the following estimates $$\begin{aligned}
\frac{1}{\lambda_q}{\left\|v_q\right\|}_1&\leq \delta_{q,t_0}^{\sfrac12}\label{e:delta_cond_first2}\\
\frac{1}{\lambda_q}{\left\|p_q\right\|}_1+\frac{1}{\lambda_q^2}{\left\|p_q\right\|}_2&\leq \delta_{q,t_0}\label{e:delta_cond_second2}\\
{\left\|\mathring R_{q}\right\|}_0+\frac{1}{\lambda_{q}}{\left\|\mathring R_{q}\right\|}_1&\leq\frac{1}{C_0} \delta_{q+1,t_0}\label{e:iter_rey2}\\
{\left\|(\partial_t+v_q\cdot\nabla) \mathring R_q\right\|}_0&\leq \delta_{q+1,t_0}\delta_{q,t_0}^{\sfrac12}\lambda_q.\label{e:delta_cond_last2}\end{aligned}$$
Analogously to \eqref{e:conditions_lambdamu_2}, we assume the following inequalities are satisfied $$\label{e:conditions_lambdamu_3}
\sum_{j< q} \delta_{j,t_0} \lambda_j \leq \delta_{q,t_0} \lambda_q, \quad
\frac{\delta_{q,t_0}^{\sfrac{1}{2}}\lambda_q\ell}{\delta_{q+1,t_0}^{\sfrac{1}{2}}}\leq1, \quad\text{and}\quad
\frac{\delta_{q,t_0}^{\sfrac12}\lambda_q}{\mu} \leq \lambda_{q+1}^{-{\varepsilon}_1}.$$ The last inequality is a trivial consequence of \eqref{e:conditions_lambdamu_2} and the inequality $\delta_{q,t_0}\leq \delta_q$. Observe that we *do not* assume a condition akin to the last inequality of \eqref{e:conditions_lambdamu_2}. This is worth keeping in mind, as we will apply the arguments of [@BDIS] extensively, and there such a condition was present. Fortunately, this condition is only really required at one specific point in the paper: the estimation of $${\left\|\partial_t \mathring R_1+ v_1\cdot \nabla \mathring R_1 \right\|}_0,$$ for which we will present sharper estimates on a subset of times. The condition was also used in a few isolated cases in [@BDIS] to simplify a number of terms arising from the estimates, but this was primarily done for aesthetic reasons.
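The triviality alluded to above can be spelled out explicitly; the following chain uses only $\delta_{q,t_0}\leq\delta_q$ and the third inequality of \eqref{e:conditions_lambdamu_2}:

```latex
\frac{\delta_{q,t_0}^{\sfrac12}\lambda_q}{\mu}
  \;\leq\; \frac{\delta_{q}^{\sfrac12}\lambda_q}{\mu}   % since \delta_{q,t_0}\leq\delta_q
  \;\leq\; \lambda_{q+1}^{-\varepsilon_1}.              % third inequality of \eqref{e:conditions_lambdamu_2}
```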
Estimates on the perturbation {#s:perturbation_estimates}
=============================
In order to bound the perturbation, we apply nearly identical arguments to those used in Section 3 of [@BDIS].
We recall the following notation from [@BDIS] $$\begin{aligned}
\phi_{kl}(x,t)&:= e^{i\lambda_{q+1}k\cdot[\Phi_l(x,t)-x]},\\
L_{kl}&:=a_{kl}B_k+\Bigl(\frac{i}{\lambda_{q+1}}\nabla a_{kl}-a_{kl}(D\Phi_{l}-{\ensuremath{\mathrm{Id}}})k\Bigr)\times\frac{k\times B_k}{|k|^2}.\end{aligned}$$ The perturbation $w$ can then be written as $$w=\sum_{kl}\chi_l\,L_{kl}\,\phi_{kl}\,e^{i\lambda_{q+1}k\cdot x}=\sum_{kl}\chi_l\,L_{kl}\,e^{i\lambda_{q+1}k\cdot\Phi_l}\,.$$
For reference we note that, as a consequence of the inductive estimates, we have $$\label{e:uni_v_bound}
{\left\|v_q\right\|}_0 \leq 1.$$ We also recall that, as a consequence of simple convolution inequalities together with \eqref{e:delta_cond_first2}, we have for a fixed $t_0$, $N\geq 1$ and times $t$ satisfying ${\left|\mu t-l_{q+1}\right|}<1$ $${\left\|v_{\ell}\right\|}_N\leq \delta_{q,t_0}^{\sfrac12}\lambda_q\ell^{-N+1}.\label{e:v_est}$$
With this notation we now present a minor variant of Lemma 3.1 from [@BDIS].
\[l:ugly\_lemma\] Fix a time $t_0\in(-1,1)$ and let $l_{q+1}$ be as before, i.e. the unique integer such that $\mu t_0\in[-\sfrac12+l_{q+1},\sfrac12+l_{q+1})$. Assuming the inequalities listed in Section \[s:ordering\] hold, we have the following estimates. For $t$ such that ${\left|\mu t-l_{q+1}\right|}\leq 1$ and $l\in \{l_{q+1}-1,l_{q+1},l_{q+1}+1\}$ we have $$\begin{aligned}
{\left\|D\Phi_l\right\|}_0&\leq C\, \label{e:phi_l}\\
{\left\|D\Phi_l - {\ensuremath{\mathrm{Id}}}\right\|}_0 &\leq C \frac{\delta_{q,t_0}^{\sfrac{1}{2}}\lambda_q}{\mu}\label{e:phi_l_1}\\
{\left\|D\Phi_l\right\|}_N&\leq C \frac{\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q }{\mu \ell^N},& N\ge 1\label{e:Dphi_l_N}\end{aligned}$$ Moreover, $$\begin{aligned}
{\left\|a_{kl}\right\|}_0+{\left\|L_{kl}\right\|}_0&\leq C \delta_{q+1,t_0}^{\sfrac12}\label{e:L}\\
{\left\|a_{kl}\right\|}_N&\leq C\delta_{q+1,t_0}^{\sfrac12}\lambda_q\ell^{1-N},&N\geq 1\label{e:Da}\\
{\left\|L_{kl}\right\|}_N&\leq C\delta_{q+1,t_0}^{\sfrac12}\ell^{-N},&N\geq 1\label{e:DL}\\
{\left\|\phi_{kl}\right\|}_N&\leq C \lambda_{q+1} \frac{\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q}{\mu \ell^{N-1}}
+ C \left(\frac{\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}}{\mu}\right)^N\nonumber \\
&\stackrel{\eqref{e:ell_def}\&\eqref{e:conditions_lambdamu_2}}{\leq} C\ell^{-N}&N\geq 1.\label{e:phi}\end{aligned}$$ Consequently, for any $N\geq 0$ $$\begin{aligned}
{\left\|w_c\right\|}_N &\leq C\delta_{q+1,t_0}^{\sfrac12} \left(\frac{\lambda_q}{\lambda_{q+1}}+\frac{\delta_{q,t_0}^{\sfrac12}\lambda_q}{\mu}\right)
\lambda_{q+1}^N\label{e:corrector_est2}\\
&\stackrel{\eqref{e:conditions_lambdamu_2}}{\leq} C\delta_{q+1}^{\sfrac12} \frac{\delta_{q}^{\sfrac12}\lambda_q}{\mu}
\lambda_{q+1}^N\label{e:corrector_est},\\
{\left\|w_o\right\|}_N&\leq C\delta_{q+1,t_0}^{\sfrac12}\lambda_{q+1}^N\label{e:W_est_N2}\\
&\leq C\delta_{q+1}^{\sfrac12}\lambda_{q+1}^N\label{e:W_est_N}.\end{aligned}$$ The constants appearing in the above estimates depend only on $N$ and the constant $C_0$ given in \eqref{e:iter_rey} and \eqref{e:iter_rey2}. In particular, for a fixed $N$, the constants appearing in the above estimates can be made arbitrarily small by taking $C_0$ to be sufficiently large. Furthermore, the weaker estimates \eqref{e:corrector_est} and \eqref{e:W_est_N} hold uniformly in time.
The proof of the above lemma follows from essentially the same arguments as those given in the proof of Lemma 3.1 from [@BDIS] – making use of our new sequence of parameters $\delta_{q,t_0}$. The only minor point of departure from [@BDIS] is the appearance of the term $\frac{\lambda_q}{\lambda_{q+1}}$ in \eqref{e:corrector_est2}. This is related to the fact that we do not have a parameter ordering akin to the last inequality of \eqref{e:conditions_lambdamu_2} for the parameters $\delta_{q,t_0}$. Nevertheless, since $\delta_{q,t_0}\leq \delta_q$, the estimate \eqref{e:corrector_est2} is sharper than the corresponding estimate of [@BDIS] and hence we obtain \eqref{e:corrector_est}. From the definition of $w_c$ we have $$\begin{aligned}
\|w_c\|_N\leq &C\sum_{kl}\chi_l\left(\frac{1}{\lambda_{q+1}}\|a_{kl}\|_{N+1}+\|a_{kl}\|_0\|D\Phi_l-{\ensuremath{\mathrm{Id}}}\|_N+\|a_{kl}\|_N\|D\Phi_l-{\ensuremath{\mathrm{Id}}}\|_0\right)\\
&+C\|w_c\|_0\sum_l\chi_l\left(\lambda_{q+1}^N\|D\Phi_l\|_0^N+\lambda_{q+1}\|D\Phi_l\|_{N-1}\right).\end{aligned}$$ Hence, applying the preceding estimates together with the inequalities from Section \[s:ordering\], we obtain \eqref{e:corrector_est2}.
\[c:ugly\_cor\] Under the assumptions of Lemma \[l:ugly\_lemma\] we have $$\begin{aligned}
& \lambda_{q+1}^{-1}\|v_1 \|_1 + \|w \|_0 \leq \delta_{q+1, t_0}^{\sfrac{1}{2}} \label{e:final_v_est}\\
&\lambda_{q+1}^{-2} \|p_1 \|_2 + \lambda_{q+1}^{-1}\|p_1 \|_1 + \|p_1 - p \|_0 \leq \delta_{q+1, t_0}~ .\label{e:final_p_est}\end{aligned}$$
Recall that, by definition, we have the following inequalities: $$\begin{aligned}
{\left\|w\right\|}_N&\leq {\left\|w_o\right\|}_N+{\left\|w_c\right\|}_N \\
{\left\|v_1\right\|}_N&\leq {\left\|v\right\|}_N+{\left\|w\right\|}_N\\
{\left\|p_{1}-p\right\|}_N&\leq {\left\|{\left|w_o\right|}^2\right\|}_N+{\left\|{\left|w_c\right|}^2\right\|}_N
+ {\left\|{\left\langle w_o,w_c\right\rangle}\right\|}_N+{\left\|{\left|{\left\langle v-v_{\ell},w\right\rangle}\right|}\right\|}_N\\
&\leq C{\left\|w_o\right\|}_N\left({\left\|w_o\right\|}_0+{\left\|w_c\right\|}_0\right)+C{\left\|w_c\right\|}_N{\left\|w_c\right\|}_0\\
&\quad+C{\left\|v-v_{\ell}\right\|}_N{\left\|w\right\|}_0+C{\left\|v-v_{\ell}\right\|}_0{\left\|w\right\|}_N\\
{\left\|p_1\right\|}_N&\leq {\left\|p\right\|}_N+{\left\|p_{1}-p\right\|}_N\,.\end{aligned}$$ Hence \eqref{e:final_v_est} and \eqref{e:final_p_est} follow as a consequence of the inequalities of Section \[s:ordering\] together with the estimates of Lemma \[l:ugly\_lemma\].
We now present a variant of Lemma 3.2 from [@BDIS]. We recall from [@BDIS] the notation for the *material derivative*: $D_t:= \partial_t + v_\ell \cdot \nabla$.
\[l:ugly\_lemma\_2\] Under the assumptions of Lemma \[l:ugly\_lemma\] we have $$\begin{aligned}
\|D_t v_\ell\|_N &\leq C \delta_{q,t_0}\lambda_q(1+\lambda_q\ell^{1-N})+C\delta_{q+1,t_0}\lambda_q\ell^{-N}\, , \label{e:Dt_v2}\\
&\leq C \delta_{q,t_0}\lambda_q\ell^{-N}\label{e:Dt_v}\\
\|D_t L_{kl}\|_N &\leq C \delta_{q+1,t_0}^{\sfrac{1}{2}} \delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q\ell^{-N}\, ,\label{e:DtL}\\
\|D^2_t L_{kl}\|_N &\leq C\delta_{q+1,t_0}^{\sfrac{1}{2}} \lambda_{q}\ell^{-N}(\delta_{q,t_0} \lambda_q+\delta_{q+1,t_0}\ell^{-1})\, ,\label{e:D2tL2}\\
& \leq C\delta_{q+1,t_0}^{\sfrac{1}{2}}\delta_{q,t_0} \lambda_{q}\ell^{-N-1} \label{e:D2tL}\end{aligned}$$ Consequently for $t$ in the range ${\left|t\mu-l_{q+1}\right|}\le \sfrac12(1- \lambda_{q+1}^{-{\varepsilon}_1})$ we have $$\begin{aligned}
{\left\|D_t w_c\right\|}_N &\leq C \delta_{q+1,t_0}^{\sfrac12}\delta_{q,t_0}^{\sfrac12}\lambda_q\lambda_{q+1}^N\, ,\label{e:Dt_wc2}\\
{\left\|D_t w_o\right\|}_N &\equiv 0\, .\label{e:Dt_wo2}\end{aligned}$$ Moreover we have the following estimates which are valid uniformly in time $$\begin{aligned}
{\left\|D_t w_c\right\|}_N &\leq C \delta_{q+1}^{\sfrac12}\delta_q^{\sfrac12}\lambda_q\lambda_{q+1}^{N+{\varepsilon}_1}\, ,\label{e:Dt_wc}\\
{\left\|D_t w_o\right\|}_N &\leq C \delta_{q+1}^{\sfrac12}\mu\lambda_{q+1}^{N+{\varepsilon}_1}\, .\label{e:Dt_wo}\end{aligned}$$ Again, we note that the constants $C$ depend only on our choice of $C_0$: in particular, the constants appearing in the above estimates can be made arbitrarily small by taking $C_0$ sufficiently large.
First note that \eqref{e:DtL}, \eqref{e:Dt_wc} and \eqref{e:Dt_wo} follow by exactly the same arguments as those given in Lemma 3.2 of [@BDIS] – making use of our new sequence of parameters $\delta_{q,t_0}$. However, in contrast to [@BDIS], time derivatives falling on $\chi_l$ for some $l$ pick up an additional factor of $\lambda_{q+1}^{{\varepsilon}_1}$, which explains this additional factor appearing in \eqref{e:Dt_wc} and \eqref{e:Dt_wo}.
To prove \eqref{e:Dt_v2} and \eqref{e:D2tL2}, in addition to using our new parameters $\delta_{q,t_0}$, we will take advantage of our second order inductive estimates on the pressure in order to obtain sharper estimates than those found in [@BDIS].
Consider \eqref{e:Dt_v2}: we note that by the arguments of [@BDIS] we obtain $$\|D_t v_\ell\|_N\leq \|\nabla p * \psi_\ell\|_N+\|{\rm div}\, \mathring R * \psi_\ell\|_N+C\lambda_q^2\ell^{1-N} \delta_{q,t_0}\,.$$ Then from the inductive estimates on the pressure $p$ and the estimate on the Reynolds stress $\mathring R$, together with standard convolution estimates, we obtain \eqref{e:Dt_v2}. From \eqref{e:Dt_v2} and since $\delta_{q+1,t_0}\leq \delta_{q,t_0}$ we obtain \eqref{e:Dt_v}.
We now consider the estimate \eqref{e:D2tL2}. $$\begin{aligned}
D_t^2L_{kl}=&\Bigl(-\frac{i}{\lambda_{q+1}}(D_tDv_\ell)^T\nabla a_{kl}+\frac{i}{\lambda_{q+1}}Dv_\ell^TDv_\ell^T\nabla a_{kl}+\\
&-a_{kl}D\Phi_lDv_\ell Dv_\ell k+a_{kl}D\Phi_lD_tDv_\ell k\Bigr)\times\frac{k\times B_k}{|k|^2}.\end{aligned}$$ Note that $D_tDv_\ell=DD_tv_\ell-Dv_\ell Dv_\ell$, so that $$\begin{aligned}
\|D_tDv_\ell\|_N&\leq \|D_tv_\ell\|_{N+1}+C\|Dv_\ell\|_N\|Dv_\ell\|_0\\
&\stackrel{\eqref{e:Dt_v2}\&\eqref{e:v_est}}{\leq} C( \delta_{q,t_0}\lambda_q^2\ell^{-N}+\delta_{q+1,t_0}\lambda_q\ell^{-N-1})\left(1+\lambda_q\ell\right)\\
&\stackrel{ \eqref{e:ell_lambda}}{\leq} C \delta_{q,t_0}\lambda_q^2\ell^{-N}+C\delta_{q+1,t_0}\lambda_q\ell^{-N-1}.\end{aligned}$$
Hence utilizing the estimates in Lemma \[l:ugly\_lemma\] we obtain $$\begin{aligned}
\|D_t^2L_{kl}\|_N&\leq C\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_q\ell^{-N}\left(\delta_{q,t_0}\lambda_q+\frac{\delta_{q+1,t_0}}{\ell}\right)\left(1+\frac{\lambda_q}{\lambda_{q+1}}+\frac{\delta_{q,t_0}^{\sfrac12}\lambda_q}{\mu}\right)\\
&\stackrel{\eqref{e:conditions_lambdamu_3}}{\leq} C\delta_{q+1,t_0}^{\sfrac{1}{2}} \lambda_q\ell^{-N}(\delta_{q,t_0} \lambda_q+\delta_{q+1,t_0}\ell^{-1}).\end{aligned}$$ Thus we obtain \eqref{e:D2tL2}. The estimate \eqref{e:D2tL} then follows as a consequence of \eqref{e:ell_lambda}, Lemma \[l:ugly\_lemma\] and \eqref{e:D2tL2}.
\[r:pressure\] While the remaining estimates are the analogues of the corresponding estimates in [@BDIS], \eqref{e:Dt_v} and \eqref{e:D2tL} are sharper and are derived taking into account the bounds \eqref{e:delta_cond_second2} on the second derivatives of the pressure.
Estimates on the Reynolds stress {#s:reynolds}
================================
In this section we describe the estimates on the Reynolds stress, which follow by applying the arguments of Section 5 of [@BDIS] to the present scheme. The main result is the following proposition, which is a sharper, time-localized version of Proposition 5.1 of [@BDIS], providing estimates on a subset of the times in the complement of the regions where the cut-off functions overlap.
\[p:R\] Fix $t$ in the range ${\left|t\mu-l_{q+1}\right|}<\sfrac12(1- \lambda_{q+1}^{-{\varepsilon}_1})$. There is a constant $C$ such that, if $\delta_{q,t_0}$, $\delta_{q+1,t_0}$ and $\mu$ satisfy \eqref{e:conditions_lambdamu_3}, then we have
$$\begin{aligned}
\|R^0\|_0 +\frac{1}{\lambda_{q+1}}\|R^0\|_1+\frac{1}{\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1}}\|D_tR^0\|_0&\leq C\delta_{q+1,t_0}^{\sfrac{1}{2}}\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \ell\label{e:R0}\\
\|R^1\|_0 +\frac{1}{\lambda_{q+1}}\|R^1\|_1+\frac{1}{\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1}}\|D_tR^1\|_0&\leq C \frac{\delta_{q+1,t_0}\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{{\varepsilon}_1}}{\mu}+\nonumber\\&\qquad C\delta_{q+1,t_0}^{\sfrac{1}{2}}\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \ell\label{e:R1}\\
\|R^2\|_0 +\frac{1}{\lambda_{q+1}}\|R^2\|_1+\frac{1}{\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1}}\|D_tR^2\|_0&\leq C \frac{\delta_{q+1,t_0}\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{{\varepsilon}_1}}{\mu} +\nonumber\\&\qquad C\delta_{q+1,t_0}^{\sfrac{1}{2}}\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \ell\label{e:R2}\\
\|R^3\|_0 +\frac{1}{\lambda_{q+1}}\|R^3\|_1+\frac{1}{\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1}}\|D_tR^3\|_0&\leq C \delta_{q+1,t_0}^{\sfrac{1}{2}} \delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \ell\label{e:R3}\\
\|R^4\|_0 +\frac{1}{\lambda_{q+1}}\|R^4\|_1+\frac{1}{\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1}}\|D_tR^4\|_0&\leq C\delta_{q+1,t_0}^{\sfrac12}\delta_{q,t_0}^{\sfrac12} \lambda_q \ell\label{e:R4}\\
\|R^5\|_0 +\frac{1}{\lambda_{q+1}}\|R^5\|_1+\frac{1}{\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1}}\|D_tR^5\|_0&\leq C \frac{\delta_{q+1,t_0} \delta_{q,t_0}^{\sfrac{1}{2}}\lambda_q}{\mu}+\nonumber\\&\qquad C\delta_{q+1,t_0}^{\sfrac{1}{2}}\delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q\ell\, .\label{e:R5}\end{aligned}$$
Thus $$\begin{gathered}
\|\mathring{R}_1\|_0+\frac{1}{\lambda_{q+1}}\|\mathring{R}_1\|_1 + \frac{1}{\delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1}}\|D_t \mathring{R}_1\|_0\leq \\C \left(
\frac{\delta_{q+1,t_0} \delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{{\varepsilon}_1}}{\mu}
+ \delta_{q+1,t_0}^{\sfrac{1}{2}} \delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \ell\right)\, ,\label{e:allR2}\end{gathered}$$ $$\begin{gathered}
\|\partial_t \mathring{R}_1 + v_1\cdot \nabla \mathring{R}_1\|_0\leq \\C \delta_{q+1,t_0}^{\sfrac{1}{2}}\lambda_{q+1} \left(\frac{\delta_{q+1,t_0} \delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{{\varepsilon}_1}}{\mu}
+ \delta_{q+1,t_0}^{\sfrac{1}{2}} \delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \ell\right).\label{e:Dt_R_all2}\end{gathered}$$
The arguments are a minor variation of those found in Proposition 5.1 of [@BDIS], the key differences being:
- Since for all $l$ we have $\chi_l'$ is identically zero for times $t$ in the range ${\left|t\mu-l_{q+1}\right|}<\sfrac12(1- \lambda_{q+1}^{-{\varepsilon}_1})$, no positive powers of $\mu$ will appear as a consequence of differentiating in time.
- As previously mentioned in Section \[s:ordering\], in contrast to the case in [@BDIS] we *do not* have an estimate of the type $$\label{e:doesnt_exist}
\frac{1}{\lambda_{q+1}}\leq \frac{\delta_{q+1,t_0}^{\sfrac12}}{\mu}$$ at our disposal.
- In many of the material derivative estimates in [@BDIS] the estimate $\delta_q^{\sfrac12}\lambda_q \leq \mu$ was used in order to simplify terms: we will avoid employing such an estimate, although in its place we will sometimes use the estimate $\delta_{q,t_0}^{\sfrac12}\lambda_q \leq\delta_{q+1,t_0}^{\sfrac12}\lambda_{q+1}$.
- In [@BDIS] a new constant $\epsilon>0$ was introduced in order to state the analogous estimates: to minimize the number of small constants, we simply use ${\varepsilon}_1$ and apply the identity $\ell=\lambda_{q+1}^{{\varepsilon}_1-1}$ to reduce the number of terms in the estimates.
- No term of the type $$\label{e:stupid_term}
\frac{\delta_{q+1,t_0}^{\sfrac12}\delta_{q,t_0}\lambda_q}{\lambda_{q+1}^{1-{\varepsilon}}\mu\ell},$$ appears in the estimates \eqref{e:R0}–\eqref{e:R5}, nor within the brackets of the right hand sides of \eqref{e:allR2} and \eqref{e:Dt_R_all2}. This is related to the fact that in [@BDIS] the authors did not keep track of second derivatives of the pressure (see Remark \[r:pressure\]).[^3]
Keeping in mind the observations above, the proof of \eqref{e:R0}–\eqref{e:R5} follows by applying nearly identical arguments to those found in Proposition 5.1 of [@BDIS]. Indeed the estimates on $R^2$, $R^3$, $R^4$ and $R^5$ depend on the $C^0$ norms of $w_o$, $w_c$, $v$, $\mathring R$, $D_t w_o$, $D_t w_c$, $D_t v_{\ell}$, $D_t \mathring R$ and the $C^1$ norms of $w_o$, $w_c$, $v$, $p$, $\mathring R$. For bounding these quantities we use the inductive estimates, together with the estimates from Lemmas \[l:ugly\_lemma\] and \[l:ugly\_lemma\_2\], which are analogous to the corresponding ones in [@BDIS]. The estimate \eqref{e:allR2} easily follows as a consequence of \eqref{e:R0}–\eqref{e:R5}, and \eqref{e:Dt_R_all2} follows from \eqref{e:allR2} together with the observation $$\|\partial_t \mathring{R}_1 + v_1\cdot \nabla \mathring{R}_1\|_0 \leq \|D_t \mathring{R}_1\|_0 + \left(\|v-v_\ell\|_0 + \|w\|_0\right)\|\mathring{R}_1\|_1 \, .$$ Therefore we will restrict ourselves to proving the estimates \eqref{e:R0} and \eqref{e:R1}. For reasons of brevity, in what follows we adopt the abuse of notation $l_1=l_{q+1}$.
[**Estimates on $R^0$.**]{} Recall from [@BDIS] that, by the definition of $R^0$ given by \eqref{e:R^0_def}, taking into account Propositions \[p:stat\_phase\] and \[p:commutator\] and applying the decomposition $$\begin{split}
D_tR^0&=([D_t,\mathcal R]+\mathcal R D_t)(
\partial_tw+v_\ell\cdot \nabla w+w\cdot \nabla v_\ell)\\&=([v_{\ell}\cdot \nabla,\mathcal R]+\mathcal R D_t)(
\partial_tw+v_\ell\cdot \nabla w+w\cdot \nabla v_\ell)\end{split}\label{e:comm_decomp}$$ we need to bound the terms $\Omega_{kl}$ where $$\partial_tw+v_\ell\cdot \nabla w+w\cdot \nabla v_\ell=\sum_{kl}\Omega_{kl}e^{i\lambda_{q+1}k\cdot x}\, ,$$ that is $$\Omega_{kl}:=\left(\chi_l'L_{kl}+\chi_lD_tL_{kl}+\chi_lL_{kl}\cdot \nabla v_\ell\right)\phi_{kl}\,.$$ and the terms $\Omega'_{kl}$ where $$\label{e:Omega1}
D_t \left(\partial_tw+v_\ell\cdot \nabla w+w\cdot \nabla v_\ell\right)
:=\sum_{k}\Omega'_{kl}e^{i\lambda_{q+1}k\cdot x},$$ that is $$\begin{aligned}
&\Omega'_{kl}:= \Bigl(\partial_t^2\chi_lL_{kl}+2\partial_t\chi_lD_tL_{kl}+\chi_lD_t^2L_{kl}+\nonumber\\
&+\partial_t\chi_lL_{kl}\cdot\nabla v_\ell+\chi_lD_tL_{kl}\cdot\nabla v_\ell+\chi_lL_{kl}
\cdot\nabla D_tv_\ell-\chi_lL_{kl}\cdot\nabla v_\ell\cdot \nabla v_\ell\Bigr)\phi_{kl}.\label{e:Omega2} \end{aligned}$$ Precisely, applying Proposition \[p:stat\_phase\] with $\alpha={\varepsilon}_1$ we obtain $$\begin{aligned}
\| R^0\|_0 \leq& C\sum_{kl} \left(\ell \|\Omega_{kl}\|_0 + \lambda_{q+1}^{1-N}\ell \|\Omega_{kl}\|_N +
\lambda_{q+1}^{-N} \|\Omega_{kl}\|_{N+{\varepsilon}_1}\right),\label{e:R0est}\\
\| R^0\|_1 \leq& C\lambda_{q+1}\sum_{kl} \left(\ell \|\Omega_{kl}\|_0 + \lambda_{q+1}^{1-N}\ell \|\Omega_{kl}\|_N +
\lambda_{q+1}^{-N} \|\Omega_{kl}\|_{N+{\varepsilon}_1}\right)\nonumber+\\
&C\sum_{kl} \left(\ell \|\Omega_{kl}\|_1 + \lambda_{q+1}^{1-N}\ell \|\Omega_{kl}\|_{N+1} +
\lambda_{q+1}^{-N} \|\Omega_{kl}\|_{N+1+{\varepsilon}_1}\right)\label{e:R0Dest}\end{aligned}$$ and by Propositions \[p:stat\_phase\] and \[p:commutator\] and the decomposition we obtain $$\begin{aligned}
\|D_t R^0\|_0 \leq&C\sum_{kl}\bigg[ \left(\ell \|\Omega_{kl}'\|_0 + \lambda_{q+1}^{1-N}\ell \|\Omega_{kl}'\|_N +
\lambda_{q+1}^{-N} \|\Omega_{kl}'\|_{N+{\varepsilon}_1}\right)\nonumber\\
&+\ell\lambda_{q+1}^{-1} \|v_{\ell}\|_1 \|\Omega_{kl}\|_1 \nonumber\\&
+ \lambda_{q+1}^{1-N}\ell \left(\|\Omega_{kl}\|_{N+{\varepsilon}_1} \|v_{\ell}\|_{1+{\varepsilon}_1} +\|\Omega_{kl}\|_{1+{\varepsilon}_1} \|v_{\ell}\|_{N+{\varepsilon}_1}\right)\bigg].\label{e:DtR0est}\end{aligned}$$
Observe that since we assumed ${\left|t\mu-l_{1}\right|}<\sfrac12(1- \lambda_{q+1}^{-{\varepsilon}_1})$ we have that $\Omega_{kl},\Omega'_{kl}\equiv 0$ for all $l\neq l_{1}$. Moreover, since on the given temporal range $\chi_{l_{1}}\equiv 1$ and $\chi'_{l_{1}}\equiv 0$, we have $$\Omega_{kl_{1}}:=\left(D_tL_{kl_{1}}+L_{kl_{1}}\cdot \nabla v_\ell\right) \phi_{kl_{1}}\,,$$ and $$\begin{aligned}
& \Omega'_{kl_{1}}:= \Bigl(D_t^2L_{kl_1}+D_tL_{kl_{1}}\cdot\nabla v_\ell+L_{kl_{1}}
\cdot\nabla D_tv_\ell-L_{kl_{1}}\cdot\nabla v_\ell\cdot \nabla v_\ell\Bigr) \phi_{kl_{1}}.\end{aligned}$$
Applying Lemma \[l:ugly\_lemma\], Lemma \[l:ugly\_lemma\_2\] and \eqref{e:v_est} we obtain $$\begin{aligned}
\label{e:Omega_Est}
\|\Omega_{kl_1}\|_N\leq C \delta_{q+1,t_0}^{\sfrac12}\delta_{q,t_0}^{\sfrac12}\lambda_q\ell^{-N}
.\end{aligned}$$
Similarly we obtain $$\begin{aligned}
\nonumber
\|\Omega'_{kl_1}\|_N&\leq C\delta_{q+1,t_0}^{\sfrac{1}{2}} \lambda_{q}\ell^{-N}(\delta_{q,t_0} \lambda_q+\delta_{q+1,t_0}\ell^{-1})\\
&\stackrel{ \eqref{e:ell_lambda}\&\eqref{e:conditions_lambdamu_3}}{\leq} C\delta_{q+1,t_0}\delta_{q,t_0}^{\sfrac12}\lambda_q\lambda_{q+1}\ell^{-N}.\label{e:extraterm}\end{aligned}$$
\[r:pressure2\] Note we implicitly used the estimates and which rely on second order inductive estimates on the pressure (see Remark \[r:pressure\]). In [@BDIS], only first order estimates of the pressure were assumed, resulting in an additional error term of the type .
Hence, choosing $N$ large enough that $N{\varepsilon}_1> 3$ and combining \eqref{e:R0est}–\eqref{e:extraterm}, we obtain \eqref{e:R0}.
[**Estimates on $R^1$.**]{} Recall that a key ingredient in the estimation of $R^1$ involves estimating $$f_{klk'l'}:=\chi_l\chi_{l'}a_{kl}a_{k'l'}\phi_{kl}\phi_{k'l'}$$ and $$D_t\left(\nabla f_{klk'l'}\,e^{i\lambda_{q+1}(k+k')\cdot x}\right)e^{-i\lambda_{q+1}(k+k')\cdot x}= \Omega''_{klk'l'}.$$
More precisely, using Proposition \[p:stat\_phase\], it was shown in [@BDIS] that $$\begin{aligned}
\|R^1\|_0 \leq& C\mathop{\sum_{(k,l),(k',l')}}_{k+k'\neq 0} \left(\ell \|f_{klk'l'}\|_1 + \lambda_{q+1}^{1-N}\ell \|f_{klk'l'}\|_{N+1} +
\lambda_{q+1}^{-N} [f_{klk'l'}]_{N+1+{\varepsilon}_1}\right)\nonumber\end{aligned}$$ $$\begin{aligned}
&\|R^1\|_1 \leq\nonumber \\&\quad C\lambda_{q+1}\mathop{\sum_{(k,l),(k',l')}}_{k+k'\neq 0} \left(\ell \|f_{klk'l'}\|_1 + \lambda_{q+1}^{1-N}\ell \|f_{klk'l'}\|_{N+1} +
\lambda_{q+1}^{-N} [f_{klk'l'}]_{N+1+{\varepsilon}_1}\right)\nonumber\\
&\quad+C\mathop{\sum_{(k,l),(k',l')}}_{k+k'\neq 0} \left(\ell \|f_{klk'l'}\|_2 + \lambda_{q+1}^{1-N}\ell \|f_{klk'l'}\|_{N+2} +
\lambda_{q+1}^{-N} [f_{klk'l'}]_{N+2+{\varepsilon}_1}\right)\nonumber\end{aligned}$$ and using Proposition \[p:stat\_phase\] together with the identity $D_t\mathcal R= [v_{\ell}\cdot \nabla, \mathcal R]+\mathcal RD_t$ that $$\begin{aligned}
\|D_t R^1\|_0 &\leq C\mathop{\sum_{(k,l),(k',l')}}_{k+k'\neq 0} \left(\ell \|\Omega''_{klk'l'}\|_1 + \lambda_{q+1}^{1-N}\ell \|\Omega''_{klk'l'}\|_{N+1} +
\lambda_{q+1}^{-N} [\Omega''_{klk'l'}]_{N+1+{\varepsilon}_1}\right)\nonumber\\
&+\ell\lambda_{q+1}^{-1} \|f_{klk'l'}\|_2 \|v_{\ell}\|_1 \nonumber\\&
+ \lambda_{q+1}^{1-N}\ell \left(\|f_{klk'l'}\|_{N+1+{\varepsilon}_1} \|v_{\ell}\|_{1+{\varepsilon}_1} +\|f_{klk'l'}\|_{2+{\varepsilon}_1} \|v_{\ell}\|_{N+{\varepsilon}_1}\right)\nonumber.\end{aligned}$$
Again as a consequence of our assumption ${\left|t\mu-l_{1}\right|}<\sfrac12(1- \lambda_{q+1}^{-{\varepsilon}_1})$ we have that if either $l\neq l_1$ or $l'\neq l_1$ then $f_{klk'l'}\equiv 0$ and $\Omega''_{klk'l'}\equiv 0$. Moreover we have $$\begin{aligned}
\Omega_{kl_1k'l_1}'':=&-\left(a_{kl_1}Dv_\ell^T\nabla a_{k'l_1}+a_{k'l_1}Dv_\ell^T\nabla a_{kl_1}\right)\phi_{kl_1}\phi_{k'l_1}\\& -\lambda_{q+1}a_{kl_1}a_{k'l_1}\left(D\Phi_lDv_\ell^Tk+D\Phi_{l_1}Dv_\ell^Tk'\right) \phi_{kl_1}\phi_{k'l_1}.\end{aligned}$$
Estimating $f_{kl_1k'l_1}$ and $\Omega''_{kl_1k'l_1}$ we have from Lemma \[l:ugly\_lemma\] and Lemma \[l:ugly\_lemma\_2\] for $N\geq 1$ $${\left\|f_{kl_1k'l_1}\right\|}_N\leq C\delta_{q+1,t_0}\ell^{1-N}\left(\lambda_q+\frac{\delta_{q,t_0}^{\sfrac12}\lambda_q \lambda_{q+1}}{\mu}\right),$$ and $$\begin{aligned}
{\left\|\Omega''_{kl_1k'l_1}\right\|}_0&\leq C\delta_{q+1,t_0}\delta_{q,t_0}^{\sfrac12}\lambda_q\left(\lambda_q+ \lambda_{q+1}\right)\leq C\delta_{q+1,t_0}\delta_{q,t_0}^{\sfrac12}\lambda_q\lambda_{q+1}\\
{\left\|\Omega''_{kl_1k'l_1}\right\|}_N&\leq C\delta_{q+1,t_0}\delta_{q,t_0}^{\sfrac12}\lambda_q\lambda_{q+1}\ell^N,\end{aligned}$$ for $N\geq 1$.
Combining the above estimates and again selecting $N$ such that $N{\varepsilon}_1> 3$ we obtain \eqref{e:R1}.
We now state uniform estimates for the new Reynolds stress. Taking advantage of some of the additional observations used previously to prove Proposition \[p:R\], and applying nearly identical arguments to those of Proposition 5.1 of [@BDIS], we obtain the following proposition.
There is a constant $C$ such that, if $\delta_{q}$, $\delta_{q+1}$ and $\mu$ satisfy \eqref{e:conditions_lambdamu_2}, then we have $$\|\mathring{R}_1\|_0+\frac{1}{\lambda_{q+1}}\|\mathring{R}_1\|_1 + \frac{1}{\mu}\|D_t \mathring{R}_1\|_0\leq C \left( \delta_{q+1}^{\sfrac{1}{2}} \mu\ell\lambda_{q+1}^{{\varepsilon}_1}
+ \frac{\delta_{q+1} \delta_q^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{{\varepsilon}_1}}{\mu}
\right)\, ,\label{e:allR}$$ $$\|\partial_t \mathring{R}_1 + v_1\cdot \nabla \mathring{R}_1\|_0\leq C \delta_{q+1}^{\sfrac{1}{2}}\lambda_{q+1} \left( \delta_{q+1}^{\sfrac{1}{2}} \mu\ell\lambda_{q+1}^{2{\varepsilon}_1}
+ \frac{\delta_{q+1} \delta_q^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{2{\varepsilon}_1}}{\mu}
\right).\label{e:Dt_R_all}$$
In contrast to [@BDIS], the extra factors of $\lambda_{q+1}^{{\varepsilon}_1}$ appearing in \eqref{e:allR} and \eqref{e:Dt_R_all} are due to the fact that in the present scheme derivatives falling on $\chi_l$ pick up an extra factor of $\lambda_{q+1}^{{\varepsilon}_1}$. A second point of difference is that, unlike in [@BDIS], no terms of the form $$\label{e:absent_terms}
\delta_{q+1}^{\sfrac{1}{2}} \delta_q^{\sfrac{1}{2}} \lambda_q \ell+\frac{\delta_{q+1}^{\sfrac12}\delta_q\lambda_q}{\lambda_{q+1}^{1-{\varepsilon}}\mu}$$ appear within the brackets of the right hand sides of \eqref{e:allR} and \eqref{e:Dt_R_all}. The absence of the first term in \eqref{e:absent_terms} can be easily explained by the fact that by \eqref{e:conditions_lambdamu_2} we have $$\delta_{q+1}^{\sfrac{1}{2}} \delta_q^{\sfrac{1}{2}} \lambda_q \ell\leq \frac{\delta_{q+1} \delta_q^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{{\varepsilon}_1}}{\mu}.$$ The absence of the second term in \eqref{e:absent_terms} follows by the same reasoning as the absence of an analogous term in Proposition \[p:R\] (see the comments after the statement of Proposition \[p:R\] and Remarks \[r:pressure\] and \[r:pressure2\]).
Choice of the parameters and conclusion of the proof {#s:conclusion}
====================================================
We begin by noting that we have not imposed any upper bounds on the choice of $\lambda_0$, and thus we are free to choose $\lambda_0$ as large as needed: in what follows we will use this fact multiple times without further comment.
$1/5-{\varepsilon}$ convergence {#varepsilon-convergence .unnumbered}
-------------------------------
We now make the following parameter choices $$\begin{aligned}
\alpha&:=1+{\varepsilon}_0, & \lambda_q&= \floor{\lambda_0^{\alpha^q}},\\
{\varepsilon}_1&:=\frac{{\varepsilon}_0^2}{18},&
\delta_q &:= \lambda_q^{-\sfrac25+2{\varepsilon}_0},\\
\mu&:= \delta_q^{\sfrac{1}{4}}\delta_{q+1}^{\sfrac{1}{4}} \lambda_q^{\sfrac{1}{2}} \lambda_{q+1}^{\sfrac{1}{2}},\end{aligned}$$ where $\floor{a}$ denotes the largest integer not exceeding $a$. It is worth noting that with the above choices, our definition of $\mu$ agrees with the definition given in [@BDIS].
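With these choices, the exponent arithmetic used below to close the induction can be checked directly. As a sketch (up to the integer parts, writing every factor as a power of $\lambda_q$ with $\lambda_{q+1}\approx\lambda_q^{\alpha}$ and $\ell=\lambda_{q+1}^{{\varepsilon}_1-1}$):

```latex
\delta_{q+1}^{\sfrac12}\approx\lambda_q^{\alpha(-\sfrac15+\varepsilon_0)},\qquad
\mu\approx\lambda_q^{(1+\alpha)\left(\sfrac25+\sfrac{\varepsilon_0}{2}\right)},\qquad
\ell\,\lambda_{q+1}^{2\varepsilon_1}\approx\lambda_q^{\alpha(3\varepsilon_1-1)},
```

so that the total exponent of $\delta_{q+1}^{\sfrac12}\mu\ell\lambda_{q+1}^{2{\varepsilon}_1}$ equals $\alpha(-\sfrac15+{\varepsilon}_0)+(1+\alpha)(\sfrac25+\sfrac{{\varepsilon}_0}{2})+\alpha(3{\varepsilon}_1-1)=-\sfrac25+\frac{6{\varepsilon}_0}{5}+\frac{5{\varepsilon}_0^2}{3}+\frac{{\varepsilon}_0^3}{6}$ after substituting $\alpha=1+{\varepsilon}_0$ and ${\varepsilon}_1={\varepsilon}_0^2/18$.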
Having made the above choices, it is clear that the parameter inequalities of Section \[s:ordering\] are satisfied. Moreover, assuming \eqref{e:delta_cond_first}–\eqref{e:delta_cond_last}, it follows as a consequence of Corollary \[c:ugly\_cor\] that \eqref{e:delta_cond_first} and \eqref{e:delta_cond_second} are satisfied with $q$ replaced by $q+1$. In order to show \eqref{e:iter_rey} and \eqref{e:delta_cond_last} with $q$ replaced by $q+1$ we note that with our choices of parameters we obtain from \eqref{e:allR} and \eqref{e:Dt_R_all} that $$\begin{aligned}
{\left\|\mathring R_{q+1}\right\|}_0+\frac{1}{\lambda_{q+1}}{\left\|\mathring R_{q+1}\right\|}_1&\leq C\delta_{q+1}^{\sfrac12}\mu\ell\lambda_{q+1}^{{\varepsilon}_1}\\
\frac1{\delta_{q+1}^{\sfrac12}\lambda_{q+1}}{\left\|(\partial_t+v_{q+1}\cdot \nabla)\mathring R_{q+1}\right\|}_0&\leq C\delta_{q+1}^{\sfrac12}\mu\ell\lambda_{q+1}^{2{\varepsilon}_1}.\end{aligned}$$ Hence, since $$\delta_{q+1}^{\sfrac12}\mu\ell\lambda_{q+1}^{2{\varepsilon}_1}\leq C\lambda_q^{-\sfrac25+\frac{6{\varepsilon}_0}{5}+\frac{5{\varepsilon}_0^2}{3}+\frac{{\varepsilon}_0^3}{6}}
\leq C \delta_{q+2}\lambda_q^{-{\varepsilon}_0^2},$$ we obtain both \eqref{e:iter_rey} and \eqref{e:delta_cond_last} with $q$ replaced by $q+1$. Since the inequalities \eqref{e:delta_cond_first}–\eqref{e:delta_cond_last} hold for $q=0$, we obtain by induction that they hold for all $q \in {\ensuremath{\mathbb{N}}}$. The remaining inductive inequalities with $q$ replaced by $q+1$ then follow as a consequence of Corollary \[c:ugly\_cor\], together with the estimates on $v$, $p$, $\mathring R$, $w$, $w_o$ and $w_c$. In particular, one may derive time derivative estimates on $w$ and $p_1-p$ from the simple decomposition $\partial_t=D_t-v_{\ell}\cdot\nabla$ and the estimates $$\begin{aligned}
{\left\|\partial_t w\right\|}_0&\leq {\left\|\partial_t w_o\right\|}_0+{\left\|\partial_t w_c\right\|}_0\\
&\stackrel{\eqref{e:uni_v_bound}}{\leq} {\left\|D_t w_o\right\|}_0+{\left\|D_t w_c\right\|}_0+{\left\|w_o\right\|}_1+{\left\|w_c\right\|}_1\end{aligned}$$ and $$\begin{aligned}
\|\partial_t (p_{q+1} - p_q)\|_0 &\leq (\|w_c \|_0 + \|w_o\|_0) (\|\partial_t w_c\|_0+ \|\partial_t w_o\|_0)\\
&\qquad + 2 \|w\|_0\|\partial_t v\|_0
+ \ell\|v\|_1\|\partial_t w\|_0\\
&\stackrel{\eqref{e:uni_v_bound}}{\leq} (\|w_c \|_0 + \|w_o\|_0) ({\left\|D_t w_o\right\|}_0+{\left\|D_t w_c\right\|}_0+{\left\|w_o\right\|}_1+{\left\|w_c\right\|}_1)\\
&\qquad + 2 \|w\|_0(\|\partial_t v+v\cdot\nabla v\|_0+\|v\|_1)
+ \ell\|v\|_1\|\partial_t w\|_0\\
&\leq (\|w_c \|_0 + \|w_o\|_0) ({\left\|D_t w_o\right\|}_0+{\left\|D_t w_c\right\|}_0+{\left\|w_o\right\|}_1+{\left\|w_c\right\|}_1)\\
&\qquad + 2 \|w\|_0(\|p\|_1+\|\mathring R\|_1+\|v\|_1)
+ \ell\|v\|_1\|\partial_t w\|_0\end{aligned}$$ The required estimates then follow as a consequence of -, , , , and .
$1/3-{\varepsilon}$ convergence {#varepsilon-convergence-1 .unnumbered}
-------------------------------
Let us define $U^{(q)}$ to be the set $$U^{(q)}=\bigcup_{l\in [-\mu_q,\mu_q]}[\mu_q^{-1}(l+\sfrac12-\lambda_q^{-{\varepsilon}_1}),\mu_q^{-1}(l+\sfrac12+\lambda_q^{-{\varepsilon}_1})],$$ i.e. a union of $\sim2\mu_q$ balls of radius $\lambda_{q}^{-{\varepsilon}_1}\mu_q^{-1}$ and define $$V^{(q)}=\bigcup_{q'=q}^\infty U^{(q')}.$$ Observe that $V^{(q)}$ can be covered by a sequence of balls of radius $r_i$ such that $$\label{e:Hausdorff_est}
\sum r_i^d\leq 3\sum_{q'=q}^{\infty} \lambda_{q'}^{-d{\varepsilon}_1}\mu_{q'}^{1-d}.$$ Thus assuming $$\label{e:d_ineq}
d>\frac{(1+\alpha)(-\frac15+{\varepsilon}_0+1)}{(1+\alpha)(-\frac15+{\varepsilon}_0+1)+2\alpha{\varepsilon}_1},$$ it follows that the right hand side of converges to zero as $q$ tends to infinity.
From this point on we assume $d<1$ is fixed, satisfying –, which we note is possible because the right-hand side of is strictly less than $1$.
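The admissibility of such a $d$ is easy to check numerically. The following minimal sketch (in Python; the value ${\varepsilon}_0 = 1/100$ is an arbitrary illustrative choice, not dictated by the proof) evaluates the right-hand side of the bound and confirms that it is strictly less than $1$:

```python
from fractions import Fraction

# Illustrative choice; the proof only requires eps_0 sufficiently small.
eps0 = Fraction(1, 100)
alpha = 1 + eps0               # alpha := 1 + eps_0
eps1 = eps0**2 / 18            # eps_1 := eps_0^2 / 18

# Right-hand side of the lower bound on the dimension d.
num = (1 + alpha) * (Fraction(-1, 5) + eps0 + 1)
rhs = num / (num + 2 * alpha * eps1)

assert rhs < 1                 # an admissible d < 1 exists
print(float(rhs))              # ~0.999993: the admissible window (rhs, 1) is tiny
```

Note that $1$ minus the right-hand side is of order ${\varepsilon}_0^2$, consistent with the footnote stating $1-d>C{\varepsilon}^2$.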
For any time $t_0\in \bigcap_N V^{(N)}$ we simply set $\delta_{q,t_0}=\delta_q$ for all $q$.
Now suppose $t_0\notin V^{(N)}$ for some integer $N$, and assume $N$ to be the smallest such integer. We now make the following parameter choices $$\delta_{q+1,t_0}:=\begin{cases}
\lambda_{q+1}^{-\sfrac25+2{\varepsilon}_0} & \text{if } q \leq N \\
\max\left(\lambda_q^{-\frac{{\varepsilon}_0^2} 9} \delta_{q,t_0}^\alpha, \lambda_{q+1}^{-\sfrac23+2{\varepsilon}_0}\right), & \text{if } q > N
\end{cases}$$ It follows that $$\frac{\delta_{q,t_0}\lambda_q}{\delta_{q+1,t_0}\lambda_{q+1}}\geq \lambda_q^{\sfrac{{\varepsilon}_0}3},$$ from which we obtain assuming ${\varepsilon}_0$ is sufficiently small. Applying Corollary \[l:ugly\_lemma\_2\] and Proposition \[p:R\] iteratively we see that (\[e:delta\_cond\_first2\]-\[e:delta\_cond\_last2\]) hold for all $q\ge N$. In particular, in order to show for $q$ replaced by $q+1$ we note that by Proposition \[p:R\] we have for all times $t$ satisfying ${\left|t\mu_{q+1}-l_{q+1}\right|}<\sfrac12(1- \lambda_{q+1}^{-{\varepsilon}_1})$ $$\label{e:wanted}
\|\mathring{R}_1\|_0+\frac{1}{\lambda_{q+1}}\|\mathring{R}_1\|_1 \leq \\\underbrace{C \delta_{q+1,t_0}^{\sfrac{1}{2}}\delta_{q,t_0}^{\sfrac{1}{2}}\lambda_q\ell}_I
+ \underbrace{C\frac{\delta_{q+1,t_0} \delta_{q,t_0}^{\sfrac{1}{2}} \lambda_q \lambda_{q+1}^{{\varepsilon}_1}}{\mu}}_{II}.$$ Notice that if ${\left|\mu_{q+2}t-l_{q+2}\right|} \leq 1$ then $$\begin{aligned}
{\left|t\mu_{q+1}-l_{q+1}\right|}&\leq \frac{\mu_{q+1}}{\mu_{q+2}}{\left|\mu_{q+2}t-l_{q+2}\right|}+{\left|\frac{\mu_{q+1}l_{q+2}}{\mu_{q+2}}-l_{q+1}\right|} \\
&\leq \frac{\mu_{q+1}}{\mu_{q+2}}+\mu_{q+1}{\left|\frac{l_{q+2}}{\mu_{q+2}}-t_0\right|}+{\left|\mu_{q+1}t_0-l_{q+1}\right|}\\
&< \frac{2\mu_{q+1}}{\mu_{q+2}}+{\left|\mu_{q+1}t_0-l_{q+1}\right|}\\
&< 2\lambda_{q}^{-\sfrac{{\varepsilon}_0}4}+\sfrac12-\lambda_{q+1}^{-{\varepsilon}_1}\\
&<\sfrac12(1- \lambda_{q+1}^{-{\varepsilon}_1}).\end{aligned}$$ Thus holds for times $t$ in the range ${\left|\mu_{q+2}t-l_{q+2}\right|}<1$.
Taking logarithms of $I$ and $II$ we obtain $$\label{e:log1}
\ln I\leq \left(1+\frac{{\varepsilon}_0}{2}\right)\ln \delta_{q,t_0}+\left(\frac{{\varepsilon}_0^2}{ 18}+\frac{{\varepsilon}_0^3}{ 18}-{\varepsilon}_0\right)\ln\lambda_q+C$$ and $$\label{e:log2}
\ln II \leq \left(\frac32+{\varepsilon}_0\right) \ln \delta_{q,t_0}+\left(\frac15-\frac{7{\varepsilon}_0}{5}-\frac{4{\varepsilon}_0^2}{9}+O({\varepsilon}_0^3)\right)\ln \lambda_q+ C.$$ Note by definition we have $$\label{e:log3}
\ln \delta_{q+2,t} \geq \left(1+{\varepsilon}_0\right)^2\ln \delta_{q,t_0}-\left(\frac{2{\varepsilon}_0^2}{ 9}+O({\varepsilon}_0^3)\right)\ln \lambda_{q}.$$ Thus since $\delta_{q,t_0}\geq \lambda_q^{-\sfrac23+2{\varepsilon}_0}$, combining and we obtain $$\begin{aligned}
\ln\left(\frac{I}{\delta_{q+2,t}}\right)&\leq \left(-\frac{3{\varepsilon}_0}{2}-{\varepsilon}_0^2\right)\ln \delta_{q,t_0}+\left(\frac{5{\varepsilon}_0^2}{18}-{\varepsilon}_0+O({\varepsilon}_0^3)\right)\ln\lambda_q+C\nonumber\\
&\leq \left(-\frac{{\varepsilon}_0^2}{4}+O({\varepsilon}_0^3)\right)\ln\lambda_q+C.\label{e:log4}\end{aligned}$$ Similarly, since $\delta_{q,t_0}\leq \lambda_q^{-\sfrac25+2{\varepsilon}_0}$, combining and we obtain $$\begin{aligned}
\ln\left(\frac{II}{\delta_{q+2,t}}\right)&\leq \left(\frac12-{\varepsilon}_0-{\varepsilon}_0^2\right) \ln \delta_{q,t_0}+\left(\frac15-\frac{7{\varepsilon}_0}{5}-\frac{2{\varepsilon}_0^2}{9}+O({\varepsilon}_0^3)\right)\ln \lambda_q+ C\nonumber\\
&\leq \left(-{\varepsilon}_0^2+O({\varepsilon}_0^3)\right)\ln \lambda_q+ C.\label{e:log5}\end{aligned}$$ Hence assuming ${\varepsilon}_0$ is sufficiently small, from and we obtain for $q$ replaced by $q+1$.
Observe also that there exists an $N'$ such that for all $q\geq N+N'$ we have $$\delta_{q,t_0}=\lambda_q^{-\sfrac23+2{\varepsilon}_0},$$ and hence the inequality is never satisfied for $q\geq N+ N'$. Thus $$\Xi^{N+N'}\subset V^{(N)}.$$ In particular $N'$ can be chosen universally, independent of $N$. Fixing $\delta>0$ and choosing $N$ such that $V^{(N)}$ can be covered by a sequence of balls of radius $r_i$ satisfying $$\sum r_i^d<\delta,$$ we obtain that if we set $M=N+N'$ then is satisfied, which concludes the proof of Proposition \[p:iterate\].
For the sake of completeness we note that analogously to the estimates -, the estimates - follow as a consequence of Lemma \[l:ugly\_lemma\], Lemma \[l:ugly\_lemma\_2\] and Proposition \[p:R\] – here the set $\Omega$ can be taken explicitly to be $$\Omega:= \bigcap_{q=1}^\infty V^{(q)}.$$
[10]{}
Extended self-similarity in turbulent flows. , 1 (1993), 29–32.
Anomalous dissipation for 1/5-[H]{}ölder [E]{}uler flows. (to appear).
. (Feb. 2013).
Energy conservation and [O]{}nsager’s conjecture for the [E]{}uler equations. , 6 (2008), 1233–1252.
Onsager’s conjecture on the energy conservation for solutions of [E]{}uler’s equation. , 1 (1994), 207–209.
The [E]{}uler equations as a differential inclusion. , 3 (2009), 1417–1436.
On admissibility criteria for weak solutions of the [E]{}uler equations. , 1 (2010), 225–260.
Dissipative [E]{}uler flows and [O]{}nsager’s conjecture. (2012), 1–40.
The $h$-principle and the equations of fluid dynamics. , 3 (2012), 347–375.
Dissipative continuous [E]{}uler flows. (2013), 1–26.
Inertial energy dissipation for weak solutions of incompressible [E]{}uler and [N]{}avier-[S]{}tokes equations. , 1 (2000), 249–255.
Energy dissipation without viscosity in ideal hydrodynamics. [I]{}. [F]{}ourier analysis and local energy transfer. , 3-4 (1994), 222–240.
Onsager and the theory of hydrodynamic turbulence. , 1 (2006), 87–135.
Fully developed turbulence and intermittency. , 1 (1980), 359–367.
. Cambridge University Press, Cambridge, 1995. The legacy of A. N. Kolmogorov.
H[ö]{}lder continuous [E]{}uler flows in three dimensions with compact support in time. (2012), 1–173.
The local structure of turbulence in incompressible viscous fluid for very large [R]{}eynolds numbers. , 1890 (1991), 9–13. Translated from the Russian by V. Levin, Turbulence and stochastic processes: Kolmogorov’s ideas 50 years on.
Turbulence and [N]{}avier-[S]{}tokes equation. (1976), 121.
Statistical hydrodynamics. , Supplemento, 2 (Convegno Internazionale di Meccanica Statistica) (1949), 279–287.
An inviscid flow with compact support in space-time. , 4 (1993), 343–401.
Weak solutions with decreasing energy of incompressible [E]{}uler equations. , 3 (2000), 541–603.
Existence of weak solutions for the incompressible [E]{}uler equations. , 5 (2011), 727–730.
[^1]: More precisely, the Hausdorff dimension $d$ is such that $1-d>C{\varepsilon}^2$ for some positive constant $C$.
[^2]: In [@BDIS] the estimates corresponding to - are written in terms of a sequence of parameters $\delta_q$ which in the context of the present paper are defined to be $\delta_q:=\lambda_q^{-\sfrac25+2{\varepsilon}_0}$ (cf. Section \[s:ordering\] and Section \[s:conclusion\]).
[^3]: Such a term imposes strong restrictions on the choice of $\ell$ to ensure convergence and is in part the reason for the complicated choice of $\ell$ taken in [@BDIS].
---
abstract: |
Cancer progression is an evolutionary process that is driven by mutation and selection in a population of tumor cells. We discuss mathematical models of cancer progression, starting from traditional multistage theory. Each stage is associated with the occurrence of genetic alterations and their fixation in the population. We describe the accumulation of mutations using conjunctive Bayesian networks, an exponential family of waiting time models in which the occurrence of mutations is constrained to a partial temporal order. Two opposing limit cases arise if mutations either follow a linear order or occur independently. We derive exact analytical expressions for the waiting time until a specific number of mutations have accumulated in these limit cases as well as for the general conjunctive Bayesian network. Finally, we analyze a stochastic population genetics model that explicitly accounts for mutation and selection. In this model, waves of clonal expansions sweep through the population at equidistant intervals. We present an approximate analytical expression for the waiting time in this model and compare it to the results obtained for the conjunctive Bayesian networks.\
[**Keywords:**]{} [Bayesian network, cancer, genetic progression, multistage theory, Wright-Fisher process]{}
author:
- Moritz Gerstung
- Niko Beerenwinkel
bibliography:
- 'nikos.bib'
- 'lit.bib'
- 'lit2.bib'
title: Waiting time models of cancer progression
---
Introduction
============
Cancer is a genetic disease that develops as the result of mutations in specific genes. When these genes work normally, they control the growth of cells in the body. Cancer cells have lost the normal cooperative behavior of cells in multicellular organisms, resulting in increased proliferation. Tumor development starts from a single genetically altered cell and proceeds by successive clonal expansions of cells that have acquired additional advantageous mutations. The progression of cancer is characterized by the accumulation of these genetic changes [@CairnsN1975; @NowellS1976; @MerloNRC2006; @Michor2004; @Crespi2005].
Many oncogenes and tumor suppressor genes have been identified that contribute to tumorigenesis [@FutrealNRC2004]. In general, the mutational patterns of cancer cells vary greatly, not only among cancer types, but also among individual tumors of the same type. Some of this genetic variation might be due to the fact that all cancer cells need to acquire certain functional changes, the hallmarks of cancer, and most of these functions are accomplished by several gene products acting together in signaling pathways [@Hanahan2000; @Vogelstein2004]. Thus, many different genetic alterations can have similar phenotypic effects.
The incidence of sporadic cancer indicates that the underlying events are stochastic and that, in general, several steps are necessary. Therefore, a random-process approach appears to be an appropriate modeling strategy. The progression stages are generally not observable on a molecular level [*in vivo*]{}, and in a clinical setting, patients are typically diagnosed at the final stages of tumorigenesis. Mathematical modeling plays an important role in cancer research today, because it can be used to reconstruct and to analyze the evolutionary process driving cancer progression [@Anderson2008].
Models of tumorigenesis have been proposed early on to explain cancer incidence data [@Nordling1953; @Armitage1954; @Knudson1971]. These models assume that cancer is a stochastic multistep process with small transition rates and they have been further developed into the multistage theory of cancer [@Moolgavkar1992; @Frank2007; @Jones2008]. The tumor stages may be defined by specific mutations, by the number of mutations, by epigenetic changes, by functional alterations, or by histological properties. Since cancer progression is an evolutionary process, population genetics models are used extensively to describe tumorigenesis [@Nowak2006; @Wodarz2005; @Schinazi2006; @Beerenwinkel2007c; @Durrett2008]. Various deterministic and stochastic models have been proposed, some of which address specific questions, such as the dynamics of tumor suppressor genes [@Iwasa2005], genetic instability [@Nowak2006b], or tissue architecture [@Nowak2003].
As more and more genetic data from cancer cells become available from comprehensive studies [@SjoblomS2006; @WoodS2007; @JonesS2008; @ParsonsS2008; @LeyN2008] and through databases [@FutrealNRC2004; @Baudis2007; @MitelmanDB2008], one can also start investigating the dependencies between genetic events using statistical models. In view of multistage theory, tumors proceed through distinct stages, which can be characterized by the appearance of certain mutations. Particular attention has been paid to inferring the order of genetic alterations. Several graphical models have been developed for this purpose and applied to various cancer types [@Desper1999; @Radmacher2001; @Hjelm2006; @Rahnenfuehrer2005; @Beerenwinkel2006a; @Beerenwinkel2007d; @Beerenwinkel2007e].
A quantitative understanding of carcinogenesis can help develop new diagnostic and prognostic markers. Today a variety of univariate genetic markers is known [@SidranskyNRC2002], most of them comprising well-known oncogenes or tumor suppressors. Because of the diverse genetic nature of cancer, markers measuring the accumulation of several mutations, i.e., the progression of cancer, may improve on existing ones. Here, we investigate the dynamics of cancer progression as a function of transition rates and of order constraints on the genetic events. The expected waiting time can be regarded as a measure of genetic progression to cancer [@Rahnenfuehrer2005].
In Section \[sec:multistage-theory\], we introduce the general stochastic multistep process and present an equivalent description in terms of ordinary differential equations (ODEs). At this abstract level of description, carcinogenesis may be regarded as proceeding through distinct stages, which can be defined by histological grades, functional changes, or genetic alterations. In Section \[sec:genet-progr-canc\], these stages will be associated with the occurrence of a certain number of mutations. We present expressions of the waiting time until a given stage is reached, for different models of mutation. Finally, in Section \[sec:population-dynamics\], we analyze an evolutionary model of carcinogenesis explicitly describing the appearance of genetic alterations in the tissue by mutations in single cells and their subsequent clonal expansions.
Multistage theory {#sec:multistage-theory}
=================
The multistage theory of cancer postulates that tumorigenesis is a linear multistep process, in which each step from one stage to the next is a rare event (Figure \[fig:multistage\]). Let us denote the cancer stages by $0$, $1$, $2$, $\dots$, $k$, where stage $0$ refers to the normal precancerous state, $1$ to the first adenomatous stage, and $k$ to a defined cancerous endpoint, such as the formation of metastases. The process is started at time $t=0$ in state $0$.
[Figure \[fig:multistage\]: the linear multistage model, with stages $0 \to 1 \to 2 \to \dots \to k$ and transition rates $u_1, u_2, \dots, u_k$.]
The transition rate from stage $j-1$ to stage $j$ is denoted $u_j$. That is, the waiting times for the transitions to occur are assumed to be independently exponentially distributed. Here, the coefficients $u_j$ denote the transition rates between the stages of tumor development; later we will link them to different models of mutation and fixation. Because of the sequential nature of the linear model, the waiting time $\tau_k$ until stage $k$ is reached is given recursively by the sum of exponentially distributed random variables, $$\label{eq:tau}
\tau_1 \sim \operatorname{Exp}(u_1), \qquad \tau_j \sim \tau_{j-1} + \operatorname{Exp}(u_j), \quad j = 2,\dots, k.$$ The waiting times follow the linear order $\tau_1 < \dots < \tau_k$ and the expected waiting time is $$\label{eq:waitingtimelinear}
{\mathbb{E}}[\tau_k] = {\mathbb{E}}[\tau_{k-1}] + \frac{1}{u_k}
= \sum_{j=1}^k \frac{1}{u_j}.$$ In particular, if all transition rates are equal, $u_j = u$ for all $j=1, \dots, k$, we find ${\mathbb{E}}[\tau_k] = k/u$. Hence the waiting time scales linearly with the number of transitions $k$.
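The recursion Eq. (\[eq:tau\]) and the closed form Eq. (\[eq:waitingtimelinear\]) are easy to check by direct simulation. The following minimal sketch (in Python; the rates $u_j$ are arbitrary illustrative values, not taken from data) compares a Monte Carlo estimate of ${\mathbb{E}}[\tau_k]$ with $\sum_j 1/u_j$:

```python
import random

def sample_tau_k(rates):
    """One realization of tau_k for the linear multistep process,
    Eq. (tau): tau_j = tau_{j-1} + Exp(u_j)."""
    t = 0.0
    for u in rates:
        t += random.expovariate(u)
    return t

random.seed(0)
rates = [0.5, 1.0, 2.0]                       # illustrative u_1, u_2, u_3
n = 100_000
mc = sum(sample_tau_k(rates) for _ in range(n)) / n
exact = sum(1.0 / u for u in rates)           # Eq. (waitingtimelinear)
print(mc, exact)                              # both close to 3.5
```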
Let $f_{\tau_1,\dots,\tau_k}(t_1, \dots, t_k)$ be the density function of the joint distribution of waiting times $\tau = (\tau_1, \dots, \tau_k)$ defined by Eq. (\[eq:tau\]). The linear order of the waiting times $\tau_j$ induces the factorization $$\label{eq:lineardensity}
f_{\tau_1,\dots,\tau_k}(t_1, \dots, t_k) = \prod_{j=1}^k f_{\tau_j|\tau_{j-1}}(t_j \mid t_{j-1})$$ of $f_{\tau_1,\dots,\tau_k}$ into the conditional densities $$\begin{split}
\label{eq:transition}
f_{\tau_j|\tau_{j-1}}(t_j \mid t_{j-1}) = u_j \exp \left( -u_j[t_j -
t_{j-1}] \right) \, {\mathbb{I}}(t_j > t_{j-1}),
\end{split}$$ where ${\mathbb{I}}$ is the indicator function.
Multistage theory can also be formulated as a system of ordinary differential equations (ODEs). We derive this formulation as follows: Let $x_j(t)$ denote the probability that stage $j$ is reached before time $t \ge 0$, but stage $j+1$ has not yet been reached, $$\begin{aligned}
x_0(t) &=& \operatorname{Prob}[0 < t < \tau_1] \nonumber \\
x_j(t) &=& \operatorname{Prob}[\tau_j < t < \tau_{j+1}],
\quad j = 1, \dots k-1, \\
x_k(t) &=& \operatorname{Prob}[\tau_k < t]. \nonumber\end{aligned}$$ We have $x_0(t) + \dots + x_k(t) = 1$ and $x_j(t) = \operatorname{Prob}[t <
\tau_{j+1}] - \operatorname{Prob}[t < \tau_j]$ due to the linearity of transitions. It follows that $\dot{x}_j(t) = f_{\tau_j}(t) - f_{\tau_{j+1}}(t)$. Using the conditional exponential nature of the model, Eq. (\[eq:transition\]), one finds that $f_{\tau_j}(t) = \int_0^\infty
f_{\tau_j,\tau_{j-1}}(t, t') {\,\mathrm{d}}t' =\int_0^{t} u_{j} \exp(-u_{j}[t-t'])
f_{\tau_{j-1}}(t') {\,\mathrm{d}}t'$. From the identity $\exp(-u_{j}t)=u_{j}\int_{t}^\infty \exp(-u_{j}t') {\,\mathrm{d}}t'$, one obtains $$\begin{aligned}
\nonumber
f_{\tau_j}(t) &= u_{j} \int_{t}^\infty\!\!\! \int_0^{t} u_{j} \exp(-u_{j}[t''-t'])
f_{\tau_{j-1}}(t') {\,\mathrm{d}}t' {\,\mathrm{d}}t''\\
& = u_{j}
\int_{t}^\infty\!\!\! \int_0^{t}f_{\tau_j,\tau_{j-1}}(t'',t') {\,\mathrm{d}}t' {\,\mathrm{d}}t''\\
\nonumber
&= u_{j}\operatorname{Prob}[\tau_{j-1} < t < \tau_{j}] = u_{j}
x_{j-1}(t).\end{aligned}$$ Hence, the probabilities $x_j(t)$ obey the set of ODEs, $$\begin{aligned}
\label{eq:ode}
\dot{x}_0(t) & = & -u_1 \, x_0(t), \nonumber \\
\dot{x}_j(t) & = & u_{j} x_{j-1}(t) - u_{j+1}x_j(t),
\quad j = 1, \dots, k-1, \quad\\
\dot{x}_k(t) & = & u_{k} \, x_{k-1}(t), \nonumber\end{aligned}$$ subject to initial conditions $x_0(0) = 1$ and $x_j(0)
= 0$ for all $j \ge 1$. These rate equations describe the linear chain of exponential waiting time processes as a probability flux of rate $u_{j} x_{j-1}(t)$ from state $j-1$ to state $j$.
If all rates are identical, $u_j=u$ for all $j$, then the solution of this linear system of ODEs is given by Poisson distributions with time-dependent parameter $ut$, $$\label{eq:Pois}
x_j(t) = \operatorname{Pois}(j; ut) = \frac{(ut)^j \exp(-ut)}{j!},
\quad j = 0, \dots, k-1.$$ The probability of having reached the final stage of progression, $k$, at time $t$ is $$\label{eq:x_k}
x_k(t) = 1 - e^{-ut} \, \sum_{j=0}^{k-1}
\frac{(ut)^j}{j!}=\operatorname{Pois}(k; ut)\sum_{j=0}^\infty
\frac{(u t)^j}{(k+j)_j}.$$ We also recover from the ODE system the expected waiting time to the final cancer stage, $${\mathbb{E}}[\tau_k] = \int_0^\infty u t x_{k-1}(t) {\,\mathrm{d}}t
= \frac{k}{u}.$$
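The Poisson solution Eq. (\[eq:Pois\]) can also be recovered by integrating the rate equations Eq. (\[eq:ode\]) numerically. The following minimal sketch (in Python, using a simple forward-Euler scheme; $u=1$, $k=5$, and $t=3$ are illustrative choices) checks the agreement:

```python
import math

def poisson(j, m):
    return m**j * math.exp(-m) / math.factorial(j)

u, k, T, dt = 1.0, 5, 3.0, 1e-4
x = [1.0] + [0.0] * k                         # x_0(0) = 1, x_j(0) = 0 for j >= 1
for _ in range(int(T / dt)):
    flux = [u * x[j] for j in range(k)]       # probability flux from j to j+1
    x[0] -= flux[0] * dt
    for j in range(1, k):
        x[j] += (flux[j - 1] - flux[j]) * dt
    x[k] += flux[k - 1] * dt

# Intermediate stages match Eq. (Pois); the absorbing stage matches Eq. (x_k).
for j in range(k):
    assert abs(x[j] - poisson(j, u * T)) < 1e-3
assert abs(x[k] - (1 - sum(poisson(j, u * T) for j in range(k)))) < 1e-3
```

The total probability $\sum_j x_j(t) = 1$ is conserved exactly by the scheme, since each flux term enters one equation with a plus and another with a minus sign.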
Multistage theory provides a mathematical framework for describing the stepwise progression of cancer. For the above discussion of the model, we have neither specified the definition of the postulated stages, nor the nature of the transitions. Indeed, different interpretations and uses of the model are possible. In the following we link multistage theory closely to the genetic progression of cancer.
Genetic progression of cancer {#sec:genet-progr-canc}
=============================
In this section, we associate the stages of tumorigenesis to mutations in the genomes of cancer cells. Each stage is defined by the number of mutations that have accumulated in the cells of the tissue. Each mutation occurs initially in a single cell as the result of an erroneous DNA duplication. Some mutations alter the behavior of the cell in such a way that it experiences a growth advantage relative to the other cells in the tissue. These cells can outgrow their competitors in a clonal expansion and the mutation spreads in the tissue. While the first appearance of a mutation is essentially a random process, i.e., each mutation is equally likely to appear, the fate of a mutation in the population depends on the relative fitness of the cell in which it occurs.
We define tumor progression to be in stage $j$ of the multistep model, Eq. (\[eq:tau\]), if most of the tumor cells harbor exactly $j$ mutations. We are interested in the waiting time until $k$ out of $d$ possible mutations have accumulated, where typically $k \ll d$. For example, [@SjoblomS2006] suggest that $k \approx 20$ genes out of $d \approx 100$ to $1000$ need to be hit in order to develop invasive colon cancer. In this interpretation of multistage theory, stages correspond to population states and transitions correspond to genetic transformations of the ensemble of tumor cells, including mutation and selection. These population dynamics will be investigated in more detail in the next section. The focus of the present section is on how different models of accumulating mutations affect the waiting time.
Genetic mutations occur randomly at erroneous cell divisions, but the subsequent fixation of the mutation within the cell population is constrained: A mutation will only spread if it confers a growth advantage. This restricts not only the mutations driving cancer, but also the order in which they can appear, because some physiological changes must be achieved before others. For example, in colon cancer, cells must lose the ability to undergo apoptosis before additional mutations accumulate in the resulting neoplasia. Hence loss of function of the tumor suppressor gene [*APC*]{} controlling apoptosis is necessary before other mutations such as [*KRAS2*]{} can fixate [@Nowak2003; @Vogelstein2004]. Furthermore, sometimes only the combined action of mutations drives cancer progression. It is known, for example, that only the combination of [*p53*]{} and [*Ras*]{} triggers the development of tumors in mice [@LandS1983].
In general, there may exist several order constraints for the successive fixation of mutations as shown in Figure \[fig:graph\]. The simplest of these constraints is the linear model, where the waiting times of all mutations are totally ordered (Figure \[fig:graph\](a)). The linear model is exactly the multistep process discussed in the previous section. Alternatively, mutations may occur independently without any constraints (Figure \[fig:graph\](b)). If there exist order relations among some of the possible mutations, a partial order may be used to describe the process of accumulating mutations (Figure \[fig:graph\](c)). We will discuss these models separately and show how the topology of the genotype space affects the waiting time.
Let $d$ be the number of possible mutations and denote by $T_j$ the waiting time for mutation $j$ ($j=1,\dots,d$) to arise and to become established in the tumor. The waiting times $T_j$ are assumed to be exponentially distributed with parameters $\lambda_j$, and they obey certain temporal order constraints (Figure \[fig:graph\]). The joint distribution of $T = (T_1, \dots, T_d)$ determines how long it takes until $k$ out of the $d$ mutations have accumulated. We define the random variable $\tau_k$ denoting stage $k$ as the waiting time until any $k$ mutations appear, $$\label{eq:minmax}
\tau_k = \min_{ \{j_1,\dots,j_k\} \subset [d]} \,
\max \, \{ T_{j_1}, \dots T_{j_k} \}, \quad [d]=\{1,2,\ldots,d\}.$$ If all mutations accumulate in a linear order (Figure \[fig:graph\](a)), each at rate $\lambda_j$, $$\label{eq:linearmutations}
T_1 \sim \operatorname{Exp}(\lambda_1), \quad
T_j \sim T_{j-1} + \operatorname{Exp}(\lambda_j), \quad j=2, \dots, d,$$ then $\tau_k = T_k$ and the process of mutation and clonal expansion is mathematically equivalent to the general linear multistep process of Eq. (\[eq:tau\]). In this case, the transition rates $u_j =
\lambda_j$ may be interpreted as an effective rate for the mutation and the clonal expansion process. According to Eq. (\[eq:waitingtimelinear\]) the waiting time for $k < d$ mutations is given by ${\mathbb{E}}[\tau_k]=\sum_{j=1}^k 1/\lambda_j$ and the waiting time scales linearly with the number of mutations.
[Figure \[fig:graph\]: order constraints on mutations and the corresponding genotype lattices. (a) Linear order $1 \to 2 \to 3$; the genotypes form a chain $\emptyset \subset \{1\} \subset \{1,2\} \subset \{1,2,3\}$. (b) Independent mutations (no constraints); the genotypes form the full subset lattice of $\{1,2,3\}$. (c) Partial order in which mutation $3$ requires both $1$ and $2$. Left column: poset; right column: genotype lattice.]
Independent mutations
---------------------
Let us now consider the situation where all mutations may occur in an arbitrary order (Figure \[fig:graph\](b)), $$\label{eq:indepmodel}
T_j \sim \operatorname{Exp}(\lambda_j), \quad j=1, \dots, d.$$ If $k=1$, Eq. (\[eq:minmax\]) simplifies to $$\label{eq:tau1}
\tau_1 = \min \{T_1, \dots, T_d\}
\sim \operatorname{Exp}(\lambda_1 + \dots + \lambda_d)$$ and the expected waiting time is ${\mathbb{E}}[\tau_1] = 1 / \sum_{j=1}^d
\lambda_j$. If all fixation rates are equal to $\lambda$, then ${\mathbb{E}}[\tau_1] = 1 / (d \lambda)$. Thus, the occurrence of any one out of $d$ mutations is equivalent to a 1-step process at rate $u_1 = d \lambda$.
For now, we continue assuming identical rates $\lambda$. If $k \ge 2$ and the first mutation has occurred, then there are $d-1$ choices left for the second mutation to occur, hence $\tau_2 \sim \tau_1 +
\operatorname{Exp}((d-1)\lambda)$. In general, the accumulation of $k$ out of $d$ mutations, which occur independently at the same rate $\lambda$, is equivalent to the $k$-step process, Eq. (\[eq:tau\]), with rates $u_j = (d-j+1)\lambda$. From Eq. (\[eq:waitingtimelinear\]), we find $$\label{eq:waitingtimeiid1}
{\mathbb{E}}[\tau_k]=\frac{1}{\lambda}\sum_{j=1}^k\frac{1}{d-j+1}.$$ If many mutations are possible, $d\gg k$, then the expected waiting time for $k$ mutations is approximately ${\mathbb{E}}[\tau_k] \approx k /(d
\lambda)$, which is smaller than $1/\lambda$ and linear in $k$. The waiting time approaches zero in the limit $d \rightarrow \infty$ for every fixed $k$, because the exponential distribution is non-zero at $t = 0$. On the other hand, if $k=d$, all possible mutations need to occur and ${\mathbb{E}}[\tau_k] = H_k / \lambda$, where $H_k = \sum_{j=1}^k
1/j$ is the $k$-th harmonic number. Using ${H_k}\approx {\gamma +
\log k}$, $\gamma\approx0.577$ being the Euler-Mascheroni constant, we find an approximate logarithmic dependency for the occurrence of all possible mutations, ${\mathbb{E}}[\tau_k] \approx (\gamma + \log
k)/\lambda$. In contrast to the $k \ll d$ case, for $k = d$, the expectation of $\tau_k$ is larger than $1/\lambda$ and increases only logarithmically in $k$.
In both cases the expected waiting time is always larger if mutations can only occur in a linear fashion, Eq. (\[eq:waitingtimelinear\]), than if mutations are independent, Eq. (\[eq:waitingtimeiid1\]). This is due to the fact that in the independent case, all mutations are possible in any step of the process, whereas in the linear case, only one mutation is feasible at each stage.
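Since in this model $\tau_k$ is simply the $k$-th order statistic of $d$ i.i.d. exponential waiting times, Eq. (\[eq:waitingtimeiid1\]) can be verified directly by simulation. In the following minimal sketch (in Python), the values $d=100$ and $k=20$ echo the rough numbers quoted above for colon cancer, and $\lambda$ is an arbitrary unit rate:

```python
import random

def tau_k_independent(d, k, lam):
    """Waiting time until any k of d independent Exp(lam) mutations
    have occurred: the k-th smallest of the T_j."""
    times = sorted(random.expovariate(lam) for _ in range(d))
    return times[k - 1]

random.seed(1)
d, k, lam, n = 100, 20, 1.0, 20_000
mc = sum(tau_k_independent(d, k, lam) for _ in range(n)) / n
exact = sum(1.0 / (d - j + 1) for j in range(1, k + 1)) / lam  # Eq. (waitingtimeiid1)
print(mc, exact)    # both ~0.22, close to the approximation k/(d*lam) = 0.2
```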
The ODE system, Eq. (\[eq:ode\]), corresponding to the multistage model with unequal transition rates $u_j = (d-j+1)\lambda$ also has an analytical solution. For $j=0,\dots,d$, $$\label{eq:odesolutionunequal}
x_j(t)=\binom{d}{j} \left( 1-e^{-\lambda t} \right)^j
\left( e^{-\lambda t} \right)^{d-j}.$$ If $k \ll d$, then $u_j \approx d \lambda$ and Eq. (\[eq:Pois\]) yields $x_j(t)\approx \operatorname{Pois}(j; d \lambda t)$. Thus, the number of independent mutations accumulates at a speed that is roughly $d$ times faster than for a linear chain of mutations.
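The binomial solution and its Poisson limit are easy to compare numerically. A minimal sketch (in Python; $d$, $\lambda$, and $t$ are illustrative values chosen so that $d\lambda t = 1$ while $\lambda t \ll 1$, i.e., the $k \ll d$ regime):

```python
import math

d, lam, t = 1000, 1e-5, 100.0
p = 1 - math.exp(-lam * t)        # probability that a given mutation occurred by t
for j in range(5):
    # binomial solution vs. Poisson approximation Pois(j; d*lam*t)
    binom = math.comb(d, j) * p**j * (1 - p)**(d - j)
    pois = (d * lam * t)**j * math.exp(-d * lam * t) / math.factorial(j)
    assert abs(binom - pois) / pois < 0.02    # agreement within ~2%
```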
We now turn to the case of independent mutations with arbitrary fixation rates $\lambda_j$. The distribution of $\tau_1$ is given by Eq. (\[eq:tau1\]) with expected value $1/\sum_{j=1}^d \lambda_j$. If $k \ge 2$, then for the second mutation there are $d-1$ choices. However, the rate at which the second mutation occurs now depends on the specific realization of the first mutation. Hence, we have to consider the set $\mathcal C_k$ of all total orderings $$T_{j_1} < \dots < T_{j_k}$$ of $k$ out of $d$ waiting times. There are $(d)_k = d!/(d-k)!$ such orders. We identify $\mathcal C_k$ with the set of all mutational pathways $j_1 \rightarrow \dots
\rightarrow j_k$ of length $k$ in $2^{[d]}$. For notational convenience, we write such a path $C \in \mathcal{C}_k$ as a collection of subsets $C = (C_0, C_1, \dots, C_k)$ such that $C_0 =
\emptyset$ and $C_i = \cup_{\ell=1}^i \{j_\ell\}$, for $i = 1, \dots,
k$. Each set $C_i$ represents an intermediate genotype on the path with $i$ mutations.
The expected waiting time until any $k$ out of $d$ mutations occur is the weighted sum over all mutational pathways of length $k$, $$\label{eq:indep}
{\mathbb{E}}[\tau_k] = \sum_{C\in \mathcal C_k} {\mathbb{E}}[\tau_k \mid C] \operatorname{Prob}[C],$$ where $$\label{eq:pathprob}
\operatorname{Prob}[C]= \prod_{i=1}^k \frac{\lambda_{j_i}}
{\sum_{j \in {\mathrm{Exit}}(C_{i-1})} \lambda_{j}},$$ is the probability of pathway $C$ with $\{j_i\}=C_i\setminus C_{i-1}$ and ${\mathrm{Exit}}(C_{i-1})=[d] \setminus C_{i-1}$ being the set of all possible mutations at step $i$. Furthermore, $$\label{eq:pathtime}
{\mathbb{E}}[\tau_k \mid C]= \sum_{i =1}^k \frac{1}
{\sum_{j \in {\mathrm{Exit}}(C_{i-1})} \lambda_j}$$ is the expectation of the waiting time $\tau_k$ given that the path $C$ is realized. For a fixed pathway, say $1 \rightarrow \dots \rightarrow k$, the waiting time distribution is $\operatorname{Exp}(\lambda_1 + \dots + \lambda_d)$ for the first mutation, $\operatorname{Exp}(\lambda_2 + \dots + \lambda_d)$ for the second mutation, and $\operatorname{Exp}(\lambda_j + \dots + \lambda_d)$ for the $j$-th mutation. In general, Eq. (\[eq:pathtime\]) arises from a linear $k$-step process, Eq. (\[eq:tau\]), with transition rates $u_j =
\sum_{\ell \in {\mathrm{Exit}}(C_{j-1})} \lambda_\ell$. Note that this waiting time is different from the waiting time in the linear model, because here a linear pathway is considered within a much larger lattice of mutational patterns (Figure \[fig:graph\](b)). In the denominators of both Eqs. (\[eq:pathprob\]) and (\[eq:pathtime\]) we account for alternative evolutionary routes by summing over the fixation rates of all mutations that could have occurred at this point. If all fixation rates are identical to $\lambda$, then $\operatorname{Prob}[C] =
1/(d)_k$ and ${\mathbb{E}}[\tau_k \mid C] = \sum_{i=1}^k 1/[(d - i + 1) \lambda]$ are independent of $C$, and we recover Eq. (\[eq:waitingtimeiid1\]).
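For small $d$, the pathway sum of Eqs. (\[eq:indep\])–(\[eq:pathtime\]) can be evaluated by brute-force enumeration. The following sketch (an illustration with arbitrarily chosen rates, not from the text) also verifies the equal-rate case against the closed form of Eq. (\[eq:waitingtimeiid1\]).

```python
from itertools import permutations

def expected_tau_k(rates, k):
    """E[tau_k] via Eqs. (eq:indep)-(eq:pathtime): sum over all
    ordered pathways of length k out of d independent mutations."""
    d = len(rates)
    total = 0.0
    for path in permutations(range(d), k):   # the (d)_k pathways
        remaining = set(range(d))
        prob, etime = 1.0, 0.0
        for j in path:
            denom = sum(rates[i] for i in remaining)  # Exit(C_{i-1})
            prob *= rates[j] / denom     # Eq. (eq:pathprob)
            etime += 1.0 / denom         # Eq. (eq:pathtime)
            remaining.remove(j)
        total += prob * etime
    return total

# sanity check against the equal-rate closed form:
lam, d, k = 0.5, 5, 3
print(expected_tau_k([lam] * d, k))
print(sum(1.0 / ((d - i + 1) * lam) for i in range(1, k + 1)))
```

Both lines print the same value, and for $k=1$ with unequal rates the function returns $1/\sum_j \lambda_j$, as stated above.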
Partially ordered mutations {#sec:ct-cbn}
---------------------------
Sequentially and independently accumulating mutations can be regarded as two opposite extreme cases, where the linear model imposes maximum constraints on the order in which mutations can occur, while the independent model imposes none. For most biological systems, including cancer progression, we expect more realistic models to lie somewhere in between these extremes (Figure \[fig:graph\](c)). Conjunctive Bayesian networks are a class of waiting time models that allow for partial orders among the mutations, i.e., they encode constraints like $T_i < T_j$ for some of the mutations [@Beerenwinkel2006a; @Beerenwinkel2007d; @Beerenwinkel2007e].
Formally, the (continuous time) conjunctive Bayesian network is defined recursively by a partially ordered set, or poset, $P = ([d], \prec)$ and fixation rates $\lambda_j$, as $$\label{eq:cbn}
T_j = \bigl\{ \max_{i \in {\mathrm{pa}}(j)} T_i \bigr\} + \operatorname{Exp}(\lambda_j),
\qquad j = 1, \dots, d,$$ where ${\mathrm{pa}}(j) = \{ i \mid i \prec j$ and there is no $\ell$ with $i \prec \ell \prec j \}$ is the set of mutations covered by mutation $j$, i.e., its immediate predecessors in the poset. This model class includes the linear and the independent model, for which $|{\mathrm{pa}}(j)| \le 1$ and ${\mathrm{pa}}(j) = \emptyset$, respectively. It is a Bayesian network model, because the joint density of $T = (T_1, \dots, T_d)$ factors into conditional densities as $$\begin{split}
f_{T_1,\dots,T_d}(&t_1,\dots,t_d) = \prod_{i=1}^d
f_{T_i|\{T_j:j\in {\mathrm{pa}}(i)\}}(t_i\mid \{t_j : j\in{\mathrm{pa}}(i)\})\\
& = \prod_{i=1}^d \lambda_i \exp\bigl( -\lambda_i[t_i-\max_{j\in
{\mathrm{pa}}(i)} t_j] \bigr) {\mathbb{I}}\bigl(t_i > \max_{j\in {\mathrm{pa}}(i)}
t_j\bigr).
\end{split}$$ The expected waiting time until $k$ mutations have accumulated according to the partial order $P$ can be calculated in a fashion similar to Eq. (\[eq:indep\]). Let $J(P) \subset 2^{[d]}$ denote the set of all genotypes that are compatible with the poset $P$, i.e., the subsets $S \subset [d]$ for which $j \in S$ and $i \prec j$ implies $i \in S$. Considering the set $\mathcal{C}_k(P)$ of all mutational pathways of length $k$ in $J(P)$, we find $$\label{eq:cbntime}
{\mathbb{E}}[\tau_k] = \sum_{C \in \mathcal{C}_k(P)} {\mathbb{E}}[\tau_k \mid C] \operatorname{Prob}[C],
$$ with $\operatorname{Prob}[C]$ and ${\mathbb{E}}[\tau_k
\mid C]$ defined in Eqs. (\[eq:pathprob\]) and (\[eq:pathtime\]), respectively. The set of possible paths $\mathcal C_k(P)$ is restricted to the lattice $J(P)$; hence the set of possible next mutations, ${\mathrm{Exit}}(C_i)$, is also constrained to the elements compatible with the poset $P$. The expected waiting time for $k$ mutations grows with the number of relations, because the fewer relations the poset contains, the larger ${\mathrm{Exit}}(C_i)$ becomes. It is therefore maximal for a totally ordered set, decreases for a partial order, and is minimal for an unordered set. The two opposing limit cases of the linear chain and the independent model thus represent extrema also in terms of the expected waiting time.
In practice, the number of mutational pathways can be large, but the expectation, Eq. (\[eq:cbntime\]), can be computed recursively without enumerating all paths. The conjunctive Bayesian network not only allows for calculating the expected waiting time, but also has convenient statistical properties. Both the parameters $\lambda_j$ and the structure $P$ of the model can be inferred efficiently from observed data. The maximum likelihood estimator for the parameters is $$\label{eq:ml}
\hat \lambda_j = \frac{M}{\sum_{i=1}^M (t_{ij}-\max_{\ell \in
{\mathrm{pa}}(j)}t_{i\ell})},$$ where $M$ is the number of observations and the $i$-th observation $t_{i \cdot}$ is a realization of $T = (T_1, \dots, T_d)$. The maximum likelihood poset $\hat P$ is the maximal poset that is compatible with the data. In other words, $\hat P$ is simply the poset that contains all compatible relations.
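The estimator of Eq. (\[eq:ml\]) is easy to exercise on simulated data. The sketch below is a minimal illustration (the poset, rates, and sample size are invented for the example): it draws realizations of the model, Eq. (\[eq:cbn\]), and recovers the fixation rates.

```python
import random

def sample_cbn(parents, rates):
    """One realization of T from the conjunctive Bayesian network,
    Eq. (eq:cbn): T_j = max over parents of T_i, plus Exp(lambda_j).
    Assumes mutation labels 1..d are topologically ordered."""
    T = {}
    for j in sorted(rates):
        base = max((T[i] for i in parents[j]), default=0.0)
        T[j] = base + random.expovariate(rates[j])
    return T

def ml_rates(samples, parents):
    """Maximum likelihood estimator, Eq. (eq:ml)."""
    est = {}
    for j in parents:
        denom = sum(t[j] - max((t[i] for i in parents[j]), default=0.0)
                    for t in samples)
        est[j] = len(samples) / denom
    return est

random.seed(1)
parents = {1: [], 2: [1], 3: [1]}   # example poset: 1 -> 2, 1 -> 3
true = {1: 2.0, 2: 0.5, 3: 1.0}     # invented rates
data = [sample_cbn(parents, true) for _ in range(20000)]
print(ml_rates(data, parents))       # close to the true rates
```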
In practice, the occurrence times of mutations, $T_{j}$, may not be observable, but instead only mutational patterns are available. This setting gives rise to a censored version of the conjunctive Bayesian network, in which parameter estimation is still feasible using an Expectation-Maximization algorithm [@Beerenwinkel2007e].
Population dynamics {#sec:population-dynamics}
===================
In the previous section we have treated genetic progression as an effective process with steps including both mutation and clonal expansion that occur at effective rates $\lambda_j$. We will now dissect these two processes and analyze models with explicit mutation and proliferation. Let $\mu$ denote the mutation rate. We assume that each mutation increases fitness by the same amount $s$ in a multiplicative manner such that the fitness of a cell with $j$ mutations is $(1 + s)^j$. Before analyzing the system with both mutation and selection we first discuss this model for $s=0$. This corresponds to an ensemble of $N$ independently and identically distributed copies of the waiting time process. For example, such a situation is found in the colon: It consists of more than $10^6$ crypts [@HumphriesNRC2008], each of which can develop an adenoma independently. The model also applies to the case of selectively neutral mutations in a tissue and we will present expressions for the waiting time until the first cell has accumulated a given number of mutations.
Independent cell lineages
-------------------------
If $s=0$, all cells have the same replicative capacity irrespective of their mutational patterns. Mutations therefore accumulate independently in a neutral evolutionary process. We can analyze this process by interpreting the independence model with rates $\lambda_j =
\mu$ as describing the state of a single cell. The population of genetically heterogeneous, but phenotypically identical cells can then be regarded as an ensemble of independent cell lineages, each evolving according to Eq. (\[eq:indepmodel\]). In this setting, we are interested in the average time it takes until the first cell with $k$ mutations appears in a population of size $N$, i.e., in the expectation of $\min \{ \tau_k^{(i)} \mid i=1,\dots,N\}$, where all $\tau_k^{(i)}$ are identically distributed according to Eq. (\[eq:tau\]) with $u_j = (d-j+1)\mu$. Rather than calculating this expectation, we take a different approach. Let $\tau_k$ be the waiting time for $k \ll d$ independent mutations, which is equivalently defined by the linear process with rate $\mu d$, Eq. (\[eq:waitingtimeiid1\]). In an ensemble of many identical cell lineages the probabilities $x_j(t) = \operatorname{Prob}[\tau_j < t]$ may be identified with the relative abundances of cells with $j$ mutations in the population. Similarly, $\operatorname{Prob}[\tau_k < t]=\sum_{j \ge k}\operatorname{Pois}(j;\mu dt) $ is the fraction of cells having at least $k$ mutations. When this fraction exceeds $1/N$, chances are high that the first cell has accumulated $k$ mutations. Thus, we define $$\label{eq:taustar}
\tau_k^* = \inf \, \{t \ge 0 \mid
x_k(t) \ge 1/N \}.$$ This quantity can also be interpreted as the $(1/N)$-quantile of the distribution of $\tau_k$. Using Eq. (\[eq:x\_k\]), we can find $\tau_k^*$ by solving $$\label{eq:solve}
\frac{1}{N} = \operatorname{Pois}(k;\mu d t)\sum_{j=0}^\infty
\frac{(\mu d t)^j}{(k+j)_j}$$ for $t$. Since $N$ is typically very large ($N = 10^6$ to $10^9$ cells), we are searching for solutions in the regime where the right hand side of Eq. (\[eq:solve\]) is small. This is the case for $\mu d t \ll k$. Then only the $j=0$ term of the sum contributes appreciably and we have to solve $1/N = \operatorname{Pois}(k;\mu d \tau_k^*)$.
For $k=1$, we consider the subset of 1-cells, i.e., cells containing one mutation, that starts growing in the background of mutation-free cells as $x_1(t) = \mu d t\exp(-\mu d t) \approx \mu d t$ for $t \approx
0$. Thus the average waiting time to the appearance of the first cell with one mutation is $\tau_1^* \approx 1/(\mu d N)$. Similarly, for $k=2$, we find $x_2(t) = (1/2) (\mu d t)^2 \exp(-\mu d t) \approx (1/2)
(\mu d t)^2$ and thus $x_2(t) = 1/N$ has the approximate solution $\tau_2^* \approx \sqrt{2} / (\mu d \sqrt{N})$. Alternatively, one can arrive at this approximation by considering the initial linear growth of the population of 1-cells. The first 2-cell is produced by these growing 1-cells when $$\label{eq:first2cell}
\mu d\, \int_0^{\tau_2^*} x_1(t) {\,\mathrm{d}}t = \frac{1}{N},$$ having the same approximate solution given above.
In general, the solution of $1/N = \operatorname{Pois}(k;\mu d \tau_k^*)$ is given in terms of the Lambert $W$ function, which is defined as the solution of $W(z) e^{W(z)} = z$, $$\label{eq:tau*_exact}
\tau_k^* = - \frac{k}{\mu d} \, W_0 \left( -
\frac{k!^{1/k}}{k \, N^{1/k}} \right),$$ where $W_0$ is the principal branch of $W$ [@Corless1996]. For large population sizes $N$, the argument of the Lambert $W$ function in Eq. (\[eq:tau\*\_exact\]) is close to zero and hence $W(z) \approx z$. We obtain $$\label{eq:tau*_approx}
\tau_k^* \approx \frac{k!^{1/k}}{\mu d \, N^{1/k}}, \qquad \mbox{for all } k \ge 1,$$ which generalizes the approximations for $\tau_1^*$ and $\tau_2^*$ given above.
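The defining equation $1/N = \operatorname{Pois}(k;\mu d \tau_k^*)$ can also be solved by simple bisection, which gives an independent check of Eq. (\[eq:tau\*\_approx\]) without invoking the Lambert $W$ function. The parameter values below are illustrative.

```python
import math

def tau_star(k, mud, N):
    """Solve 1/N = Pois(k; mud * t) for t by bisection, in the regime
    mud * t << k where Pois(k; x) is increasing in x."""
    target = 1.0 / N
    lo, hi = 0.0, k / mud          # Pois(k; x) peaks at x = k
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        x = mud * mid
        val = math.exp(-x) * x**k / math.factorial(k)
        lo, hi = (mid, hi) if val < target else (lo, mid)
    return 0.5 * (lo + hi)

k, mud, N = 2, 1e-3, 1e9           # illustrative values
exact = tau_star(k, mud, N)
approx = math.factorial(k)**(1 / k) / (mud * N**(1 / k))  # Eq. (eq:tau*_approx)
print(exact, approx)   # tau_2* ~ sqrt(2)/(mud * sqrt(N)), as derived above
```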
On the other hand, for large $k$, we have $ k!^{1/k} \approx k/e$ and $ N^{1/k} \approx 1 + (\log N) / k$ leading to $$\tau_k^* \approx \frac{k^2}{e \mu d (k + \log N)}.
\label{eq:tau_neutral}$$ This approximation is less accurate, but reasonable for usual parameter values and $k=0,\dots,20$ (Figure \[fig:tau\]). For $N=1$, it coincides with the result for a single cell line, Eq. (\[eq:waitingtimeiid1\]), up to a constant factor of $1/e$. The waiting time depends on the inverse of the logarithm of the population size. For example, the average waiting time to the first cell with $k$ mutations among $10^9$ cells is only about 20 times shorter than the same waiting time in a single cell. For large $k$, this expression becomes again linear in $k$ (Figure \[fig:tau\]).
![\[fig:tau\] Approximate solutions of the waiting time $\tau_k^*$ as defined by the equation $1/N = \operatorname{Pois}(k;\mu d \tau_k^*)$, for $N=10^9$, $\mu d = 0.001$, and $k=1,\dots,20$. The exact solution in terms of the Lambert $W$ function (filled circles, Eq. (\[eq:tau\*\_exact\])) is compared to the approximation given in Eq. (\[eq:tau\*\_approx\]) (squares) and the less accurate but simpler approximation of Eq. (\[eq:tau\_neutral\]) (triangles). ](tau_small){width="\linewidth"}
The normal mutation rate due to DNA polymerase errors is on the order of $10^{-10}$ to $10^{-9}$ per base pair (bp) per cell per generation [@KunkelARB2000]. For an average human gene size of 27 kbp [@VenterS2001], the mutation rate per gene should be on the order of $\mu\approx 10^{-6}$ per cell per generation. The waiting time until the first of $10^9$ cells has accumulated $k= 20$ mutations would be on the order of $10^{6}$ cell generations which, in turn, typically occur at the time-scale of days or weeks. Thus, the waiting time would be on the order of $10^6$ days or more, clearly exceeding a human lifetime. Hence a neutral evolutionary process alone cannot account for the genetic progression of cancer.
Selection and clonal expansion
------------------------------
We now analyze the dynamics of an evolving cell population in which each mutation confers the same selective advantage $s > 0$. Because a new mutant with an additional mutation has a growth advantage, it will expand in the tissue and outcompete the other cells. The next mutation is most likely to occur on this growing clone. We therefore use an evolutionary model of carcinogenesis that accounts for mutation and selection [@Beerenwinkel2007c] and trace the number of cells with $j$ mutations, $N_j(t)$, in each generation $t=0,1,2,\dots$.
$$\xymatrix@=0.08\linewidth{
{*+[o][F]{~0~}}\ar[d] &{*+[o][F]{~0~}}\ar@{~>}[d] &{*+[o][F]{~0~}}& {*+[o][F]{~0~}}\ar[d]\ar[dl] &{*+[o][F]{~0~}}&{*+[o][F]{~0~}}\ar[d]\ar[dl] &
t= 0\\
{*+[o][F]{~0~}}&{*+[o][F]{~1~}}\ar[dl]\ar[d] \ar[dr] &{*+[o][F]{~0~}}&{*+[o][F]{~0~}}\ar[d] &{*+[o][F]{~0~}}\ar@{~>}[d] &{*+[o][F]{~0~}}\ar[d] & t= 1 \\
{*+[o][F]{~1~}}&{*+[o][F]{~1~}}\ar[dl]\ar[d] &{*+[o][F]{~1~}}\ar@{~>}[d]\ar[dr] &{*+[o][F]{~0~}}&{*+[o][F]{~1~}}\ar[d] \ar[dr] &{*+[o][F]{~0~}}& t= 2\\
{*+[o][F]{~1~}}&{*+[o][F]{~1~}}&{*+[o][F]{~2~}}&{*+[o][F]{~1~}}&{*+[o][F]{~1~}}&{*+[o][F]{~1~}}& t= 3\\
}$$
Consider a population of $N$ cells that undergo subsequent rounds of cell divisions as shown in Figure \[fig:Wright-Fisher\]. In each generation, mutations occur randomly and independently at rate $\mu$. The total number of possible mutations is denoted $d$. We assume that fitness, i.e., the expected number of offspring, is proportional to $(1+s)^j$, where $j$ is the number of accumulated mutations. The population dynamics are assumed to follow a Wright-Fisher process [@Ewens2004]. In this model, generations are time-discrete and synchronized. A new configuration $[N_0(t+1),\ldots, N_d(t+1)]$ of cells is drawn from the previous generation $t$ according to the multinomial distribution $$\label{eq:multinomial}
\begin{split}
\operatorname{Prob}\left[ N_0(t+1) = n_0, \dots, N_d(t+1) = n_d \right]\\ =
\frac{(n_0 + \dots + n_d)!}{n_0!\cdots n_d!} \prod_{j=0}^d
\theta_j^{n_j},
\end{split}$$ where $n_0 + \dots + n_d = N$. The parameters $\theta_j$ denote the probability of sampling a $j$-cell, $$\label{eq:theta}
\theta_j =
\sum_{i=0}^j\binom{d-i}{j-i}\mu^{j-i}(1-\mu)^{d-j}
\frac{(1+s)^{i}x_i}{\sum_l(1+s)^lx_l},$$ where we defined $x_j(t) = N_j(t)/N$ as the relative abundance of $j$-cells. A cell with $j$ mutations can occur in generation $t+1$ either as progeny of a $j$-cell in generation $t$, or by erroneous duplication of a $(j-1)$-cell. For $s=0$ and infinitesimal generation times, the model reduces to the case of independent cell lineages undergoing independent mutations, which has been discussed in the previous section.
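One generation of the process defined by Eqs. (\[eq:multinomial\]) and (\[eq:theta\]) can be sketched as follows. This is a toy-sized illustration with invented parameters ($d=5$, $N=10^3$); the original simulations [@Beerenwinkel2007c] use much larger populations.

```python
import math
import random
from collections import Counter

def theta(x, mu, s, d):
    """Sampling probabilities of Eq. (eq:theta) for j = 0..d, given
    relative abundances x = [x_0, ..., x_d]."""
    wbar = sum((1 + s)**l * xl for l, xl in enumerate(x))  # mean fitness
    th = []
    for j in range(d + 1):
        # a j-cell descends from an i-cell (i <= j) with j - i new mutations
        p = sum(math.comb(d - i, j - i) * mu**(j - i) * (1 - mu)**(d - j)
                * (1 + s)**i * x[i] / wbar
                for i in range(j + 1))
        th.append(p)
    return th

def next_generation(counts, mu, s, d, N):
    """One multinomial resampling step, Eq. (eq:multinomial)."""
    x = [c / N for c in counts]
    draw = Counter(random.choices(range(d + 1), weights=theta(x, mu, s, d), k=N))
    return [draw.get(j, 0) for j in range(d + 1)]

random.seed(0)
d, N, mu, s = 5, 1000, 1e-3, 0.1      # illustrative parameters
counts = [N] + [0] * d                # start with all cells unmutated
for t in range(50):
    counts = next_generation(counts, mu, s, d, N)
print(counts)                         # mutants have begun to accumulate
```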
In general, no closed form solution of the Wright-Fisher process is known. However, the dynamics defined by Eqs. (\[eq:multinomial\]) and (\[eq:theta\]) display certain regularities that can be exploited in order to derive an approximate analytical expression for the expected waiting time to the first cell with $k$ mutations, $\tau_k^*$. Numerical simulations show that the subsets of $j$-cells sequentially sweep through the population and the mutant waves travel at constant speed [@Beerenwinkel2007c]. This regular behavior can be analyzed by decomposing the process into the generation of a new cell type by mutation and its clonal expansion driven by selection.
The dynamics of clonal expansions are given by the replicator equation [@Nowak2006], $$\label{eq:replicator}
\dot{x}_j(t) = s x_j(t) \left[ j-\sum_{i=1}^\infty
i \, x_i(t) \right],$$ where we consider only those cell types that are already present in the system and we ignore mutation. The fitness of $j$-cells is $(1+s)^j \approx 1 + js$, if $s \ll 1$. Eq. (\[eq:replicator\]) has a solution in terms of the Gaussians $x_j(t)=A
\exp(-[j-vt]^2/[2\sigma^2] )$ with normalization constant $A$ and width $\sigma^2=v/s$, where $v$ is the velocity of the traveling wave. The initial growth of a newly founded clone is exponential, but eventually follows this Gaussian distribution. The final decline corresponds to the clone ultimately going extinct, outcompeted by fitter clones harboring additional mutations.
The velocity of the waves is determined by the mutation process. A new $(j+1)$-cell is generated by mutation from the growing clone of $j$-cells. The equation $x_{j+1}(t) = 1/N$ can therefore be rewritten, similar to Eq. (\[eq:first2cell\]), as $$\label{eq:velocity}
\int_0^{\tau_{j+1}^*} \mu d x_{j}(t) {\,\mathrm{d}}t = \frac{1}{N},$$ where initially, $x_{j}$ grows exponentially according to Eq. (\[eq:replicator\]). This approach finally yields the approximate expected waiting time [@Beerenwinkel2007c] $$\label{eq:tauWrightFisher}
\tau_k^* \approx
\frac{k \log^2 (s / [\mu d])}
{2s \log N}.$$ This expression suggests approximating the Wright-Fisher process, Eqs. (\[eq:multinomial\]) and (\[eq:theta\]), by a linear multistep process, Eq. (\[eq:tau\]), with transition rate $u = (2s
\log N) / \log^2 (s/[\mu d])$, in which stages correspond to clonal expansions [@MaleyCL2007]. Comparing Eq. (\[eq:tauWrightFisher\]) with the waiting time in a neutral evolutionary process, Eq. (\[eq:tau\_neutral\]), here the waiting time per mutation, $1/(du)$, contributes only logarithmically, and the expected waiting time Eq. (\[eq:tauWrightFisher\]) is proportional to $k/s$, reducing the overall waiting time considerably. The reason for this acceleration lies in the growth advantage of the mutated cells: A single cell produces an exponentially growing number of clonal offspring. This growth, in turn, directly relates to the probability of creating a cell with an additional mutation. Therefore, clonal expansions dramatically speed up the accumulation of mutations in a population.
For example, considering a fitness advantage of $s=10^{-2}$ per mutation, $d=100$ susceptible loci, a mutation rate of $\mu=10^{-7}$ per gene, and a population size of $N=10^9$ cells results in a waiting time of $\tau_{20}^* \approx 10^3$ generations. With a generation time of 1 to 2 days, this waiting time is on the time scale of several years, consistent with clinical observations. By contrast, the waiting time in the neutral model is on the order of $10^{6}$ generations. Hence even a moderate selective advantage decreases the waiting time by three orders of magnitude.
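Plugging these parameter values into Eqs. (\[eq:tauWrightFisher\]) and (\[eq:tau\_neutral\]) reproduces the comparison numerically (a simple check, not part of the original text):

```python
import math

def tau_wf(k, s, mu, d, N):
    """Approximate waiting time with selection, Eq. (eq:tauWrightFisher)."""
    return k * math.log(s / (mu * d))**2 / (2 * s * math.log(N))

def tau_neutral(k, mu, d, N):
    """Neutral benchmark, Eq. (eq:tau_neutral)."""
    return k**2 / (math.e * mu * d * (k + math.log(N)))

k, s, mu, d, N = 20, 1e-2, 1e-7, 100, 1e9
print(tau_wf(k, s, mu, d, N))       # a few thousand generations: years
print(tau_neutral(k, mu, d, N))     # orders of magnitude longer
```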
The time $\tau^*_j$ denotes the time after which the probability that a cell with $j$ mutations has been generated exceeds $1/N$. This is an approximation for the expected waiting time of the first $j$-cell with an additional mutation in a population of size $N$. For the Wright-Fisher process it is known, however, that due to genetic drift the probability of fixation of a selectively advantageous mutation initially present in a single cell is only $2s$ [@Ewens2004]. Hence, the majority of mutated cells become extinct. This is also observed in the numerical simulations of the Wright-Fisher process (Eqs. (\[eq:multinomial\], \[eq:theta\]); @Beerenwinkel2007c). On average it takes $1/(2s)$ mutant cells until the first successful mutant is generated. This effect is included indirectly in the approximation Eq. (\[eq:tauWrightFisher\]): $x_j(t)
\propto e^{st}$ is the expected frequency conditioned only on $x_j(0)=1/N$, not on survival; it averages over all trajectories, including those that go extinct. Recently, this effect has been studied in a related model [@DesaiG2007; @BrunetG2008]. @DesaiG2007 found an approximate waiting time of $\tau_k^*\approx k
\log(s/[ud])/(s[2\log N + \log\{sud\}])$. Comparing with Eq. (\[eq:tauWrightFisher\]), the only difference is the term $\log(sud)$ in the denominator. For typical parameter values, $\log(sud) \approx -7 \approx -\log N$. Therefore, the waiting time is larger by a factor of $1.5$ as compared to Eq. (\[eq:tauWrightFisher\]). But this comparison is limited, because the models are not identical. For example, @DesaiG2007 obtain a fixation probability of $s$, whereas in the Wright-Fisher model it is $2s$.
Conclusion
==========
A quantitative understanding of cancer progression is required for constructing clinical markers and for revealing rate-limiting steps of this process. Here, we have analyzed waiting time models for carcinogenesis and solved the equations defining the expected waiting times. Similar quantities have previously been shown to measure the degree of tumor progression and to predict survival in cancer patients [@Rahnenfuehrer2005].
In the simplest case, carcinogenesis may be described by a linear multistep process. The progression stages are generally described by histological alterations and functional changes, or on a molecular level, by mutation of certain genes and subsequent clonal expansion. In a general multistep process, the overall waiting time to reach stage $k$ is the sum of the waiting times of all preceding steps.
If tumor stages are defined by the number of mutations that have fixated in the cell population, then the progression dynamics depends on the order in which mutations accumulate. For example, mutations may accumulate in a linear fashion, according to a partial order, or completely independently. Linear accumulation is the slowest and independent progression is the fastest. The acceleration can be considerable, especially if many mutations are available that drive carcinogenesis.
The linear and the independent model present opposing limits of the conjunctive Bayesian network family of models, in which the mutations obey a partial order. The relations of the poset may result from causal relationships among mutations, such as the requirement in colon cancer for the tumor suppressor [*APC*]{} to be mutated before other mutations are beneficial and can fixate. The poset constraints induce a subset of mutational pathways in the hypercube representing all combinatorial genotypes (Figure \[fig:graph\]). Because the expected waiting time is a weighted average over the mutational pathways, its value for the conjunctive Bayesian networks ranges between those for the linear and the independent model.
We have discussed a particular instance of the Wright-Fisher process, an evolutionary model comprising mutation and selection. In this model, we find waiting times on the order of 20 years for a normal mutation rate and a selective advantage of 1% per mutation. The successive clonal waves might be regarded as the stages in classical multistage theory. A neutral evolutionary process cannot explain the clinical progression of colon cancer, in which about 20 out of hundreds of mutations accumulate in a time frame of 5 to 20 years. This process may only be explained by advantageous mutations giving rise to clonal expansions. These selective sweeps drastically increase the chances of acquiring additional mutations in the spreading offspring.
---
abstract: 'We show that for gaugino mediated supersymmetry breaking the gravitino mass is bounded from below. For a size of the compact dimensions of order the unification scale and a cutoff given by the higher-dimensional Planck mass, we find $m_{3/2} \gtrsim 10{\ensuremath{\:\mathrm{GeV}}}$. In a large domain of parameter space, the gravitino is the lightest superparticle with a scalar $\tilde\tau$-lepton as the next-to-lightest superparticle.'
bibliography:
- 'GauginoMediation.bib'
---
DESY 05-089
[**The Gravitino in Gaugino Mediation**]{}
**Wilfried Buchmüller, Koichi Hamaguchi and Jörn Kersten**\
Deutsches Elektronen-Synchrotron DESY, 22603 Hamburg, Germany
Introduction
============
In supersymmetric (SUSY) models with R-parity conserved, the lightest superparticle (LSP) is stable and plays a key role in cosmology as well as in collider physics. The most widely studied LSP candidate is the neutralino, which is a linear combination of the gauginos and higgsinos. There is, however, another candidate, the gravitino, which is always present once SUSY is extended to a local symmetry leading to supergravity.
As a neutral and stable particle, the gravitino LSP is a natural cold dark matter candidate [@Pagels:1981ke].[^1] In the early universe gravitinos are produced by thermal scatterings after inflation [@Nanopoulos:1983up; @Khlopov:1984pf; @*Ellis:1984eq; @Moroi:1993mb]. Generically, their relic abundance exceeds the observed cold dark matter density unless the reheating temperature $T_R$ is sufficiently small. However, several mechanisms have been proposed which avoid this constraint and yield the correct dark matter density even for large $T_R$ [@Bolz:1998ek; @*Fujii:2002fv; @*Fujii:2003iw; @Buchmuller:2003is]. This also removes an obstacle for thermal leptogenesis [@Fukugita:1986hr], which requires $T_R \gtrsim 2 \cdot 10^9{\ensuremath{\:\mathrm{GeV}}}$ [@Buchmuller:2002rq]. Such a high reheating temperature is disfavored for a non-LSP gravitino, since its decays would alter the light element abundances produced by big-bang nucleosynthesis [@Falomkin:1984eu; @Khlopov:1984pf; @*Ellis:1984eq; @Kawasaki:2004qu]. Even for a gravitino LSP, the late time decay of the next-to-lightest superparticle (NLSP) may cause cosmological problems. Recent analyses show that a stau NLSP is allowed for a gravitino mass $\lesssim 10\,\text{--}\,100{\ensuremath{\:\mathrm{GeV}}}$, while for a neutralino NLSP the bound is more severe [@Fujii:2003nr; @*Ellis:2003dn; @*Feng:2004mt; @*Roszkowski:2004jd]. Gravitino dark matter can also be realized via non-thermal production from decays of the NLSP [@Feng:2003xh; @*Feng:2003uy].
If the long-lived NLSP is a charged particle, such as the stau, it can be collected and studied in detail at the LHC and the ILC, making it possible to explore various aspects of new physics [@Buchmuller:2004rq; @Hamaguchi:2004df; @*Feng:2004yi; @*Hamaguchi:2004ne; @*Brandenburg:2005he]. Particularly interesting are gravitino masses in the range $10\,$–$\,100{\ensuremath{\:\mathrm{GeV}}}$, where one may be able to measure the Planck scale as well as the gravitino spin [@Buchmuller:2004rq].
The theoretical predictions for the gravitino mass and the nature of the LSP depend on the mechanism of SUSY breaking. In models with gauge mediation, the gravitino is usually much lighter than $1{\ensuremath{\:\mathrm{GeV}}}$ [@Dine:1994vc; @*Dine:1995ag], so that it clearly becomes the LSP. For gravity mediation, its mass is of the same order as the masses of scalar quarks and leptons, i.e. $100{\ensuremath{\:\mathrm{GeV}}}\,$–$\,1{\ensuremath{\:\mathrm{TeV}}}$ [@Nilles:1983ge]. Whether it is the LSP or not depends on the details of the model. On the other hand, anomaly mediation predicts a very heavy gravitino [@Randall:1998uk; @*Giudice:1998xp], which cannot be the LSP.
In this Letter we discuss the gravitino mass for gaugino mediated SUSY breaking [@Kaplan:1999ac; @*Chacko:1999mi], which is one of the simplest mechanisms solving the SUSY flavor problem (cf. also [@Inoue:1991rk]). The role of the gravitino in this context has not been studied in detail in the literature. We use naive dimensional analysis (NDA) to derive a lower bound on the gravitino to gaugino mass ratio. From this we conclude that the gravitino mass is typically larger than about $10{\ensuremath{\:\mathrm{GeV}}}$. Therefore, it can be the LSP. In this case the NLSP is naturally the stau. Together with the relatively large gravitino mass, this has exciting consequences for cosmology and collider physics.
Gaugino Mediation
=================
We consider a theory with $D$ dimensions and 4-dimensional branes located at positions $y_i$ in the compact dimensions. Coordinates $x$ denote the usual 4 dimensions, while $y$ refer to the compact dimensions. In models with gaugino mediated SUSY breaking [@Kaplan:1999ac; @*Chacko:1999mi], the gauge superfields live in the bulk, while the scalar responsible for SUSY breaking (contained in the chiral superfield $S$) lives on the 4-dimensional brane $i=1$. The part of the Lagrangian relevant for gaugino masses is $$\begin{aligned}
\label{eq:LDOriginal}
\mathscr{L}_D &=
\frac{1}{4g_D^2} \int\D^2\theta \, W^a W^a + \text{h.c.} + {}
\nonumber\\
& \quad\; +
\delta^{(D-4)}(y-y_1) \int\D^4\theta \, S^\dagger S + {}
\nonumber\\
& \quad\; +
\delta^{(D-4)}(y-y_1)\,\frac{h}{4\Lambda} \int\D^2\theta\,S\,W^a W^a
+ \text{h.c.} \;,\end{aligned}$$ where $W^a$ is the field strength superfield, $h$ is a dimensionless coupling and $\Lambda$ is the cutoff of the theory. All fields are 4D $N=1$ superfields. Bulk fields depend on the coordinates $y$. For details of the formalism, see [@Arkani-Hamed:2001tb; @*Hebecker:2001ke]. Additional fields required by the higher-dimensional SUSY are present but not explicitly included in the Lagrangian, since they are not relevant for our discussion.
A vacuum expectation value (vev) $F_S$ for the $F$-term of $S$ breaks SUSY and leads to the gaugino mass $$\label{eq:GauginoMass}
m_{1/2} = \frac{g_4^2 h \, F_S}{2\Lambda}$$ at the compactification scale. The gravitino mass is given by [@Nilles:1983ge] $$\label{eq:GravitinoMass}
m_{3/2} = \frac{1}{\sqrt{3}} \frac{F_S}{M_4} \;,$$ where $M_4 \simeq 2.4 \cdot 10^{18} {\ensuremath{\:\mathrm{GeV}}}$ is the 4-dimensional (reduced) Planck mass. This relation is valid if the vev of $F_S$ is the only source of SUSY breaking. If there are further sources, the gravitino becomes heavier.
Constraints from Naive Dimensional Analysis
===========================================
We want the effective Lagrangian (\[eq:LDOriginal\]) to be valid up to a cutoff scale $\Lambda$. This requires that the couplings at the compactification scale do not exceed upper bounds which can be estimated by means of ‘naive dimensional analysis’ [@Chacko:1999hg]. In general, one rewrites the $D$-dimensional Lagrangian with bulk fields $\Phi(x,y)$ and brane fields $\phi_i(x)$ on the $i$th brane, $$\label{eq:LDCanonical}
\mathscr{L}_D =
\mathscr{L}_\mathrm{bulk}(\Phi(x,y)) +
\sum_i \delta^{D-4}(y-y_i) \, \mathscr{L}_i(\Phi(x,y),\phi_i(x)) \;,$$ in terms of dimensionless fields $\hat\Phi(x,y)$ and $\hat\phi_i(x)$, and the cutoff $\Lambda$, so that $$\label{eq:LDDimless}
\mathscr{L}_D =
\frac{\Lambda^D}{\ell_D/C} \,
\mathscr{\hat L}_\mathrm{bulk}(\hat\Phi(x,y)) +
\sum_i \delta^{D-4}(y-y_i) \, \frac{\Lambda^4}{\ell_4/C} \,
\mathscr{\hat L}_i(\hat\Phi(x,y),\hat\phi_i(x)) \;.$$ Here the Lagrangians $\mathscr{\hat L}$ have kinetic terms of the form $\mathscr{\hat L} = (\frac{\partial}{\Lambda}\hat\Phi)^2 + \dots$ for scalars, and analogously for other fields. If the kinetic terms of the original Lagrangian are canonical with respect to $\Phi$ and $\phi_i$, the rescaling of bosonic bulk and brane fields reads $$\Phi(x,y) = \left( \frac{\Lambda^{D-2}}{\ell_D/C} \right)^{1/2}\hat\Phi(x,y)
\quad , \quad
\phi_i(x) = \left( \frac{\Lambda^2}{\ell_4/C} \right)^{1/2} \hat\phi_i(x) \;.
\label{eq:phiAndphiHat}$$ For non-canonical kinetic terms in [Eq. ]{}, the field rescaling has to be adjusted so that [Eq. ]{} is obtained. The geometrical loop factor $$\ell_D = 2^D \pi^{D/2} \, \Gamma(D/2)$$ grows rapidly with the number of dimensions: $\ell_4 = 16\pi^2$, $\ell_5 = 24\pi^3$, $\ell_6 = 128\pi^3$ etc. The factor $C$ accounts for the multiplicity of fields in loop diagrams for a non-Abelian gauge group $G$. We choose $C=C_2(G)$, i.e. $C=5$ for SU(5) and $C=8$ for SO(10).
The combination $C/\ell_D$ gives the typical geometrical suppression of loop diagrams. This suppression is canceled by the factors $\ell_D/C$ and $\ell_4/C$ in front of the Lagrangians $\mathscr{\hat L}$ in [Eq. ]{}. Consequently, all loops will be of the same order of magnitude, provided that all couplings are $\mathscr{O}(1)$. Thus, according to the NDA recipe the effective $D$-dimensional theory remains weakly coupled up to the cutoff $\Lambda$, if the dimensionless couplings in [Eq. ]{} are smaller than one.
As an example, consider the $D$-dimensional gauge coupling $$\label{gaugeD4}
\frac{V_{D-4}}{g_D^2} = \frac{1}{g_4^2} \;,$$ where $V_{D-4}$ is the volume of the compact dimensions. From the dimensionless covariant derivative, $$\hat D_\mu =
\frac{\partial_\mu}{\Lambda} - \frac{\I g_D A_\mu}{\Lambda} =
\frac{\partial_\mu}{\Lambda} -
\I g_D \left(\frac{\Lambda^{D-4}}{\ell_D/C}\right)^{1/2} \hat A_\mu
\;,$$ one reads off $$g_D \left(\frac{\Lambda^{D-4}}{\ell_D/C}\right)^{1/2} < 1 \;.$$ For a given cutoff this constrains the gauge coupling. Conversely, knowing the gauge coupling at the compactification scale, $g_4^2$, one obtains an upper bound on the cutoff, $$\Lambda < \Lambda_\mathrm{gauge} =
\left( \frac{\ell_D/C}{g_4^2} \right)^\frac{1}{D-4} M_c \;,$$ where we have defined $$\label{uniV}
M_c = \left( \frac{1}{V_{D-4}} \right)^\frac{1}{D-4} \;.$$ For $M_c$ close to the unification scale, one has $g_4^2\simeq\frac{1}{2}$.
The cutoff $\Lambda_\mathrm{gauge}$ can be compared with the $D$-dimensional Planck scale $M_D$ where quantum gravity effects are expected to become important, $$M_D = \left( M_c^{D-4} M_4^2 \right)^\frac{1}{D-2} \;,$$ as shown in [Fig. \[fig:Scales\]]{} for $D=6$. We require $\Lambda<M_D$, which turns out to be more restrictive than $\Lambda < \Lambda_\mathrm{gauge}$, unless $M_c$ is very small.
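To illustrate the comparison between $\Lambda_\mathrm{gauge}$ and $M_D$ for $D=6$, both scales can be tabulated as functions of $M_c$; a sketch assuming the reduced Planck mass $M_4 \simeq 2.4\times 10^{18}\:\mathrm{GeV}$, $g_4^2 = 1/2$ and $C=5$:

```python
import math

M4 = 2.4e18          # reduced 4d Planck mass in GeV (assumed value)
D, C, g4sq = 6, 5, 0.5
ellD = 2 ** D * math.pi ** (D / 2) * math.gamma(D / 2)   # l_6 = 128 pi^3

def lambda_gauge(Mc):
    """NDA cutoff from the gauge coupling: (l_D / (C g_4^2))^(1/(D-4)) * M_c."""
    return (ellD / (C * g4sq)) ** (1.0 / (D - 4)) * Mc

def planck_D(Mc):
    """D-dimensional Planck scale: (M_c^(D-4) * M_4^2)^(1/(D-2))."""
    return (Mc ** (D - 4) * M4 ** 2) ** (1.0 / (D - 2))

for Mc in (1e15, 1e16, 1e17):
    print(f"Mc = {Mc:.0e} GeV: Lambda_gauge = {lambda_gauge(Mc):.2e} GeV, "
          f"M_D = {planck_D(Mc):.2e} GeV")
```

For compactification scales near the unification scale, $M_D$ comes out below $\Lambda_\mathrm{gauge}$, in line with $\Lambda < M_D$ being the more restrictive requirement.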
![Relevant scales ($D=6$): $\Lambda_{SWW}$ for $m_{1/2}=m_{3/2}$ and $C=5$, ignoring the running of the gaugino masses, $\Lambda_\mathrm{gauge}$ for $g_4^2=\frac{1}{2}$, $D$-dimensional Planck scale, and the lower limit $\Lambda>M_c$. []{data-label="fig:Scales"}](Scales)
Application to Gaugino Mediation
================================
We can now apply the NDA prescription to gaugino mediation, which will lead to an upper bound on the gaugino masses. The field strength superfield has to be rescaled as (cf. [Eq. ]{} and ) $$\label{eq:WAndWHat}
W^a = \left( \frac{\Lambda^{D-1}}{\ell_D/C} \right)^{1/2} g_D \,
\hat W^a \;.$$ Since $\D\theta$ has mass dimension $1/2$, it also has to be divided by the corresponding power of the cutoff to obtain a dimensionless expression. We then arrive at the Lagrangian $$\begin{aligned}
\label{eq:LGauginoDimless}
\mathscr{L}_D &=
\frac{\Lambda^D}{\ell_D/C} \, \frac{1}{4}
\int\frac{\D^2\theta}{\Lambda} \,\hat W^a \hat W^a + \text{h.c.} +{}
\nonumber\\
& \quad\; +
\delta^{(D-4)}(y-y_1) \, \frac{\Lambda^4}{\ell_4/C}
\int\frac{\D^4\theta}{\Lambda^2} \, \hat S^\dagger \hat S + {}
\nonumber\\
& \quad\; +
\delta^{(D-4)}(y-y_1) \, \frac{\Lambda^4}{\ell_4/C} \,
\frac{g_D^2 h \sqrt{\ell_4 C} \Lambda^{D-4}}{\ell_D} \, \frac{1}{4}
\int\frac{\D^2\theta}{\Lambda} \, \hat S \, \hat W^a \hat W^a
+ \text{h.c.}\end{aligned}$$ The requirement that all couplings[^2] be smaller than one implies $$\frac{g_D^2 h \sqrt{\ell_4 C} \Lambda^{D-4}}{\ell_D} < 1 \;.$$ Using the relations (\[gaugeD4\]) and (\[uniV\]), and $\ell_4=16\pi^2$, one then obtains an upper bound on the coupling $h$, $$h < \frac{\ell_D}{4\pi\sqrt{C} g_4^2}
\left( \frac{M_c}{\Lambda} \right)^{D-4} ,$$ which translates into an upper bound on the gaugino mass (cf.[Eq. ]{}): $$\label{eq:UpperBoundGauginoMass}
m_{1/2} <
\frac{\ell_D F_S}{8\pi \sqrt{C} \Lambda}
\left( \frac{M_c}{\Lambda} \right)^{D-4} .$$ Note that there is no lower bound on the gaugino mass. The upper bound becomes weaker if the cutoff $\Lambda$ is lowered. Together with [Eq. ]{}, [Eq. ]{} yields a lower bound on the mass ratio $$\label{eq:LowerBoundMassRatio}
\frac{m_{3/2}}{m_{1/2}} >
\frac{8\pi \sqrt{C}}{\sqrt{3} \ell_D}
\left( \frac{\Lambda}{M_c} \right)^{D-4} \frac{\Lambda}{M_4} \;.$$ For a fixed gravitino to gaugino mass ratio, [Eq. ]{} yields again an upper bound on the cutoff $\Lambda$, $$\label{eq:LambdaSWW}
\Lambda < \Lambda_{SWW} =
\left( \frac{\sqrt{3} \ell_D}{8\pi \sqrt{C}} \, M_4 \, M_c^{D-4} \
\frac{m_{3/2}}{m_{1/2}}
\right)^\frac{1}{D-3} \;,$$ which is compared in [Fig. \[fig:Scales\]]{} with $\Lambda_\mathrm{gauge}$ and $M_D$ as a function of $M_c$.
Let us now discuss the lower bound on the gravitino to gaugino mass ratio for a given cutoff $\Lambda$. For the minimal value $\Lambda=M_c$, one obtains the absolute lower bound $m_{3/2}/m_{1/2} > 8\pi \sqrt{C}/(\sqrt{3} \ell_D) \, M_c/M_4$. However, this corresponds to the extreme case where the effective theory described by the Lagrangian becomes non-perturbative immediately above $M_c$. In the following, we choose the cutoff to equal the $D$-dimensional Planck scale for concreteness. The minimal mass ratio is then a function of the number of dimensions and the compactification scale, $$\left( \frac{m_{3/2}}{m_{1/2}} \right)_\mathrm{min} =
\frac{8\pi \sqrt{C}}{\sqrt{3} \ell_D}
\left( \frac{M_4}{M_c} \right)^\frac{D-4}{D-2} \;.$$ It is shown in [Fig. \[fig:RatioM12M32\]]{} for $D=5,6,10$ and compactification scales between $10^{15} {\ensuremath{\:\mathrm{GeV}}}$ and $10^{17} {\ensuremath{\:\mathrm{GeV}}}$. As we are assuming compact dimensions of equal size, the $D=10$ example appears less favored [@Hebecker:2004ce] and should only be considered as a limiting case for illustration purposes. Note that here $m_{1/2}$ is the value of the gaugino mass at the compactification scale. The running to low energies typically reduces the mass of the lightest gaugino to about $0.4$ of its high-scale value. We have not included this correction here, since it is model-dependent.
![Lower bound on the ratio of the gravitino and the gaugino mass (valid at the compactification scale) for a cutoff $\Lambda=M_D$, $C=5$, and $D=5,6,10$. The $D=10$ result is multiplied by a factor of 100. []{data-label="fig:RatioM12M32"}](RatioM12M32)
From the figure we see that the gravitino can be the LSP, if the gaugino mass is sufficiently close to its upper bound from NDA. However, it cannot be much lighter than the neutralinos for $D=5$ and $D=6$. This also means that a gravitino LSP becomes unlikely if the theory is only weakly coupled at the cutoff.
Fixing $m_{1/2}$, we find a lower bound for $m_{3/2}$, which is shown in [Fig. \[fig:GravitinoMass\]]{} for a gluino mass (at low energy) of $1{\ensuremath{\:\mathrm{TeV}}}$. For $D=6$ and $M_c=10^{17}{\ensuremath{\:\mathrm{GeV}}}$, we obtain $m_{3/2}>17{\ensuremath{\:\mathrm{GeV}}}$. This can be considered a typical lower bound in gaugino mediation. Hence, experimental evidence for a significantly lighter gravitino would disfavor this mechanism of SUSY breaking.
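The order of magnitude of this bound can be reproduced from the minimal mass ratio; a rough numerical sketch, assuming $M_4 \simeq 2.4\times 10^{18}\:\mathrm{GeV}$, $C=5$, $\Lambda = M_D$, and taking $m_{1/2}(M_c) \approx 400\:\mathrm{GeV}$ as a crude proxy for a $1\:\mathrm{TeV}$ low-energy gluino mass (the renormalization factor is our assumption, not a precise computation):

```python
import math

M4, C = 2.4e18, 5     # reduced Planck mass in GeV (assumed), C = C_2(SU(5))

def min_mass_ratio(D, Mc):
    """Minimal m_3/2 / m_1/2 for cutoff Lambda = M_D:
    (8 pi sqrt(C) / (sqrt(3) l_D)) * (M4 / Mc)^((D-4)/(D-2))."""
    ellD = 2 ** D * math.pi ** (D / 2) * math.gamma(D / 2)
    return (8 * math.pi * math.sqrt(C) / (math.sqrt(3) * ellD)
            * (M4 / Mc) ** ((D - 4) / (D - 2)))

m12_at_Mc = 400.0     # GeV, assumed proxy for a 1 TeV gluino at low energy
for D, Mc in [(5, 1e17), (6, 1e16), (6, 1e17)]:
    m32_min = min_mass_ratio(D, Mc) * m12_at_Mc
    print(f"D={D}, Mc={Mc:.0e} GeV: m_3/2 > {m32_min:.0f} GeV")
```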
![Lower bound on the gravitino mass for $M_3(1{\ensuremath{\:\mathrm{TeV}}}) = 1{\ensuremath{\:\mathrm{TeV}}}$, $\Lambda=M_D$, $C=5$, and $D=5,6,10$. []{data-label="fig:GravitinoMass"}](GravitinoMass)
To realize smaller gravitino masses, one can lower $m_{1/2}$ or the cutoff scale, or increase the number of extra dimensions. A further possibility is the group theory factor $C$, which could be smaller if the GUT symmetry were broken on the brane where $S$ is located.
Another important aspect of the scenario is the mass of the lighter stau. In gaugino mediation, it is very small at the compactification scale, so that its dominant contribution is due to the running to low energies. It is determined by the renormalization group equations of the right-handed stau mass squared [@Inoue:1982pi; @*Inoue:1983pp], $$\label{eq:StauRGE}
16\pi^2 \, \mu\frac{\D}{\D\mu} m^2_{\tilde\tau_\mathrm{R}} =
4 y_\tau^2 \, (m^2_{H_d} + m^2_{\tilde\tau_\mathrm{L}} +
m^2_{\tilde\tau_\mathrm{R}}) +
4 a_\tau^2 -
\frac{24}{5} g_1^2 M_1^2 \;,$$ and of the other parameters of the theory. In order to demonstrate the typical order of magnitude, [Fig. \[fig:RatioMStauMBino\]]{} shows the ratio of the $\tilde\tau_\mathrm{R}$ and the bino mass at the electroweak scale calculated in a crude approximation: Mixing has been neglected, and only the terms proportional to $M_1^2$ and $m^2_{\tilde\tau_\mathrm{R}}$ in [Eq. ]{} have been taken into account. We have chosen $\tan\beta=10$, $m_{1/2}(M_c)=400{\ensuremath{\:\mathrm{GeV}}}$ and $m^2_{H_u}(M_c)=m^2_{H_d}(M_c)=0$. From the figure we see that in this case the $\tilde\tau$ is typically lighter than the $\tilde B$, but not by a large factor, so that it can easily be heavier than the gravitino. A comparison with the full two-loop calculations [@Belanger:2005jk; @*Allanach:2003jw] shows that the accuracy of the approximation is reasonable for small $\tan\beta$ (and $m^2_{H_u}=m^2_{H_d}=0$ at $M_c$) but quickly worsens for values larger than about 20. Then, the neglected effects are important and the actual stau mass becomes significantly lighter than the estimate. We have also neglected the modifications to the running above the GUT scale for large $M_c$, which tend to make the stau heavier [@Schmaltz:2000gy; @*Schmaltz:2000ei].
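At one loop, the crude approximation described above can be evaluated in closed form: keeping only the $M_1^2$ term, $M_1/g_1^2$ is RG-invariant, and the RGE integrates to $m^2_{\tilde\tau_\mathrm{R}} = \frac{6}{5 b_1}\, m_{1/2}^2\, (1 - g_1^4(\mu)/g_1^4(M_c))$. A sketch with assumed numerical inputs ($\alpha_\mathrm{GUT} \approx 1/25$, $g_1^2(M_Z) \approx 0.21$, boundary taken at the unification scale, i.e. ignoring the modified running above it):

```python
import math

b1 = 33.0 / 5.0            # MSSM one-loop coefficient for GUT-normalized g1

def stau_vs_bino(m12, g1sq_high, g1sq_low):
    """Closed-form one-loop solution keeping only the -24/5 g1^2 M1^2 term:
    M1 runs proportionally to g1^2, and m_stauR^2 integrates to
    (6 / (5 b1)) * m12^2 * (1 - r^2), with r = g1sq_low / g1sq_high."""
    r = g1sq_low / g1sq_high
    m_stau = math.sqrt(6.0 / (5.0 * b1) * (1.0 - r ** 2)) * m12
    m_bino = r * m12
    return m_stau, m_bino

g1sq_gut = 4 * math.pi / 25.0   # alpha_GUT ~ 1/25 (assumed)
g1sq_ew = 0.21                  # g1^2 near the electroweak scale (assumed)
m_stau, m_bino = stau_vs_bino(400.0, g1sq_gut, g1sq_ew)
print(f"m_stauR ~ {m_stau:.0f} GeV, M1 ~ {m_bino:.0f} GeV, "
      f"ratio ~ {m_stau / m_bino:.2f}")
```

The ratio comes out somewhat below one, consistent with the statement that the $\tilde\tau$ is typically lighter than the $\tilde B$, but not by a large factor.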
![Ratio of stau and bino mass for $\tan\beta=10$ and $m_{1/2}(M_c) = 400{\ensuremath{\:\mathrm{GeV}}}$, calculated in a very rough approximation (see text). []{data-label="fig:RatioMStauMBino"}](RatioMStauMBino)
Conclusions
===========
We have studied constraints on gaugino and gravitino masses in models with gaugino mediated SUSY breaking. Based on naive dimensional analysis, we have derived an upper bound on the coupling responsible for gaugino masses. This leads to a lower bound on the mass ratio $m_{3/2}/m_{1/2}$, which allows for a gravitino LSP in a large domain of parameter space. Some regions in parameter space are now allowed that were previously discarded in order to avoid a stau LSP. In particular, the compactification scale can coincide with the GUT scale even in the minimal scenario with vanishing Higgs soft masses. Fixing the gaugino mass, one can translate the result for the mass ratio into a lower bound on the gravitino mass. For a gluino mass of $1{\ensuremath{\:\mathrm{TeV}}}$, we find $m_{3/2} \gtrsim 10{\ensuremath{\:\mathrm{GeV}}}$.
A gravitino LSP can be naturally accompanied by a stau NLSP. Long-lived staus will then be observed at future colliders, and in their decays the gravitino may be discovered.
Acknowledgments {#acknowledgments .unnumbered}
===============
We would like to thank Adam Falkowski, Jörg Jäckel, Hyun Min Lee, Michael Ratz, Kai Schmidt-Hoberg, Michele Trapletti and Alexander Westphal for helpful discussions. This work has been supported by the “Impuls- und Vernetzungsfonds” of the Helmholtz Association, contract number VH-NG-006.
[^1]: A keV gravitino as dominant component of dark matter as discussed in [@Pagels:1981ke] is now disfavored by the matter power spectrum (cf. [@Viel:2005qj]).
[^2]: Note that this applies to the couplings of canonically normalized fields. Hence, the factor $\frac{1}{4}$ in the last line of [Eq. ]{} is not part of the coupling.
|
---
author:
- 'Patrick Varin, Lev Grossman, and Scott Kuindersma[^1] [^2]'
bibliography:
- 'bibliography.bib'
- 'extra.bib'
title: '**A Comparison of Action Spaces for Learning Manipulation Tasks** '
---
Introduction
============
Background
==========
Learning Algorithms
===================
Action Spaces
=============
Experiments
===========
Results
=======
Conclusions
===========
Appendix {#appendix .unnumbered}
========
[^1]: This work was funded by Schlumberger-Doll Research.
[^2]: School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA [{varin@g, lgrossman@college, scottk@seas}.harvard.edu]{}
|
---
author:
- Zachary Hall
- and Jesse Thaler
bibliography:
- 'softdrop.bib'
title: Photon isolation and jet substructure
---
Introduction {#sec:1}
============
Photons produced in high-energy collisions fall into two categories: “direct” photons produced in perturbative hard processes and “indirect” photons produced from the fragmentation of quark and gluon partons. Because direct photons access the perturbative part of the collision, they are typically of more interest than indirect photons. For this reason, photon isolation techniques have been developed to filter out indirect photons, especially from $\pi^0 \rightarrow \gamma \gamma$ decays [@Baer:1990ra; @Berger:1990et; @Kramer:1991yc; @Kunszt:1992np; @Glover:1992he; @Buskulic:1995au; @Glover:1993xc; @GehrmannDeRidder:1997wx; @Frixione:1998jh; @Cieri:2015wwa]. Although there are different types of isolation criteria used, they all follow roughly the same philosophy: photons collinear to a significant amount of hadronic energy are labeled indirect, while photons well separated from hadronic energy are labeled direct. By now, photon isolation is a well-established method to study direct photons, with numerous measurements at the Large Hadron Collider (LHC) and previous experiments [@Glover:1992he; @Buskulic:1992ji; @Glover:1994he; @Khachatryan:2010fm; @Chatrchyan:2011ue; @Aad:2010sp; @Aaboud:2017lxm; @Aaboud:2017kff].
In the years since the development of photon isolation, jet physics has undergone a rapid evolution, first with the rise of clustering-based jet observables [@Catani:1993hr; @Ellis:1993tq; @Dokshitzer:1997in; @Wobisch:1998wt; @Wobisch:2000dk; @Cacciari:2008gp; @Salam:2009jx; @Cacciari:2011ma] and more recently with the explosion of the field of jet substructure [@Seymour:1991cb; @Seymour:1993mx; @Butterworth:2002tt; @Butterworth:2007ke; @Butterworth:2008iy; @Abdesselam:2010pt; @Altheimer:2012mn; @Altheimer:2013yza; @Adams:2015hiv; @Larkoski:2017jix; @Asquith:2018igt]. Jet substructure provides a rich toolbox to explore soft and collinear dynamics within jets, and it is natural to ask whether substructure techniques could be adapted to handle photons. At minimum, jet substructure could be used to robustly veto hadronic activity and isolate direct photons. More ambitiously, jet substructure could facilitate new methods to study indirect photons, by revealing a continuum of collinear photon fragmentation processes from perturbative radiation to hadronic decays.
In this paper, we introduce a new substructure-based photon isolation technique called *soft drop isolation*. This method derives from soft drop declustering [@Larkoski:2014wba], one of many jet grooming algorithms [@Butterworth:2008iy; @Ellis:2009su; @Ellis:2009me; @Krohn:2009th; @Dasgupta:2013ihk] that have been successfully adopted at the LHC. Ordinarily, soft drop declustering is used to identify hard subjets within a jet that satisfy the condition: $$\label{eq:SDcondition}
\frac{\text{min}\left(p_{T1},p_{T2}\right)}{p_{T1} + p_{T2}} \geq z_{\text{cut}} \left(\frac{R_{12}}{R_0}\right)^{\beta},$$ where $p_{Ti}$ are the transverse momenta of the subjets, $R_{12}$ is their pairwise angular separation, $R_0$ is the jet radius parameter, and $z_{\text{cut}}$ and $\beta$ are the parameters of the soft drop algorithm. Soft drop isolation *inverts* the condition in , thereby selecting “photon jets” with no appreciable substructure. With its origins in jet substructure, soft drop isolation is well suited to the age of particle flow at both CMS [@Sirunyan:2017ulk] and ATLAS [@Aaboud:2017aca].
Like Frixione or “smooth” isolation [@Frixione:1998jh], soft drop isolation is collinear safe and fully regulates the collinear divergence of quark-to-photon fragmentation. This is in contrast with traditional cone isolation techniques [@Baer:1990ra; @Berger:1990et; @Kramer:1991yc; @Kunszt:1992np; @Glover:1992he], which are collinear unsafe.[^1] Collinear-safe photon isolation criteria eliminate the need for parton fragmentation functions [@Koller:1978kq; @Laermann:1982jr] to regulate the collinear divergence of $q \rightarrow q \gamma$ processes. This is a significant advantage, as fragmentation functions are inherently non-perturbative and therefore not directly calculable, and experimental measurements [@Glover:1993xc; @Buskulic:1995au; @GehrmannDeRidder:1997wx; @Ackerstaff:1997nha; @Bourhis:1997yu; @Bourhis:2000gs] have significant uncertainties. For these reasons, collinear-safe photon isolation criteria are preferable for perturbative theoretical calculations. Note that these statements apply to all orders in perturbative quantum chromodynamics (QCD) but only to leading order in quantum electrodynamics (QED). Beyond leading order in QED, additional effects such as $\gamma \rightarrow \bar{q} q$ splittings emerge that may require a more delicate treatment (see e.g. [@Frederix:2016ost]).
As we will see, soft drop isolation is equivalent at leading (non-trivial) order to the most common implementation of Frixione isolation, at least when considering the small $R_0$ and small $z_{\text{cut}}$ limits. Unlike Frixione isolation or cone isolation, though, soft drop isolation is democratic, meaning that it treats photons and hadrons equivalently in the initial clustering step. This feature is reminiscent of earlier democratic isolation criteria [@Buskulic:1995au; @Glover:1993xc; @GehrmannDeRidder:1997wx], which can be more natural than undemocratic criteria in cases where jets are the central objects of interest. Soft drop isolation is, to our knowledge, the first collinear-safe democratic photon isolation criterion.
(Feynman diagrams: (a) the leading-order QED splitting $q \to q \gamma$; (b) and (c) the $\alpha_s$ corrections $q \to q \gamma g$ and $g \to q \bar{q} \gamma$.)
In the second half of this paper, we take advantage of the democratic nature of soft drop isolation to define an isolated photon subjet: a photon that is not isolated from its parent jet but which is isolated within its parent subjet. At leading order in the collinear limit, isolated photon subjets arise from the splitting of a quark into a quark plus a photon in QED, as shown in . The probability for a quark to radiate a photon with some angle $\theta_{\gamma}$ and momentum fraction $z_{\gamma}$ is given by: $$\label{eq:QEDsplit}
\text{d} P_{q \rightarrow q \gamma} = \frac{\alpha_e e^2}{2 \pi} \,\, \frac{\text{d} \theta_{\gamma}}{\theta_{\gamma}} \,\, P(z_{\gamma}) \,\, \text{d} z_{\gamma}, \qquad
P(z) = \left(\frac{1 + (1 - z)^2}{z}\right)_+,$$ where $P(z)$ is the (regularized) QED splitting function. Inspired by related work on the $q \to q g$ splitting function in QCD [@Larkoski:2015lea; @Larkoski:2017bvj; @Tripathee:2017ybi; @Sirunyan:2017bsd; @Caffarri:2017bmh; @Kauder:2017mhg], we use isolated photon subjets to expose the QED $q \to q \gamma$ splitting function $P(z)$. We also investigate the impact of the higher-order $\alpha_s$ corrections in , though we restrict our calculations to the collinear limit.
This work is complementary to earlier experimental investigations of the quark-photon fragmentation function at the Large Electron-Positron Collider (LEP) [@Buskulic:1995au; @Glover:1993xc; @GehrmannDeRidder:1997wx; @Ackerstaff:1997nha]. Notably, exposed the quark-photon fragmentation function down to $z_\gamma \sim 0.2$ by using cluster shape observables to mitigate meson decay backgrounds. Compared to these studies, the isolated photon subjet approach has the advantage of being perturbatively calculable and likely being easier to implement in the complicated hadronic environment of the LHC. Additionally, the isolated photon subjet condition regulates higher-order terms such as those in , thereby more directly exposing the QED splitting function as opposed to the inclusive photon fragmentation function. Similar to the LEP study, the primary background to isolated photon subjets comes from meson decays, but this can be partially controlled using an angular cut on $R_{12}$.
The rest of this paper is organized as follows. In , we define soft drop isolation, investigate its features, and analyze its performance in $\gamma$-plus-jet events from a parton shower generator. In , we define the isolated photon subjet and compare the extraction of the QED splitting function between a parton shower and an analytic calculation. We conclude with a discussion of future directions in .
Photon isolation with soft drop declustering {#sec:2}
============================================
Soft drop isolation is based on soft drop declustering, a jet grooming algorithm that removes soft and wide-angle radiation to find hard substructure [@Larkoski:2014wba]. In this section, we show how to tag isolated photons by identifying jets without any substructure. We first define soft drop photon isolation in and show that it is infrared and collinear safe. We then show that it is democratic in and compare its behavior to Frixione isolation in . In , we study soft drop isolation using a parton shower, showing that it performs nearly identically to Frixione isolation.
Definition of soft drop isolation {#sec:2.1}
---------------------------------
(Schematic angular-ordered clustering trees for a jet containing a hard photon.)
The original soft drop procedure begins with a jet of radius $R$ obtained through some clustering algorithm; this paper uses the anti-$k_t$ algorithm [@Cacciari:2008gp] with radius $R = 0.4$ throughout. Following this, the jet is reclustered using the Cambridge-Aachen (C/A) algorithm [@Dokshitzer:1997in; @Wobisch:1998wt; @Wobisch:2000dk], yielding an angular-ordered clustering tree. The jet is then declustered into its two C/A parent subjets; if the soft drop condition in is satisfied by the two subjets, then the jet “passes” soft drop and is returned as the soft-dropped jet. Otherwise, the softer (by $p_T$) of the two subjets is dropped and the procedure is repeated on the harder of the two subjets.
As shown in , soft drop isolation is defined in terms of the soft drop algorithm, but with reversed criteria. If at no point the jet passes the soft drop condition and one is left with a single constituent that cannot be declustered, then the jet “fails” soft drop and the single constituent is returned as the soft-dropped jet.[^2] If that single constituent is a photon, then that photon is declared to *pass* soft drop isolation and is labeled as an isolated photon.
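The procedure above can be sketched in a few lines of Python. This is a toy illustration only: the C/A step is a naive $O(n^3)$ pairwise merge, the recombination of rapidity and azimuth is a simplistic $p_T$-weighted average (not the E-scheme), and all function names are ours:

```python
import math

def delta_r(p1, p2):
    """Pairwise angular distance in the rapidity-azimuth plane."""
    dphi = abs(p1["phi"] - p2["phi"])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(p1["y"] - p2["y"], dphi)

def merge(p1, p2):
    """Combine two (sub)jets: pT-weighted axes, remembering the parents."""
    pt = p1["pt"] + p2["pt"]
    return {"pt": pt,
            "y": (p1["pt"] * p1["y"] + p2["pt"] * p2["y"]) / pt,
            "phi": (p1["pt"] * p1["phi"] + p2["pt"] * p2["phi"]) / pt,
            "pdg": None, "children": (p1, p2)}

def cluster_ca(particles):
    """Cambridge-Aachen: repeatedly merge the pair closest in DeltaR."""
    jets = [dict(p, children=None) for p in particles]
    while len(jets) > 1:
        i, j = min(((a, b) for a in range(len(jets))
                    for b in range(a + 1, len(jets))),
                   key=lambda ab: delta_r(jets[ab[0]], jets[ab[1]]))
        jets = ([p for k, p in enumerate(jets) if k not in (i, j)]
                + [merge(jets[i], jets[j])])
    return jets[0]

def has_isolated_photon(jet, zcut=0.1, beta=2.0, R0=0.4):
    """Decluster the C/A tree; the jet must *fail* the soft drop condition
    at every step, ending on a lone photon (PDG id 22)."""
    while jet["children"] is not None:
        p1, p2 = jet["children"]
        z = min(p1["pt"], p2["pt"]) / (p1["pt"] + p2["pt"])
        if z >= zcut * (delta_r(p1, p2) / R0) ** beta:
            return False                      # substructure found: not isolated
        jet = max(p1, p2, key=lambda p: p["pt"])  # drop the softer branch
    return jet["pdg"] == 22
```

For example, a 100 GeV photon accompanied only by a 1 GeV hadron at $\Delta R = 0.3$ fails soft drop all the way down and is tagged as isolated, while the same photon next to a 30 GeV hadron is not.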
Like all photon isolation criteria, soft drop isolation depends on a particle identification scheme to define a (singlet) photon. This is relevant in the case of prompt photons converted to $e^+\,e^-$ pairs in material, which one typically wants to label as a photon candidate.[^3] By contrast, one typically wants the particle identification scheme to reject closely collinear $\pi^0 \rightarrow \gamma \gamma$ decays, which can mimic a singlet photon. In practice, photon definitions are implemented in particle reconstruction algorithms through a combination of cluster-shape observables and tracking [@Sirunyan:2017ulk; @Aaboud:2017aca]. For our parton shower study below, we use truth information to label photons, deferring a study of detector effects to future work.
Like soft drop, soft drop isolation depends on the parameters $z_{\text{cut}}$ and $\beta$. For the algorithm to be collinear safe, we must choose $\beta > 0$. Although there is some flexibility in choosing these parameters, we will for definiteness use the default values: $$z_{\text{cut}} = 0.1, \qquad \beta = 2.$$ Given the matching between the soft drop parameter $z_{\text{cut}}$ and the Frixione parameter $\epsilon$ shown in , these parameter choices are roughly equivalent to the standard “tight isolation” parameters outlined in the 2013 Les Houches Accords [@Andersen:2014efa].
We now demonstrate that soft drop isolation is infrared and collinear safe when applied to isolated photons. The following logic closely follows ; a more rigorous proof can be found by following .[^4] Because soft drop isolation requires the non-photon $p_T$ to vanish as $\Delta R \rightarrow 0$, it is intuitive that collinear divergences will be regulated. As seen from , collinear divergences in the process $q \to q \gamma$ have amplitude squared proportional to $1/\theta_\gamma$, where $\theta_\gamma$ is the emission angle. For a quark with transverse momentum $p_T$ and a photon with transverse momentum $p_{T\gamma}$, the cross section for an isolated photon in the presence of a collinear divergence scales like:
$$\begin{aligned}
\label{eq:collinear_divergence_a}
\sigma_{\text{SD}}
& \propto \int \frac{\text{d} \theta_\gamma}{\theta_\gamma} \int \text{d}p^2_T \, \Theta\left[p_{T\gamma}\, \frac{z_{\text{cut}} \left(\frac{\theta_\gamma}{R_0}\right)^{\beta}}{1 - z_{\text{cut}} \left(\frac{\theta_\gamma}{R_0}\right)^{\beta}} - p_T\right], \\[10pt]
& \sim p_{T\gamma}^2 \,\, \frac{(1 - z_{\text{cut}}) \log (1-z_{\text{cut}})+ z_{\text{cut}}}{\beta (1 - z_{\text{cut}})},\end{aligned}$$
\[eq:collinear\_divergence\]
which is clearly convergent. The Heaviside theta function in is the (inverted) soft drop condition in , with the simplifying assumption that $z_{\text{cut}} < \frac{1}{2}$ (which has no effect on the convergence properties). Just as with Frixione isolation, the fact that soft drop isolation is collinear safe eliminates the dependence of perturbative calculations on fragmentation functions.
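Both the convergence and the quoted closed form can be checked numerically; a sketch using a pure-Python midpoint rule, with $x = \theta_\gamma/R_0$ and the inner $p_T^2$ integral already carried out in units of $p_{T\gamma}^2$:

```python
import math

zcut, beta = 0.1, 2.0

def integrand(x):
    """Inner pT^2 integral of the theta function gives (z x^b / (1 - z x^b))^2
    in units of pT_gamma^2; the remaining measure is dx / x."""
    b = zcut * x ** beta / (1.0 - zcut * x ** beta)
    return b * b / x

# Midpoint rule on x in (0, 1); the integrand vanishes like x^3 near x = 0,
# so there is no collinear divergence left to regulate.
N = 200000
numeric = sum(integrand((i + 0.5) / N) for i in range(N)) / N

analytic = ((1 - zcut) * math.log(1 - zcut) + zcut) / (beta * (1 - zcut))
print(numeric, analytic)
```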
Crucially, the soft drop condition does not restrict the phase space of infinitesimally soft gluons, since infinitesimal radiation always satisfies . Infrared divergences from soft gluons have amplitude squared proportional to $1/p_T^2$. For a gluon with transverse momentum $p_T$, the cross section for an isolated photon in the presence of an infrared divergence scales like:
$$\begin{aligned}
\sigma_{\text{SD}}
& \propto \int \text{d} \theta_\gamma \int^{p_{T \gamma}^2} \frac{\text{d}p^2_T}{p_T^2} \, \Theta\left[p_{T\gamma}\, \frac{z_{\text{cut}} \left(\frac{\theta_\gamma}{R_0}\right)^{\beta}}{1 - z_{\text{cut}} \left(\frac{\theta_\gamma}{R_0}\right)^{\beta}} - p_T\right],
\label{eq:infrared_divergence_1} \\[10pt]
& \sim R_0\,\left(\log\left(z_{\text{cut}}\right) - \beta\right),
\label{eq:infrared_divergence_2}\end{aligned}$$
\[eq:infrared\_divergence\]
which is again convergent. In , we have used the plus prescription to perform the integral over $p_T$, which is valid since we have not restricted the phase space of infinitesimally soft gluons and thereby ensured that real-virtual cancellation will occur.
Because soft drop isolation is based on declustering, it is easy to check that infrared and collinear safety persists with multiple emissions. Each step in the declustering procedure acts on two subjets, so the way the algorithm handles divergence structures will be the same at each step. In this way, soft drop isolation gives an infrared- and collinear-safe definition for isolated photons.
Soft drop isolation is democratic {#sec:2.2}
---------------------------------
The terms “democratic isolation” and “the democratic approach” have typically referred to a particular form of isolation pioneered in the LEP era for the study of the photon fragmentation function [@Buskulic:1995au; @Glover:1993xc; @GehrmannDeRidder:1997wx]. In traditional democratic isolation, the entire event is clustered into jets, including both photons and hadrons. This step, which treats photons and hadrons equally, is the origin of the term “democratic”; undemocratic criteria such as Frixione isolation and cone isolation instead center the isolation scheme around the photon. Following the jet clustering step, a photon is defined to be isolated if it accounts for the majority of the energy of its parent jet. However, traditional democratic isolation is essentially just a clustering-based form of cone isolation and correspondingly suffers from the same problem of collinear unsafety.
As is clear from the definition in , soft drop isolation is a democratic criterion. Much like traditional democratic isolation, soft drop isolation begins by clustering the particles in an event democratically into jets. It is only after the jet has been completely declustered that the soft drop isolation algorithm distinguishes between photons and other particles. Unlike traditional democratic isolation, though, soft drop isolation is collinear safe. We believe that soft drop isolation is the first democratic collinear-safe photon isolation criterion.
As a democratic criterion, the logic of soft drop isolation is different from that of undemocratic criteria. Instead of testing whether a photon is isolated, soft drop isolation tests whether a jet contains an isolated photon. Democratic isolation techniques are thus more natural for cases where one is testing for multiple isolated photons or for cases where jets are the most natural object. Frixione isolation or cone isolation, on the other hand, are more natural for testing the hardest photon in an event to see if it is isolated.
The fact that soft drop isolation is democratic leads to some mild differences with Frixione isolation. The reasons for this are twofold. First, the fact that the photon is isolated from a jet with radius $R$ means that this isolation radius is not strictly drawn around the photon: the photon might not be exactly at the jet center. Therefore, there can be some differences when the photon is off-center and there are hard features at a distance $\sim R$ from the photon. This has little effect in practice, however, since isolated photons naturally contain most of the momentum of the jet and therefore appear very close to the jet center. Second, soft drop isolation is applied after the event has already been clustered into jet objects, whereas Frixione isolation is applied before the event has been clustered. Frixione isolation thus can allow low-momentum objects at angles $\Delta R < R_0$, whereas such objects are mostly excluded by soft drop isolation (namely, they can only occur due to deviations of the photon from the jet center). These differences between democratic and undemocratic approaches will be explored further in .
Soft drop’s democratic nature makes it a natural choice for the study of jet structure and substructure. The isolated photon subjet introduced later in is one such example that would be quite unnatural to define with a non-democratic criterion. More broadly, democratic criteria are the natural choice for modern hadron colliders, where jets are ubiquitous objects and clustering techniques like anti-$k_t$ [@Cacciari:2008gp] are now used by default.
Relationship to Frixione isolation {#sec:2.3}
----------------------------------
Given the above discussion, it is perhaps surprising that (democratic) soft drop isolation turns out to be equivalent to (undemocratic) Frixione isolation, at least in a particular limit. For small $R_0$ and small $z_{\text{cut}}$, there are appropriate choices of soft drop parameters such that soft drop isolation and the most common form of Frixione isolation impose the same restriction on two-particle final states. Since this corresponds to the leading (non-trivial) order configuration in , we say that the two criteria are equivalent at leading order.
Frixione or “smooth” isolation [@Frixione:1998jh] has been the preferred photon isolation criterion for perturbative calculations. In contrast to cone isolation, Frixione isolation regulates the collinear divergence by forcing the partonic energy to zero in the collinear limit. In this way, the exact collinear divergence from $q \to q \gamma$ is fully eliminated without in any way restricting the soft phase space, which is required in order to ensure real and virtual cancellation of soft gluon divergences.
Frixione isolation uses an initial angular cut at some radius from the photon $R_0$. The particles within that radius are then required to pass a momentum cut based on an angular function $X(\Delta R)$, typically called a Frixione function. The full condition may be expressed in terms of the transverse momentum $p_{T i}$ and distance to the photon $R_{i,\gamma}$ of each hadronic particle as:[^5] $$\forall\, \Delta R \leq R_0:\quad \sum_{i} \,\, p_{T i} \,\, \Theta\left(\Delta R - R_{i,\gamma}\right) < \, X(\Delta R).$$ There is significant flexibility in the choice of Frixione function $X(\Delta R)$. The most common function used in the literature [@Frixione:1998jh; @Cieri:2015wwa; @Binoth:2010nha; @AlcarazMaestre:2012vp; @Andersen:2014efa; @Aaboud:2017lxm; @Aaboud:2017kff] is: $$\label{eq:frix_x(R)}
X(\Delta R) = p_{T \gamma} \, \epsilon \, \left(\frac{1 - \cos(\Delta R)}{1 - \cos(R_0)}\right)^n.$$ Under the “tight isolation” parameters outlined in the Les Houches Accords [@Andersen:2014efa], typical parameter values are $\epsilon \sim 0.1$ and $n = 1$. Another common implementation [@Cieri:2015wwa; @Andersen:2014efa] uses a fixed $E_T^{\rm iso}$ in place of $p_{T \gamma} \, \epsilon$ in .
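As a concrete illustration, the Frixione condition can be checked directly on a list of hadron transverse momenta and photon distances. The sketch below (particle values are hypothetical, not from any experiment) exploits the fact that the left-hand side of the condition is a step function of $\Delta R$, so the "for all $\Delta R \leq R_0$" requirement only needs to be tested at each particle's radius:

```python
import math

def frixione_X(dR, pt_gamma, R0=0.4, eps=0.1, n=1):
    # the most common Frixione function, with "tight isolation" defaults
    return pt_gamma * eps * ((1 - math.cos(dR)) / (1 - math.cos(R0))) ** n

def frixione_isolated(pt_gamma, particles, R0=0.4, eps=0.1, n=1):
    """particles: list of (pt, dR) pairs, dR = angular distance to the photon.

    The cumulative hadronic pT inside radius dR is a step function of dR,
    so it suffices to test the inequality at each particle's own radius.
    """
    inside = sorted((p for p in particles if p[1] <= R0), key=lambda p: p[1])
    cumulative_pt = 0.0
    for pt, dR in inside:
        cumulative_pt += pt
        if cumulative_pt >= frixione_X(dR, pt_gamma, R0, eps, n):
            return False
    return True
```

Note how the collinear divergence is regulated: a hard particle at small $\Delta R$ always fails, since $X(\Delta R) \to 0$ as $\Delta R \to 0$, while soft wide-angle radiation is admitted.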
At leading order (corresponding to one additional particle within the photon’s isolation cone and taking the small $R_0$ limit), the Frixione isolation condition in becomes: $$\label{eq:frixcosleading}
p_T < p_{T\gamma}\, \epsilon \left(\frac{\Delta R}{R_0}\right)^{2 n}.$$ It should be noted that this form of $X(\Delta R)$ is equivalent to another Frixione function described in , though this function has not found widespread implementation.
Looking at , the leading-order soft drop criterion with $z_{\text{cut}} < \frac{1}{2}$ is: $$\label{eq:sdleading}
p_T < p_{T \gamma} \frac{z_{\text{cut}} \left(\frac{\Delta R}{R_0}\right)^{\beta}}{1 - z_{\text{cut}} \left(\frac{\Delta R}{R_0}\right)^{\beta}}.$$ This is clearly equivalent to in the small $z_{\text{cut}}$ or $\frac{\Delta R}{R_0}$ limits with the identification $z_{\text{cut}} = \epsilon$ and $\beta = 2 n$. We should also note that, given the flexibility in choosing a Frixione function, it is possible to choose $X(\Delta R)$ corresponding exactly to the right-hand side of . This form of Frixione isolation would be fully equivalent to soft drop isolation at leading order.[^6]
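To make the leading-order correspondence concrete, the two bounds can be compared numerically under the tight-isolation identification $z_{\text{cut}} = \epsilon = 0.1$ and $\beta = 2n = 2$ (the photon momentum below is illustrative):

```python
def frixione_lo_bound(pt_gamma, dR, R0=0.4, eps=0.1, n=1):
    # small-R0 Frixione bound on the accompanying hadronic pT
    return pt_gamma * eps * (dR / R0) ** (2 * n)

def softdrop_lo_bound(pt_gamma, dR, R0=0.4, zcut=0.1, beta=2):
    # leading-order soft drop isolation bound (valid for zcut < 1/2)
    x = zcut * (dR / R0) ** beta
    return pt_gamma * x / (1 - x)

# the two bounds differ only by the factor 1/(1 - x), which tends to 1
# as dR/R0 or zcut becomes small
for dR in (0.05, 0.1, 0.2, 0.4):
    print(dR, frixione_lo_bound(125.0, dR), softdrop_lo_bound(125.0, dR))
```

Even at the cone edge $\Delta R = R_0$, the mismatch factor is only $1/(1 - z_{\text{cut}}) \approx 1.11$ for $z_{\text{cut}} = 0.1$, which is why the two criteria track each other so closely in practice.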
Despite the leading-order equivalence of Frixione and soft drop isolation, there are important differences at higher orders. These differences stem from the fact that soft drop isolation is based on clustering, whereas Frixione isolation is based on a more traditional cone approach. The details of which scheme is stricter depend on the precise phase space configuration, and it is not possible to make a general statement about the differences in multi-particle configurations.
In practice, differences due to higher-order configurations are negligible in most realistic settings, as seen in the parton shower study below. Additionally, we found that the two schemes closely match even with the differences between them at $\Delta R \sim R_0$. Instead, the primary differences between the two schemes stem from the fact that soft drop isolation is democratic, as already discussed in .
Parton shower study {#sec:2.4}
-------------------
As a practical test of soft drop isolation, we now perform a parton shower study of isolated photon production in the $\gamma$+jet(s) final state. Not surprisingly given their leading-order equivalence, we find that soft drop and Frixione isolation perform nearly identically, though soft drop isolation’s democratic construction leads to some differences in angular distributions.
We generated events in <span style="font-variant:small-caps;">Pythia</span> 8.223 [@Sjostrand:2006za; @Sjostrand:2014zea] from proton-proton collisions with center-of-mass energy 13 TeV, using the default settings for hadronization and underlying event. We created a sample of 800,000 events from the <span style="font-variant:small-caps;">Pythia</span> `PromptPhoton` process, which encodes Compton-like processes that produce a hard photon.[^7] In total, <span style="font-variant:small-caps;">Pythia</span> produces photons from the hard scattering process, initial-state radiation (ISR), final-state radiation (FSR), and final-state hadron decays (primarily from neutral pions). Though not shown, we also tested a similar sample of `HardQCD` events, which encodes $2 \rightarrow 2$ QCD processes that can produce isolated photons from extra initial-state or final-state emissions; the results did not offer any new qualitative insights compared to the `PromptPhoton` sample. Jet clustering and photon isolation were performed using <span style="font-variant:small-caps;">FastJet</span> 3.2.1 [@Cacciari:2011ma]. Soft drop was implemented using the <span style="font-variant:small-caps;">FastJet Contrib</span> 1.026 `RecursiveTools` package [@fastjetcontrib].
For our event selection, we require an isolated photon with $p_{T\gamma} > 125$ GeV and one hadronic jet with $p_{T\text{jet}} > 100$ GeV. We use the condition $p_{T X} > 25$ GeV to define any additional jets that might appear in the event. A rapidity cut of $|y| < 2$ was applied to the final photon and jet objects after jet clustering. These selection criteria were chosen to roughly match a photon isolation study from ATLAS [@Aaboud:2017kff]. For each isolation criterion, we use the tight isolation parameters: $z_{\text{cut}} = \epsilon = 0.1$, $\beta = 2n = 2$, and $R_0 = 0.4$ [@Andersen:2014efa]. We used <span style="font-variant:small-caps;">Pythia</span> truth information to perform particle identification.
Because of the democratic versus undemocratic distinction, we had to use slightly different photon selection schemes for soft drop and Frixione isolation. For soft drop isolation, we first clustered the event into $R = 0.4$ jets with $p_{T X} > 25$ GeV and tested each jet for an isolated photon with $p_{T\gamma} > 125$ GeV and $|y_{\gamma}| < 2$; the remaining hadrons from the isolated-photon jet were discarded. For Frixione isolation, every photon with $p_{T\gamma} > 125$ GeV and $|y_{\gamma}| < 2$ was tested for isolation; if such a photon was found, then the rest of the event was clustered into $R = 0.4$ jets. In the case where an event contained multiple isolated photons, we used only the hardest isolated photon.
[0.47]{} ![ Inclusive $\gamma$+jet production cross sections from the <span style="font-variant:small-caps;">Pythia</span> `PromptPhoton` process, comparing the spectrum of soft drop isolation, Frixione isolation, and the hardest photon without isolation. **(a)** Photon transverse momentum $p_{T \gamma}$. **(b)** Angle $R_{\gamma X}$ between the photon and the nearest object with $p_{T X} > 25$ GeV. In both figures, the bottom panels show the ratios to the non-isolated case, and the shading indicates statistical uncertainties. Although the $p_{T \gamma}$ spectra are nearly identical, there are significant differences in the $R_{\gamma X}$ spectra due to soft drop isolation’s democratic nature.](figures/promptspectrum_pt.pdf "fig:")
[0.47]{} ![ Inclusive $\gamma$+jet production cross sections from the <span style="font-variant:small-caps;">Pythia</span> `PromptPhoton` process, comparing the spectrum of soft drop isolation, Frixione isolation, and the hardest photon without isolation. **(a)** Photon transverse momentum $p_{T \gamma}$. **(b)** Angle $R_{\gamma X}$ between the photon and the nearest object with $p_{T X} > 25$ GeV. In both figures, the bottom panels show the ratios to the non-isolated case, and the shading indicates statistical uncertainties. Although the $p_{T \gamma}$ spectra are nearly identical, there are significant differences in the $R_{\gamma X}$ spectra due to soft drop isolation’s democratic nature.](figures/promptspectrum_dr.pdf "fig:")
In , we show the photon $p_T$ spectrum for each isolation scheme, as well as for the hardest photon (isolated or not) in each event. The soft drop and Frixione distributions are nearly identical, showing that the differences between soft drop and Frixione isolation arising from higher-order effects mentioned in are extremely small in practice. There are on average 5% differences between the isolated photon spectra and the hardest photon spectrum, indicating that both isolation schemes properly identify direct photons. Notably, the two isolated spectra exhibit average differences of less than 0.1% (below the precision of this study), showing that soft drop isolation and Frixione isolation perform nearly identically.
In , we show the angular distance $R_{\gamma X}$ between the isolated photon and the nearest inclusive jet with $p_{T X} > 25$ GeV and $|y_X| < 2$. As expected, the isolated photon spectra are significantly reduced compared to the non-isolated spectrum for $R_{\gamma X} < 0.4$. The soft drop and Frixione distributions are very similar for $R_{\gamma X}$ much larger than $0.4$, but there are significant differences between the two isolation schemes in the transition region around $R_{\gamma X} = 0.4$.
For $R_{\gamma X} < 0.4$, these differences are not due to any differences in strictness but rather to soft drop’s democratic construction. Because in Frixione isolation the clustering happens after the isolation step, it is possible for low-energy objects within the photon’s isolation cone to become part of one of the inclusive jets $X$. In contrast, soft drop isolation performs the clustering before the isolation step. Therefore, the only cases in which $R_{\gamma X} < 0.4$ would be permitted are those where the photon is significantly off-center from the jet axis. These cases are exceedingly rare, and as such, the soft drop isolation spectrum exhibits a relatively hard cutoff at $R_{\gamma X} = 0.4$. We suspect that this hard cutoff behavior will be desirable for future direct photon studies at the LHC.
For $R_{\gamma X} \simeq 0.4$, soft drop isolation is more strict than Frixione isolation due to the difference in defining an isolation region through clustering versus through cones. In soft drop isolation, hard objects at $R_{\gamma X}$ slightly greater than 0.4 will often cluster with the photon. In Frixione isolation, by contrast, hard objects at this distance will not factor into the isolation, as they fall outside of the isolation cone. The result is that we expect soft drop isolation to be somewhat stricter in such configurations. This can be observed in , where the soft drop isolation spectrum is suppressed relative to Frixione isolation in the approximate region $0.4 < R_{\gamma X} < 0.7$.
We used <span style="font-variant:small-caps;">Pythia</span> truth information to analyze the performance of each isolation scheme as applied in the above study. Although in the plots above we used only the hardest isolated photon in the event, the following efficiency values include all photons that passed the initial $p_T$ and $y$ cuts. Soft drop isolation and Frixione isolation each had around 90% efficiency for tagging direct photons as prompt photons. Both isolation criteria achieved 100% rejection of indirect photon backgrounds from final-state hadron decays (limited by the statistics of our sample). For FSR, which can generate photons both collinear to and well separated from jets, we analyzed both wide-angle radiation, defined as emissions with angle $\Delta R > 0.4$, and collinear radiation, defined as emissions with angle $\Delta R < 0.4$. Both isolation criteria tagged 53% of photons from wide-angle FSR as prompt and achieved more than 99% rejection of collinear FSR.
The above study validates the use of soft drop isolation to identify direct photons. In the context of <span style="font-variant:small-caps;">Pythia</span>, the level of background rejection from both isolation criteria is so high that it was difficult to get a reliable sample of isolated photons from collinear FSR or hadron decays. Although the above analysis indicates that soft drop isolation and Frixione isolation give very similar indirect photon background rates when using the tight isolation parameters, a detailed study with a detector simulation (including particle identification that accounts for photon conversion and collinear pion decays) would be needed to fully quantify the differences.
Exposing the QED splitting function {#sec:3}
===================================
Because soft drop isolation is democratic, we can naturally use it in contexts where photons play a key role in the substructure of a jet. The goal of this study is to use the kinematics of isolated photon subjets to expose the QED $q \to q \gamma$ splitting function. We first give a concrete definition of an isolated photon subjet in . We then calculate the kinematics of the isolated photon subjet to order $\alpha_e$ in the collinear limit in and show that the photon momentum fraction is directly given by the QED splitting function. We extend this calculation to order $\alpha_e \alpha_s$ in and show that the qualitative features do not change. In , we test this procedure with a parton shower generator, where we find behavior consistent with the analytic calculations.
Definition of an isolated photon subjet {#sec:3.1}
---------------------------------------
Our definition of an isolated photon subjet uses a combination of soft drop declustering and soft drop isolation to identify a quark-like jet with photon substructure. We begin with a jet of radius $R$ obtained through some clustering algorithm (anti-$k_T$ in our study). Soft drop is then applied to the jet with $z_{\text{cut}} = 0.1$, $\beta = 0$, and radius parameter $R_0 = R$, such that soft drop acts like the modified Mass Drop Tagger (mMDT) [@Dasgupta:2013ihk]. Events that pass this step now have two-prong substructure, and analogous to the QCD splitting function study of , the choice $\beta = 0$ ensures that the $z$ distribution of the resulting subjets is not biased. We then decluster the soft-dropped jet into its two constituent subjets and apply soft drop isolation to each subjet with $z_{\text{cut}} = 0.1$, $\beta = 2$, and radius parameter $R_0 = R_{12}/2$.[^8] If exactly one of the subjets passes soft drop isolation, it is labeled as an isolated photon subjet.
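Both stages apply the same pairwise soft drop inequality, only with different $(\beta, R_0)$. A minimal sketch with toy prong values (not a FastJet implementation) shows how the grooming stage ($\beta = 0$) and the isolation stage ($\beta = 2$) reuse it:

```python
def passes_soft_drop(pt1, pt2, dR12, zcut, beta, R0):
    """Generic soft drop condition on a pair of prongs:
    min(pt1, pt2) / (pt1 + pt2) > zcut * (dR12 / R0)**beta."""
    z = min(pt1, pt2) / (pt1 + pt2)
    return z > zcut * (dR12 / R0) ** beta

# Stage 1 (mMDT, beta = 0): the jet must resolve into two hard prongs.
print(passes_soft_drop(300.0, 60.0, 0.3, zcut=0.1, beta=0, R0=0.4))  # hard pair kept
print(passes_soft_drop(300.0, 20.0, 0.3, zcut=0.1, beta=0, R0=0.4))  # soft prong groomed

# Stage 2 (isolation, beta = 2, R0 = R12/2): the photon subjet is isolated
# only if every branching containing the photon FAILS the condition, so
# that all accompanying radiation is groomed away.
print(passes_soft_drop(150.0, 1.0, 0.05, zcut=0.1, beta=2, R0=0.15))  # groomed -> isolated
```

The asymmetry between the stages is the point: the grooming stage must *keep* a two-prong structure, while the isolation stage must *fail to keep* anything alongside the photon.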
![Example jet with an isolated photon subjet from a $q \rightarrow q \gamma$ splitting. For the initial soft drop, denoted $\rm{SD}^{\rm jet}_{\beta = 0}$ (equivalent to mMDT [@Dasgupta:2013ihk]), we used parameters $z_{\rm cut} = 0.1$, $\beta = 0$, and $R_0 = R = 0.4$. For the subjet isolation criterion, denoted $\rm{SD}^{\gamma}_{\beta = 2}$, we used parameters $z_{\rm cut} = 0.1$, $\beta = 2$, and $R_0 = R_{12}/2$, where $R_{12}$ is the angle between the two subjets.[]{data-label="fig:jetimage"}](figures/jetimage.pdf)
In , we show an example jet from <span style="font-variant:small-caps;">Pythia</span> that contains an isolated photon subjet. The details of the event generation will be given in . We see that the first step of soft drop declustering has decreased the active area [@Cacciari:2008gn] from the original jet (black, dotted) to the blue and orange subjets. The blue subjet consists of only a single photon. The orange, dashed subjet arises from the showering and hadronization of a quark parton. Using the <span style="font-variant:small-caps;">Pythia</span> event record, we can verify that this configuration does indeed arise from a $q \rightarrow q \gamma$ splitting.
The momentum fraction of the isolated photon subjet provides a novel way to expose the QED splitting function, both in perturbative calculations and in experiment. The QED splitting function, given in , describes the probability distribution of the momentum sharing $z$ between the photon and the quark. We define the isolated photon momentum sharing as $$\label{eqn:def-ziso}
{z_{\rm iso}}= \frac{p_{T \text{$\gamma$-sub}}}{p_{T\text{$\gamma$-sub}} + p_{T \text{had-sub}}},$$ as a proxy for the partonic $z$, where $p_{T \text{$\gamma$-sub}}$ is the transverse momentum of the isolated photon subjet and $p_{T \text{had-sub}}$ is the transverse momentum of the other (hadronic) subjet.[^9] In order to eliminate the primary background from meson decays, we implemented a simple cut on the angle between the two subjets $R_{12} > \theta_{\text{min}}$; a similar cut was used in the CMS study of the QCD splitting function [@Sirunyan:2017bsd]. The details of this cut are discussed further in . Note that with this $\theta_{\text{min}}$ restriction, the ${z_{\rm iso}}$ observable is infrared and collinear safe, not just Sudakov safe [@Larkoski:2013paa; @Larkoski:2015lea].
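The observable itself is trivial to compute from the two subjet momenta; a minimal helper (function name is ours, not from any library) with the $\theta_{\text{min}}$ cut built in:

```python
def z_iso(pt_gamma_sub, pt_had_sub, R12, theta_min=0.1):
    """Photon momentum sharing of an isolated photon subjet pair.
    Returns None when the pair fails the minimum subjet-angle cut."""
    if R12 <= theta_min:
        return None
    return pt_gamma_sub / (pt_gamma_sub + pt_had_sub)
```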
Order $\alpha_e$ calculation {#sec:3.2}
----------------------------
We now calculate the differential cross section in ${z_{\rm iso}}$ to lowest non-trivial order, focusing on the collinear limit in the fixed-coupling approximation. At order $\alpha_e$, the cross section is quite simple to evaluate. There is only one term that contributes, corresponding to the single quark-photon branching from . The cross section can be expressed in terms of the initial quark cross section $\sigma_q$, the quark charge $e_q$, the emission angle $\theta_{\gamma}$, the momentum sharing $z_{\gamma}$, and the order $\alpha_e$ isolated photon subjet condition $\Theta_{(1,0)} $ as: $$\label{eq:lowestordercrosssection}
\frac{\text{d}\sigma_{(1,0)}}{\text{d} {z_{\rm iso}}} = \,
\int \text{d}\sigma_q \,\, \frac{\alpha_e e_q^2}{2 \pi} \,\,\frac{\text{d} \theta_{\gamma}}{\theta_{\gamma}} \,\, \text{d} z_{\gamma} \, P(z_{\gamma})\,\,\Theta_{(1,0)} ,$$ where the notation $(m,n)$ refers to the order $\alpha_e^m \alpha_s^n$.
Because at this order the jet consists of only a quark and a photon, the procedure in always identifies a quark subjet and a photon subjet, which is automatically an isolated subjet. The only conditions are that the two particles fall within the jet radius, that the jet as a whole pass the initial soft drop condition, and that the two subjets pass the minimum relative-angle condition: $$\Theta_{(1,0)} =
\Theta\left[z_{\gamma} - z_{\text{cut}}\right]
\Theta\left[\left(1 - z_{\gamma}\right) - z_{\text{cut}}\right]
\delta\left[{z_{\rm iso}}- z_{\gamma}\right]
\Theta\left[R - \theta_{\gamma}\right]
\Theta\left[\theta_{\gamma} - \theta_{\text{min}}\right].$$ Inserting this into , our cross section neatly factorizes into angular and momentum-fraction components, yielding a ${z_{\rm iso}}$ distribution that is directly proportional to the splitting function: $$\begin{split}
\label{eq:lowestorderfinalanswer}
\frac{\text{d} \sigma_{(1,0)}}{\text{d} {z_{\rm iso}}} &=
\sigma_q \frac{\alpha_e e_q^2}{2 \pi} \, \int_{\theta_{\text{min}}}^{R} \, \frac{\text{d} \theta_{\gamma}}{\theta_{\gamma}} \, \int_{z_{\text{cut}}}^{1 - z_{\text{cut}}}\, \text{d} z_{\gamma} \, P(z_{\gamma}) \,
\delta\left[{z_{\rm iso}}- z_{\gamma}\right]\\
&= \sigma_q \frac{\alpha_e e_q^2}{2 \pi} \, \log\left(\frac{R}{\theta_{\text{min}}}\right) \,\,P({z_{\rm iso}}) \,\Theta\left[{z_{\rm iso}}- z_{\text{cut}}\right]\,\Theta\left[1 - z_{\text{cut}} - {z_{\rm iso}}\right].
\end{split}$$ Thus, at order $\alpha_e$ the isolated photon subjet momentum fraction directly exposes the QED $q \to q \gamma$ splitting function.
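The leading-order result can be checked numerically: the normalized ${z_{\rm iso}}$ density is just the splitting function restricted to $[z_{\text{cut}}, 1 - z_{\text{cut}}]$. The sketch below assumes the standard collinear form $P(z) = (1 + (1-z)^2)/z$ for the photon momentum fraction:

```python
import math

def qed_splitting(z):
    # collinear q -> q gamma splitting function, z = photon momentum fraction
    return (1 + (1 - z) ** 2) / z

def lo_ziso_density(z, zcut=0.1):
    """Order alpha_e z_iso probability density: P(z) normalized on
    [zcut, 1 - zcut].  Since (1 + (1-z)^2)/z = 2/z - 2 + z, the
    antiderivative is 2 ln z - 2z + z^2/2."""
    if not zcut < z < 1 - zcut:
        return 0.0
    F = lambda x: 2 * math.log(x) - 2 * x + x * x / 2
    return qed_splitting(z) / (F(1 - zcut) - F(zcut))
```

Note that the angular factor $\log(R/\theta_{\text{min}})$ cancels in the normalized density, so at this order the shape in ${z_{\rm iso}}$ is independent of the angular cuts.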
The initial quark cross section $\sigma_q$ is the cross section for quark jet production at the $p_T$ scale of the calculation. At order $\alpha_e$, $\sigma_q$ appears only as a factor in normalization; at order $\alpha_e \alpha_s$, where both quark jet and gluon jet terms contribute, the ratio of $\sigma_q$ to its gluon jet production counterpart $\sigma_g$ is relevant. These values are discussed in detail in .
Order $\alpha_e \alpha_s$ calculation {#sec:3.3}
-------------------------------------
Going to higher orders, one might worry that the simple behavior in would be spoiled by QCD radiation within the jet. This turns out not to be the case. The reason is that the isolated photon subjet condition regulates singularities collinear to the photon, such that higher-order terms in the inclusive parton-photon fragmentation function are controlled without diminishing the order $\alpha_e$ splitting function. Although there are still higher-order corrections, they are significantly reduced compared to the raw fragmentation function. In this way, the isolated photon subjet more directly exposes the QED splitting function instead of merely exposing the parton-photon fragmentation function.
We can verify the above statements by performing a calculation of the ${z_{\rm iso}}$ distribution at order $\alpha_e \alpha_s$. At this order, analytic calculations of the cross section become considerably more involved, even restricting to the collinear limit with fixed coupling and strongly-ordered emissions. Two terms contribute to the cross section: the case in which an initial quark emits a photon and a gluon (), and the case in which an initial gluon splits into a quark-antiquark pair, one of which then radiates a photon (). Of these two terms, the initial-quark case is dominant, as the initial gluon will be almost entirely excluded by the subjet isolation step.
We work in the strongly-ordered limit, with the emission ordering determined by a generalized virtuality $Q = z(1 - z)\theta^n$. By changing the value of $n$, we can get a sense of the uncertainties in our calculation, though we emphasize that we have not performed a comprehensive uncertainty estimate. The choice $n = 1$ corresponds to $k_t$ ordering, $n = 2$ corresponds to a mass ordering, and we also test $n = 1/2$ for completeness. For the initial-quark diagram in , the ordering determines whether the gluon or the photon is emitted first. For the initial-gluon diagram in , the gluon-to-quarks splitting is required to occur first.
The total differential cross section in the observable ${z_{\rm iso}}$ can be expressed in terms of the initial-quark cross section $\sigma_q$, the initial-gluon cross section $\sigma_g$, each emission’s angle $\theta$ and momentum sharing $z$, the azimuthal angle with respect to the jet axis between emissions $\phi$, the $q \rightarrow q \gamma$ and $q \rightarrow q g$ splitting function $P$, the $g \rightarrow q \bar{q}$ splitting function $P_{qg}$, and the order $\alpha_e \alpha_s$ isolated photon subjet condition $\Theta_{(1,1)}$:[^10] $$\begin{split}
\frac{\text{d}\sigma_{(1,1)}}{\text{d} {z_{\rm iso}}} = \,&\int
\text{d}\sigma_q \,\, \frac{\alpha_e e_q^2}{2 \pi} \,\frac{\text{d} \theta_{\gamma}}{\theta_{\gamma}} \, \text{d} z_{\gamma} \, P(z_{\gamma})
\,\,
\frac{\alpha_s C_F}{2 \pi} \,\frac{\text{d} \theta_{g}}{\theta_{g}} \,\text{d} z_{g} \, P(z_{g})
\,\,
\frac{\text{d} \phi}{2\pi} \,\Theta_{(1,1)} \left[p_q, p_g, p_{\gamma}\right] \\[10pt]
~+2\,&\int
\text{d}\sigma_g \,\,
\frac{\alpha_s T_F}{2 \pi} \,\frac{\text{d} \theta_{q}}{\theta_{q}} \,\text{d} z_{q} \, P_{qg}(z_{q})
\,\,
\frac{\alpha_e e_q^2}{2 \pi} \,\frac{\text{d} \theta_{\gamma}}{\theta_{\gamma}} \, \text{d} z_{\gamma} \, P(z_{\gamma})
\,\,
\frac{\text{d} \phi}{2\pi} \,\Theta_{(1,1)} \left[p_q, p_{\bar{q}}, p_{\gamma}\right].
\end{split}
\label{eq:aeascross}$$
For simplicity of presentation, we do not give the precise functional form for $\Theta_{(1,1)}$. This function contains the clustering, initial soft drop, and subjet isolation steps and depends on the four-momenta of the final-state particles. These four-momenta in turn depend on how the branching variables are mapped to physical kinematics. We choose to define four-momenta by conserving three-momentum at each branching; we do not conserve energy in this process, which is consistent in the collinear limit. For the branching $A \to BC$ of a particle with initial momentum $p_0$ and kinematics $z$, $\theta$, and $\phi$, the resulting four-momenta are defined as:
$$\begin{aligned}
p_{A} &= p_0\, \{1,\, 0,\, 0,\,1\},\\
p_{B} &= p_0\,
\left\{z \sqrt{1 + (1-z)^2 \theta^2},\, -z\,(1 - z)\,\theta\, \cos \phi,\, -z\,(1 - z)\,\theta\, \sin \phi,\, z \right\},
\\
p_{C} &= p_0\,
\left\{(1 - z) \sqrt{1 + z^2 \theta^2},\, z\,(1 - z)\,\theta\, \cos \phi,\, z\,(1 - z)\,\theta\, \sin \phi,\, 1 - z \right\}.\end{aligned}$$
Because the ordering of emissions changes how momentum is conserved, the virtuality ordering is implicitly contained in the expressions for the four-momenta. While it is possible to express $\Theta_{(1,1)}$ in terms of the splitting kinematics (and we have), it is tedious and unenlightening.
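The momentum map above is straightforward to implement and verify; a minimal sketch that conserves three-momentum exactly and leaves both daughters massless, following the formulas for $p_B$ and $p_C$:

```python
import math

def branch(p0, z, theta, phi):
    """Daughter four-momenta (E, px, py, pz) for A -> B C with A along z.
    Three-momentum is conserved exactly; energy only up to O(theta^2),
    which is consistent in the collinear limit."""
    kt = z * (1 - z) * theta * p0   # relative transverse momentum
    pB = (p0 * z * math.sqrt(1 + (1 - z) ** 2 * theta ** 2),
          -kt * math.cos(phi), -kt * math.sin(phi), p0 * z)
    pC = (p0 * (1 - z) * math.sqrt(1 + z ** 2 * theta ** 2),
          kt * math.cos(phi), kt * math.sin(phi), p0 * (1 - z))
    return pB, pC
```

One can check by hand that $|\vec{p}_B|^2 = k_t^2 + z^2 p_0^2 = z^2 p_0^2 (1 + (1-z)^2\theta^2) = E_B^2$, so both daughters are exactly massless despite the approximate energy conservation.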
In practice, we use Monte Carlo integration to perform the integral in . We generate “events” with each parameter $z$ and $\theta$ selected according to a uniform distribution with a lower bound of $0.001$, and $\phi$ distributed uniformly in $[0,2\pi)$. Each event is assigned a weight equal to the integrand in . To implement the plus prescription on $z_g$ in the initial quark case, for each event with an initial quark, we generate a second event with the same values of $\{z_{\gamma}, \theta_{\gamma}, \theta_g\}$, a negative weight, and $z_g$ selected according to a uniform distribution over $[0,0.001)$. We use the splitting kinematics to construct three massless four-vectors, after which we use the same <span style="font-variant:small-caps;">FastJet</span> tools as in to implement the isolated photon subjet procedure.
Although the kinematics of are independent of the jet momentum scale, the parameters $\sigma_q$, $\sigma_g$, and $\alpha_s$ all depend on the momentum. We performed our analysis at jet transverse momenta of $p_T = \{100, 200, 400, 800\}$ GeV. The initial quark jet cross section $\sigma_q$ and the gluon jet cross section $\sigma_g$ were determined for each momentum in <span style="font-variant:small-caps;">Pythia</span>. At 400 GeV, we obtained $\sigma_q/\sigma_g = 0.63$. We assume flavor universality throughout, such that the ${z_{\rm iso}}$ distribution does not depend on the quark charges except as a normalization. At each energy we used a fixed-coupling approximation for the value of $\alpha_s$, evaluated at $\mu = p_T \, R$: $$\alpha_s(\mu^2) = \frac{\alpha_s(m_Z^2)}{1 + \alpha_s(m_Z^2) \,\, b_0 \log\left(\frac{\mu^2}{m_Z^2}\right)},$$ where $b_0 = (33 - 2 N_f)/(12 \pi)$. Here, $N_f$ is the number of flavors available at the scale $\mu$.
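The one-loop formula is simple to evaluate; a small sketch, assuming the world-average reference value $\alpha_s(m_Z^2) \approx 0.118$ (the text does not quote one) and a fixed $N_f = 5$ at these scales:

```python
import math

MZ = 91.1876          # Z boson mass in GeV
ALPHAS_MZ = 0.118     # assumed reference value alpha_s(mZ^2)

def alpha_s(mu, nf=5):
    """One-loop running coupling at scale mu (GeV), fixed flavor number."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return ALPHAS_MZ / (1 + ALPHAS_MZ * b0 * math.log(mu ** 2 / MZ ** 2))

# evaluate at mu = pT * R for the jet momenta used in the text (R = 0.4)
for pt in (100, 200, 400, 800):
    print(pt, round(alpha_s(pt * 0.4), 4))
```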
![ Probability densities for isolated photon subjet momentum fraction ${z_{\rm iso}}$ at order $\alpha_e$ and order $\alpha_e \alpha_s$ in the collinear limit. Shown are results at $p_T = \{100, 200, 400, 800\}$ GeV.[]{data-label="fig:theor_results"}](figures/pythia_zs_theor.pdf)
In , we show the order $\alpha_e \alpha_s$ probability densities in ${z_{\rm iso}}$. Compared to the order $\alpha_e$ cross section, the $\alpha_e \alpha_s$ terms yield at most a 10% suppression, and as such, the ${z_{\rm iso}}$ distribution largely resembles the basic quark-photon splitting function. The order $\alpha_e \alpha_s$ initial gluon term is for the most part suppressed by a factor of $\sim 0.1$ compared to the order $\alpha_e \alpha_s$ initial quark term and contributes a correction to the order $\alpha_e$ result of at most $1\%$. Changing the ordering exponent between $n = 1/2$ and $n = 2$ has an effect of at most $4 \%$, so we expect that including higher-order contributions to the cross section or relaxing the strong-ordering assumption would have a mild impact on the final shape of the distribution.
Parton shower study {#sec:3.4}
-------------------
We now perform a parton shower study in <span style="font-variant:small-caps;">Pythia</span> 8.223, with the aim of testing the robustness of the ${z_{\rm iso}}$ distribution to hadronization effects.[^11] We generate events from the `HardQCD` process, which encodes $2\rightarrow2$ hard QCD events. We generate event samples for $p_{T\text{min}} = \{100, 200, 400, 800\}$ GeV, each with 20 million events.[^12] Because the efficiency for finding isolated photon subjets is so small, we turn off ISR and underlying event to speed up event generation, leaving all other <span style="font-variant:small-caps;">Pythia</span> settings at their default values. Since the isolated photon subjet condition is based on jet grooming, we do not expect these modifications to make a large impact on our results, though a detailed study of these effects is warranted.
Events were clustered into anti-$k_T$ jets of radius $R = 0.4$ with a transverse momentum cut $p_{T\text{jet}} > p_{T\text{min}}$ and a rapidity cut $|y_{\text{jet}}| < 2$. The clustering step and the isolated photon subjet step were implemented using <span style="font-variant:small-caps;">FastJet</span> and <span style="font-variant:small-caps;">FastJet Contrib</span>, with the same code as for the order $\alpha_e \alpha_s$ calculation in . As in , we used <span style="font-variant:small-caps;">Pythia</span> truth information to perform particle identification.
[0.47]{} ![**(a)** Top: <span style="font-variant:small-caps;">Pythia</span> cross sections of the $q \rightarrow q \gamma$ signal as a function of $\theta_{\rm min}$, given as a ratio to the cross section at $\theta_{\rm min} = 0$. The signal also decreases with $p_T$, and we found $\sigma_S(\theta_{\rm min} = 0) = \{1000, 96, 6.2, 0.22\}$ pb at $p_{T \rm min}= \{100, 200, 400, 800\}$ GeV. The background does not include “fake” photons from collinear $\pi^0 \rightarrow \gamma \gamma$ decays. Bottom: ratio of the signal cross section to the sum of signal and background cross sections. **(b)** Probability distributions of ${z_{\rm iso}}$ for the isolated photon subjet at order $\alpha_e$, order $\alpha_e \alpha_s$, and in <span style="font-variant:small-caps;">Pythia</span> with $p_{T\text{min}} = 400$ GeV and $\theta_{\text{min}} = 0.1$. ](figures/pythia_thetas.pdf "fig:")
[0.47]{} ![**(a)** Top: <span style="font-variant:small-caps;">Pythia</span> cross sections of the $q \rightarrow q \gamma$ signal as a function of $\theta_{\rm min}$, given as a ratio to the cross section at $\theta_{\rm min} = 0$. The signal also decreases with $p_T$, and we found $\sigma_S(\theta_{\rm min} = 0) = \{1000, 96, 6.2, 0.22\}$ pb at $p_{T \rm min}= \{100, 200, 400, 800\}$ GeV. The background does not include “fake” photons from collinear $\pi^0 \rightarrow \gamma \gamma$ decays. Bottom: ratio of the signal cross section to the sum of signal and background cross sections. **(b)** Probability distributions of ${z_{\rm iso}}$ for the isolated photon subjet at order $\alpha_e$, order $\alpha_e \alpha_s$, and in <span style="font-variant:small-caps;">Pythia</span> with $p_{T\text{min}} = 400$ GeV and $\theta_{\text{min}} = 0.1$. ](figures/pythia_zs_pt400.pdf "fig:")
At low energies and low angles, the isolated photon subjet sample was found to be dominated by neutral pion decays: because the observable identifies the photon “prongs” of a jet, it was in many cases identifying one of the photons produced in such a decay. These contributions are relatively easily avoided by choosing appropriate values for $\theta_{\text{min}}$ and $p_{T\text{min}}$; whereas pion decays become more collinear at higher energies, the angular scale of QED branchings is energy independent. Using <span style="font-variant:small-caps;">Pythia</span> truth information, we were able to identify signal (photons from QED branchings) and background (all other photons). In , we show signal and background rates for isolated photons at different values of $\theta_{\rm min}$ and $p_{T\text{min}}$. We choose to use $p_{T\text{min}} = 400$ GeV and $\theta_{\text{min}} = 0.1$ for the remainder of this study, as these values yielded a signal cross section of around 3 pb for a background cross section of around $0.006$ pb. This corresponds to around 150,000 recorded events for the 45 $\text{fb}^{-1}$ 2017 run of CMS [@CMSLumi], of which only about 300 events would be from the pion background. This value of $\theta_{\text{min}}$ is also a sensible cut from the perspective of the granularity of a typical hadronic calorimeter.
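The quoted event yields follow from simple luminosity arithmetic with the cross sections read off above:

```python
# Expected yields for 45 fb^-1 with the Pythia cross sections quoted above
# (pT_min = 400 GeV, theta_min = 0.1); 1 pb = 1000 fb.
lumi_fb = 45.0           # fb^-1, 2017 CMS run
sigma_signal_pb = 3.0    # q -> q gamma signal
sigma_bkg_pb = 0.006     # pion-decay background

n_signal = sigma_signal_pb * 1000.0 * lumi_fb   # ~1.4e5 signal events
n_bkg = sigma_bkg_pb * 1000.0 * lumi_fb         # a few hundred background events
print(n_signal, n_bkg)
```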
As alluded to in , there is also a potential background from closely collinear $\pi^0 \rightarrow \gamma \gamma$ decays, since in a realistic detector it is possible for two nearly-collinear photons to register as a single photon. To obtain an approximate sense of this background rate, we relaxed our definition of a photon to include two photons within a distance $\Delta R = 0.025$ from each other, roughly corresponding to the granularity of a typical electromagnetic calorimeter (ECAL). At 400 GeV with $\theta_{\rm min} = 0.1$, this yielded a background rate of 6%. The use of shower-shape observables, which are already well studied at both CMS and ATLAS [@Khachatryan:2015iwa; @Aaboud:2016yuq], would mitigate this background. To properly quantify this effect, a full study including detector simulation would be necessary.
In , we show the probability distribution in ${z_{\rm iso}}$ for $p_{T\text{min}} = 400$ GeV and $\theta_{\text{min}} = 0.1$ plotted against the corresponding distributions for order $\alpha_e$ and $\alpha_e \alpha_s$ theoretical results. The <span style="font-variant:small-caps;">Pythia</span> distribution exhibits quite good correspondence with the perturbative results. It appears that the higher-order corrections are somewhat amplified, albeit with the same functional form. This is likely due to non-perturbative effects arising from the non-collinear hadronization of the quark subjet, which introduces some soft radiation into the photon subjet. In order to test the effect of hadronization, we applied the same isolated photon subjet criterion to <span style="font-variant:small-caps;">Pythia</span> events with hadronization disabled and found slightly closer matching to the order $\alpha_e \alpha_s$ distribution.
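For reference, the leading-order shape being probed here is the collinear $q \to q\gamma$ splitting function; in standard QED conventions (with $z$ the photon's momentum fraction and $e_q$ the quark's electric charge),

```latex
P_{q\to\gamma}(z) \;=\; e_q^2\,\frac{1+(1-z)^2}{z}\,,
```

so at order $\alpha_e$ the normalized ${z_{\rm iso}}$ distribution is proportional to this shape on the allowed range above $z_{\rm cut}$, up to normalization and the phase-space cuts of the measurement.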
It is clear from that, even with higher-order effects, the isolated photon subjet exposes the form of the QED splitting function. This parton shower study therefore validates the use of isolated photon subjets to expose the splitting function in realistic collider scenarios.
Conclusion {#sec:conclusion}
==========
In the first half of this paper, we introduced soft drop isolation, a new form of photon isolation based on techniques from jet substructure. Soft drop isolation is infrared and collinear safe and equivalent at leading (non-trivial) order to the most common form of Frixione isolation, making it well suited to perturbative calculations of direct photons. Soft drop isolation is also democratic and based on clustering algorithms, making it well suited to identifying direct photons in jet-rich environments. Together, these features make soft drop isolation a natural choice for photon studies at the LHC.
In the second half of this paper, we turned to indirect photons, using a combination of soft drop declustering and soft drop isolation to define isolated photon subjets. We showed how the momentum fraction carried by isolated photon subjets can be used to expose the QED splitting function, which describes the momentum sharing distribution of quark-photon branchings in the collinear limit. This is a novel test of gauge theories which complements previous soft-drop studies of the QCD splitting function.
As a further extension of this method, soft drop isolation could provide a new way to handle detector granularity. All collinear-safe isolation criteria are complicated by granularity, which forces the isolation to cut off at the detector’s angular resolution when implemented in experiment. This makes matching between calculations (in which there is no cut-off) and experimental implementations more difficult. has addressed this issue for Frixione isolation by using a set of concentric cones instead of a smoothly varying cone. Treating angular resolution with soft drop isolation would be quite straightforward, owing to its clustering basis. One could introduce a parameter $\theta_{\rm min}$ (analogous to that in ) related to the detector’s angular resolution and stop the declustering when the angle between the two subjets was less than $\theta_{\rm min}$. Because the C/A declustering is angular ordered, this means that the isolation would only treat features with angular separation greater than the detector resolution. While this is not identical to the behavior in granular detectors, we expect it to closely approximate that behavior.
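To make the proposed procedure concrete, here is a minimal sketch (our illustration, not the paper's implementation) of angular-ordered declustering with a soft drop test and a granularity cutoff $\theta_{\rm min}$; the tree representation, field names, and parameter values are all assumptions made for the example.

```python
# Toy angular-ordered declustering with soft drop and an angular cutoff.
# A jet is a leaf {"pt": ...} or a node {"pt", "dR", "children": (soft, hard)};
# C/A ordering means "dR" decreases as we decluster toward the leaves.
Z_CUT, BETA, R0 = 0.1, 0.0, 0.8  # illustrative soft drop parameters

def soft_drop_isolated(jet, theta_min):
    """True if no branching above the angular resolution theta_min passes
    the soft drop condition (i.e. the candidate photon is isolated)."""
    while "children" in jet:
        if jet["dR"] < theta_min:
            return True  # below detector resolution: stop declustering
        soft, hard = jet["children"]
        z = soft["pt"] / (soft["pt"] + hard["pt"])
        if z >= Z_CUT * (jet["dR"] / R0) ** BETA:
            return False  # a hard branching: the photon is not isolated
        jet = hard  # groom away the soft branch and keep declustering
    return True  # declustered down to a single constituent

# A soft wide-angle emission is groomed away; a hard one fails isolation;
# a hard branching below theta_min is invisible to the criterion:
soft_wide = {"pt": 500.0, "dR": 0.4, "children": ({"pt": 2.0}, {"pt": 498.0})}
hard_wide = {"pt": 500.0, "dR": 0.4, "children": ({"pt": 100.0}, {"pt": 400.0})}
hard_narrow = {"pt": 500.0, "dR": 0.05, "children": ({"pt": 100.0}, {"pt": 400.0})}
```

The point of the sketch is only the control flow: because the declustering is angular ordered, once the subjet separation drops below $\theta_{\rm min}$ nothing at smaller angles can affect the decision.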
It is possible to envision a number of extensions to the QED splitting analysis performed in . Parallel to the analysis performed in for the QCD splitting function, the isolated photon subjet criterion could be used in combination with flavor tagging to identify heavy-flavor QED splittings. Additionally, the same QED splitting analysis could be performed on leptons. While lepton QED splittings are well studied given the lack of lepton hadronization, it could nevertheless be an interesting test of this new democratic isolation scheme.
Finally, the isolated photon subjet also opens the door to additional photon substructure studies and observables beyond the QED splitting function. In this paper, we analyzed two-prong substructure with one hadronic subjet and one isolated photon subjet; by recursively applying the soft drop condition [@Dreyer:2018tjj], one could study jets with two (or more) isolated photon subjets. Such multi-photon configurations could be interesting for studying photon jets [@Ellis:2012zp]: jets composed primarily of photons that arise from scenarios beyond the standard model. Additionally, isolated photon subjets could be used to tag boosted decays such as $h \rightarrow Z \gamma$ or, more broadly, possible decays to jets and photons of boosted beyond-the-standard-model objects.
Isolated photon subjets provide a powerful framework for the study of QED substructure within QCD jets. We hope that the existence of this technique—and more generally, of a democratic, collinear-safe photon isolation criterion—will encourage the further development of photon-based jet substructure observables.
We thank Frédéric Dreyer, Markus Ebert, Stefano Frixione, Andrew Larkoski, Simone Marzani, and Mike Williams for interesting discussions and feedback on this manuscript. This work was supported by the Office of High Energy Physics of the U.S. Department of Energy (DOE) under grant DE-SC0012567. The work of ZH was supported by the MIT Undergraduate Research Opportunities Program.
[^1]: Traditional cone isolation is collinear unsafe to quark-to-photon fragmentation because of the non-zero energy threshold at zero opening angle. This is logically distinct from the infrared and/or collinear unsafety of certain cone jet algorithms that make use of unsafe seed axes.
[^2]: Strictly speaking, this corresponds to soft drop in “grooming mode” [@Larkoski:2014wba]. In “tagging mode”, the singlet would simply be vetoed.
[^3]: When considering electroweak corrections to photon production, it may also be desirable to label vacuum photon-to-lepton splittings $\gamma \rightarrow \ell^+\,\ell^-$ as singlet photons. We thank Stefano Frixione for discussions on this point.
[^4]: As in , this isolation criterion would not be safe for simultaneous soft and collinear divergences. Luckily, this is not relevant for quark and gluon radiation in the presence of a photon, where only one kind of divergence can appear at a time.
[^5]: Implementations of Frixione isolation often use the transverse energy $E_T$ in place of the transverse momentum $p_T$. Given the ambiguities in defining transverse energy and the assumption of high energies, we will instead use $p_T$ throughout.
[^6]: This equivalence gives another way to understand why, with appropriate choice in parameters, soft drop isolation is safe to infrared and collinear divergences. Just like Frixione isolation, soft drop isolation fully eliminates collinear fragmentation without restricting the soft gluon phase space.
[^7]: To ensure that there were sufficient events at high photon ${p}_T$, we used binned event generation with bin edges imposed on the hard process of $\hat{p}_T = (100,200,300,400,600,800,1000,1500,\infty)$ GeV. The events were then reweighted proportional to the generated cross sections.
[^8]: We also performed a study using $R_0 = R$ in the soft drop isolation criterion (while still applying the isolation only to the subjet constituents); although this version of the criterion does lead to sensible results, we found it to be more sensitive to non-perturbative hadronization effects.
[^9]: We also performed a study using $p_{T \gamma}/p_{T\text{jet,SD}}$ as the proxy for partonic $z$, where $p_{T \gamma}$ is the transverse momentum of the photon as opposed to the entire isolated photon subjet and $p_{T\text{jet,SD}}$ is the transverse momentum of the soft-dropped jet. We elected to use the definition in because it ensures a hard cutoff at ${z_{\rm iso}}= z_{\rm cut}$ and because it is less sensitive to the effects of hadronization.
[^10]: The name $z_g$ for the momentum fraction of the gluon should not be confused with the groomed momentum fraction from .
[^11]: At the perturbative level, <span style="font-variant:small-caps;">Pythia</span> has the same formal accuracy as for a single gluon emission in the collinear limit.
[^12]: In each case, we set the <span style="font-variant:small-caps;">Pythia</span> parameter $\hat{p}_{T\text{min}}$ to be $20\%$ lower than the jet $p_T$ cut. For the final 400 GeV run in , we generated 40 million events in order to decrease the statistical uncertainties.
---
abstract: 'We prove that the modified Korteweg-de Vries (mKdV) equation is unconditionally well-posed in $H^s(\mathbb R)$ for $s> \frac14$. Our method of proof combines the improvement of the energy method introduced recently by the first and third authors with the construction of a modified energy. Our approach also yields *a priori* estimates for the solutions of mKdV in $H^s(\mathbb R)$, for $s>\frac1{10}$.'
address:
- 'Luc Molinet, Laboratoire de Mathématiques et Physique Théorique, Université François Rabelais, Tours, Fédération Denis Poisson-CNRS, Parc Grandmont, 37200 Tours, France.'
- 'Didier Pilod, Instituto de Matemática, Universidade Federal do Rio de Janeiro, Caixa Postal 68530, CEP: 21945-970, Rio de Janeiro, RJ, Brasil.'
- 'Stéphane Vento, Université Paris 13, Sorbonne Paris Cité, LAGA, CNRS ( UMR 7539), 99, avenue Jean-Baptiste Clément, F-93 430 Villetaneuse, France.'
author:
- 'Luc Molinet$^*$, Didier Pilod$^\dagger$ and Stéphane Vento$^*$'
title: 'Unconditional uniqueness for the modified Korteweg-de Vries equation on the line '
---
[^1] [^2]
Introduction
============
We consider the initial value problem (IVP) associated to the modified Korteweg-de Vries (mKdV) equation $$\label{mKdV}
\left\{ \begin{array}{l}\partial_tu+\partial_x^3u+\kappa\partial_x(u^3)=0 \, , \\
u(\cdot,0)=u_0 \, , \end{array} \right.$$ where $u=u(x,t)$ is a real function, $\kappa=1$ or $-1$, $x \in \mathbb R$, $t \in \mathbb R$.
In the seminal paper [@KPV2], Kenig, Ponce and Vega proved the well-posedness of in $ H^s(\R) $ for $ s\ge 1/4 $. This result is sharp in the sense that the flow map associated to mKdV fails to be uniformly continuous in $H^s(\mathbb R)$ if $s<\frac14$ in both the focusing case $\kappa=1$ (cf. Kenig, Ponce and Vega [@KPV3]) and the defocusing case $\kappa=-1$ (cf. Christ, Colliander and Tao [@ChCoTao]). Global well-posedness for mKdV was proved in $H^s(\mathbb R)$ for $s>\frac14$ by Colliander, Keel, Staffilani, Takaoka and Tao [@CKSTT] by using the $I$-method. We also mention that another proof of the local well-posedness result for $s \ge \frac14$ was given by Tao by using the Fourier restriction norm method [@Tao].
The proof of the well-posedness result in [@KPV2] relies on the dispersive estimates associated with the linear group of , namely the Strichartz estimates, the local smoothing effect and the maximal function estimate. A normed function space is constructed based on those estimates and allows one to solve via a fixed point theorem on the associated integral equation. Of course the solutions obtained in this way are unique in this resolution space. The same occurs for the solutions constructed by Tao, which are unique in the space $X_T^{s,\frac12+} $.
The question of whether uniqueness holds for solutions that do not belong to these resolution spaces turns out to be far from trivial at this level of regularity. This kind of question was first raised by Kato [@Ka] in the context of the Schrödinger equation. We refer to such uniqueness in $C([0, T] : H^s(\mathbb R))$, or more generally in $ L^\infty(]0,T[ : H^{s}(\mathbb R))$, without intersecting with any auxiliary function space, as *unconditional uniqueness*. This ensures the uniqueness of weak solutions to the equation at the $ H^s$-regularity. This is useful, for instance, to pass to the limit on perturbations of the equation as the perturbative coefficient tends to zero (see for instance [@M] for such an application).
Unconditional uniqueness was proved to hold for the KdV equation in $L^2(\mathbb R)$ [@Zhou] and in $L^2(\mathbb T)$ [@BaIlTi], and for the mKdV equation in $H^{\frac12}(\mathbb T)$ [@KwonOh].
The aim of this paper is to prove the unconditional uniqueness of the mKdV equation in $H^s(\mathbb R)$ for $s > \frac14$. By doing so, we also provide a different proof of the existence result. Next, we state our main result.
\[maintheo\] Let $s > 1/4$ be given.\
For all $u_0 \in H^s(\mathbb R)$, there exists $T=T(\|u_0\|_{H^s}) >0$ and a solution $u$ of the IVP such that $$\label{maintheo.1}
u \in C([0,T] : H^s(\mathbb R)) \cap L^4_TL^{\infty}_x \cap X^{s-1,1}_T\cap \widetilde{L^{\infty}_T}H^s_x \, .$$
The solution is unique in the class $$\label{maintheo.1b}
u\in L^\infty(]0,T[ : H^{s}(\mathbb R)) \, .$$
Moreover, the flow map data-solution $:u_0 \mapsto u$ is Lipschitz from $H^s(\mathbb R)$ into $C([0,T] : H^s(\mathbb R))$.
We refer to Section 2.2 for the definition of the norms $\|u\|_{\widetilde{L^{\infty}_T}H^s_x}$ and $\|u\|_{X^{s-1,1}_T}$.
According to the equation, the time derivative of a weak solution satisfying belongs to $L^\infty(]0,T[ : H^{s-3}(\mathbb R))$. Thus, such a solution has to belong to $C([0,T] : H^{s-3}(\mathbb R))$, so that the initial value condition in still makes sense.
Our technique of proof also yields *a priori* estimates for the solutions of mKdV in $H^s(\mathbb R)$ for $s>\frac1{10}$. It is worth noting that *a priori* estimates in $H^s(\mathbb R)$ were already proved by Christ, Holmer and Tataru for $-\frac18<s<\frac14$ in [@ChHoTa]. Their proof relies on the short-time Fourier restriction norm method in the context of the atomic spaces $U$, $V$ and the $I$-method. Although our result is not as strong as that of Christ, Holmer and Tataru, we hope that it may still be of interest due to the simplicity of its proof.
\[secondtheo\] Assume that $s>\frac1{10}$. For any $M>0$, there exist a positive time $T=T(M)>0$ and a positive constant $C$ such that for any initial data $u_0 \in H^{\infty}(\mathbb R)$ such that $\|u_0\|_{H^s} \le M$, the smooth solution of satisfies $$\label{secondtheo.1}
\|u\|_{Z^s_T}:=\|u\|_{\widetilde{L^{\infty}_T}H^s_x}+\|u\|_{X^{s-1,1}_T}+\|u\|_{L^4_TL^{\infty}_x} \le C \|u_0\|_{H^s_x} \, .$$
By passing to the limit on a sequence of smooth solutions, the above *a priori* estimate ensures the existence of an $L^\infty_T H^s_x$ weak solution of for $ s>1/10$. Note that, since $ s>0$, there is no difficulty in passing to the limit on the nonlinear term by a compactness argument.
To prove Theorems \[maintheo\] and \[secondtheo\], we derive energy estimates on the dyadic blocks $ \|P_Nu\|_{H^s_x}^2$ by using the norms $\|u\|_{\widetilde{L^{\infty}_T}H^s_x}$ and $\|u\|_{X^{s-1,1}_T}$. This technique was introduced by the first and the third authors in [@MoVe]. Note however that, in addition to using the fractional Leibniz rule to control the $X^{s-1,1}_T$-norm as in [@MoVe], we also introduce the norm $\|\cdot\|_{L^4_TL^{\infty}_x}$. This norm is in turn controlled by using a refined Strichartz estimate derived by chopping the time interval into small pieces whose length depends on the spatial frequency. It is this estimate which provides the restriction $s>\frac1{10}$ in Theorem \[secondtheo\]. Note that it was first established by Koch and Tzvetkov [@KoTz] (see also Kenig and Koenig [@KeKo] for an improved version) in the Benjamin-Ono context.
The main difficulty in estimating $\frac{d}{dt}\|P_Nu\|_{H^s_x}^2$ is to handle the resonant term $\mathcal{R}_N$, typical of the cubic nonlinearity $\partial_x(u^3)$. When $u$ is the solution of mKdV, $\mathcal{R}_N$ reads $\mathcal{R}_N=\int \partial_x\big( P_{+N}uP_{+N}uP_{-N}u\big)P_{-N}u dx$. Actually, it turns out that we can always put the derivative appearing in $\mathcal{R}_N$ on a low frequency product by integrating by parts$\footnote{For technical reasons we perform this integration by parts in Fourier variables.}$, as was done in [@IoKeTa] for quadratic nonlinearities. This allows us to derive the *a priori* estimate of Theorem \[secondtheo\] in $H^s(\mathbb R)$ for $s>\frac1{10}$. Unfortunately, this is not the case anymore for the difference of two solutions of mKdV due to the lack of symmetry of the corresponding equation. To overcome this difficulty, we modify the $H^s$-norm by higher order terms up to order 6. These higher order terms are constructed so that the contribution of their time derivatives coming from the linear part of the equation will cancel out the resonant term $\mathcal{R}_N$. The use of a modified energy is well known to be quite a powerful tool in PDEs (see for instance [@MN] and [@KePi]). Note however that, in our case, we need to define the modified energy in Fourier variables due to the resonance relation associated with the cubic nonlinearity. This way of constructing the modified energy has much in common with the construction of the modified energy in the $I$-method (cf. [@CKSTT]).
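For the reader's convenience, the resonance relation alluded to here is the classical factorization of the Airy symbol, easily checked by direct expansion: for $\xi=\xi_1+\xi_2+\xi_3$,

```latex
\xi^3-\xi_1^3-\xi_2^3-\xi_3^3 \;=\; 3\,(\xi_1+\xi_2)(\xi_1+\xi_3)(\xi_2+\xi_3)\,,
```

which is why localizing the factors $\xi_2+\xi_3$, $\xi_1+\xi_3$ and $\xi_1+\xi_2$ at dyadic scales $M_1$, $M_2$, $M_3$, as done in Section \[Secmultest\], controls the size of the resonance function.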
Finally let us mention that the tools developed in this paper together with some ideas of [@TT] and [@NTT] will enable us, in a forthcoming paper, to get the unconditional well-posedness of the periodic mKdV equation in $ H^s({\mathbb{T}}) $ for $ s>1/3$. We also hope that the techniques introduced here could be useful in the study of the Cauchy problem at low regularity of other cubic nonlinear dispersive equations such as the modified Benjamin-Ono equation and the derivative nonlinear Schrödinger equation.
The rest of the paper is organized as follows. In Section \[notation\], we introduce the notations, define the function spaces and state some preliminary estimates. The multilinear estimates at the $L^2$-level are proved in Section \[Secmultest\]. Those estimates are used to derive the energy estimates in Section \[Secenergy\]. Finally, we give the proofs of Theorems \[maintheo\] and \[secondtheo\] respectively in Sections \[Secmaintheo\] and \[Secsecondtheo\].
Notation, Function spaces and preliminary estimates {#notation}
===================================================
Notation
--------
For any positive numbers $a$ and $b$, the notation $a \lesssim b$ means that there exists a positive constant $c$ such that $a \le c
b$. We also denote $a \sim b$ when $a \lesssim b$ and $b \lesssim
a$. Moreover, if $\alpha \in \mathbb R$, $\alpha_+$, respectively $\alpha_-$, will denote a number slightly greater, respectively smaller, than $\alpha$.
Let us denote by $\mathbb D =\{N>0 : N=2^n \ \text{for some} \ n \in \mathbb Z \}$ the dyadic numbers. Usually, we use $n_i$, $j_i$, $m_i$ to denote integers and $N_i=2^{n_i}$, $L_i=2^{j_i}$ and $M_i=2^{m_i}$ to denote dyadic numbers.
For $N_1, \ N_2 \in \mathbb D$, we use the notation $N_1 \vee N_2=\max\{N_1,N_2\}$ and $N_1 \wedge N_2 =\min\{N_1,N_2\}$. Moreover, if $N_1, \, N_2, \, N_3 \in \mathbb D$, we also denote by $N_{max} \ge N_{med} \ge N_{min}$ the maximum, sub-maximum and minimum of $\{N_1,N_2,N_3\}$.
For $u=u(x,t) \in \mathcal{S}'(\mathbb R^2)$, $\mathcal{F}u$ will denote its space-time Fourier transform, whereas $\mathcal{F}_xu=\widehat{u}$, respectively $\mathcal{F}_tu$, will denote its Fourier transform in space, respectively in time. For $s \in \mathbb R$, we define the Bessel and Riesz potentials of order $-s$, $J^s_x$ and $D_x^s$, by $$J^s_xu=\mathcal{F}^{-1}_x\big((1+|\xi|^2)^{\frac{s}{2}}
\mathcal{F}_xu\big) \quad \text{and} \quad
D^s_xu=\mathcal{F}^{-1}_x\big(|\xi|^s \mathcal{F}_xu\big).$$
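As a numerical illustration of these Fourier multipliers (our sketch, on a periodic grid rather than on the line where the paper works), $J^s$ multiplies the $k$-th Fourier mode by $(1+k^2)^{s/2}$:

```python
import numpy as np

def bessel_potential(u, s):
    """J^s u = F^{-1}((1 + |xi|^2)^(s/2) F u) on a 2*pi-periodic grid,
    where the discrete frequencies are the integers."""
    n = u.size
    xi = np.fft.fftfreq(n, d=1.0 / n)  # integer frequencies 0, 1, ..., -1
    return np.real(np.fft.ifft((1.0 + xi**2) ** (s / 2.0) * np.fft.fft(u)))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(3.0 * x)
v = bessel_potential(u, 1.0)  # a pure mode is scaled by (1 + 3^2)^(1/2)
```

In particular $J^0$ is the identity and, for $u=\sin(3x)$, one gets $J^1u=\sqrt{10}\,\sin(3x)$, matching the definition above mode by mode.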
We also denote by $U(t)=e^{-t\partial_x^3}$ the unitary group associated to the linear part of , *i.e.*, $$U(t)u_0=e^{-t\partial_x^3}u_0=\mathcal{F}_x^{-1}\big(e^{it\xi^3}\mathcal{F}_x(u_0)(\xi) \big) \, .$$
Throughout the paper, we fix a smooth cutoff function $\chi$ such that $$\chi \in C_0^{\infty}(\mathbb R), \quad 0 \le \chi \le 1, \quad
\chi_{|_{[-1,1]}}=1 \quad \mbox{and} \quad \mbox{supp}(\chi)
\subset [-2,2].$$ We set $ \phi(\xi):=\chi(\xi)-\chi(2\xi) $. For $l \in \mathbb Z$, we define $$\phi_{2^l}(\xi):=\phi(2^{-l}\xi),$$ and, for $ l\in \mathbb N^* $, $$\psi_{2^{l}}(\xi,\tau)=\phi_{2^{l}}(\tau-\xi^3).$$ By convention, we also denote $$\phi_0(\xi)=\chi(2\xi) \quad \text{and} \quad \psi_{0}(\xi,\tau):=\chi(2(\tau-\xi^3)) \, .$$ Any summations over capitalized variables such as $N, \, L$, $K$ or $M$ are presumed to be dyadic. Unless stated otherwise, we will work with non-homogeneous dyadic decompositions in $N$, $L$ and $K$, *i.e.* these variables range over numbers of the form $\mathbb D_{nh}=\{2^k
: k \in \mathbb N \} \cup \{0\}$, whereas we will work with homogeneous dyadic decomposition in $M$, *i.e.* these variables range over $\mathbb D$ . We call the numbers in $\mathbb D_{nh}$ *nonhomogeneous dyadic numbers*. Then, we have that $\displaystyle{\sum_{N}\phi_N(\xi)=1}$, $$\mbox{supp} \, (\phi_N) \subset
I_N:=\{\frac{N}{2}\le |\xi| \le 2N\}, \ N \ge 1, \quad \text{and} \quad
\mbox{supp} \, (\phi_0) \subset I_0:=\{|\xi| \le 1\}.$$
Finally, let us define the Littlewood-Paley multipliers $P_N$, $R_K$ and $Q_L$ by $$P_Nu=\mathcal{F}^{-1}_x\big(\phi_N\mathcal{F}_xu\big), \quad R_Ku=\mathcal{F}^{-1}_t\big(\phi_K\mathcal{F}_tu\big) \quad \text{and} \quad
Q_Lu=\mathcal{F}^{-1}\big(\psi_L\mathcal{F}u\big),$$ $P_{\ge N}:=\sum_{K \ge N} P_{K}$, $P_{\le N}:=\sum_{K \le N} P_{K}$, $Q_{\ge L}:=\sum_{K \ge L} Q_{K}$ and $Q_{\le L}:=\sum_{K \le L} Q_{K}$.
Sometimes, for the sake of simplicity and when there is no risk of confusion, we also denote $u_N=P_Nu$.
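The telescoping identity $\sum_N \phi_N(\xi) = 1$ can be checked numerically; in the sketch below (an illustration only, with a $C^1$ bump standing in for the smooth cutoff $\chi$), the partial sum over $N \le 2^{9}$ collapses to $\chi(\xi/2^{9})$ and hence equals $1$ on $|\xi|\le 2^{9}$:

```python
import numpy as np

def chi(xi):
    """C^1 stand-in for the cutoff: equals 1 on [-1,1], vanishes off [-2,2]."""
    a = np.abs(xi)
    return np.where(a <= 1.0, 1.0,
           np.where(a >= 2.0, 0.0, np.cos(0.5 * np.pi * (a - 1.0)) ** 2))

def phi(N, xi):
    """phi_N(xi) = chi(xi/N) - chi(2 xi/N) for dyadic N >= 1."""
    return chi(xi / N) - chi(2.0 * xi / N)

xi = np.linspace(-50.0, 50.0, 2001)
partial_sum = chi(2.0 * xi) + sum(phi(2.0**l, xi) for l in range(10))
# The sum telescopes to chi(xi / 2^9), so it equals 1 for |xi| <= 2^9.
```

The same telescoping argument is what makes the nonhomogeneous decomposition $\sum_N \phi_N = 1$ exact on all of $\mathbb R$.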
Function spaces {#spaces}
---------------
For $1 \le p \le \infty$, $L^p(\mathbb R)$ is the usual Lebesgue space with the norm $\|\cdot\|_{L^p}$. For $s \in \mathbb R$, the Sobolev space $H^s(\mathbb R)$ denotes the space of all distributions of $\mathcal{S}'(\mathbb R)$ whose usual norm $\|u\|_{H^s}=\|J^s_xu\|_{L^2}$ is finite.
If $B$ is one of the spaces defined above, $1 \le p \le \infty$ and $T>0$, we define the space-time spaces $L^p_ t B_x$, $L^p_TB_x$, $ \widetilde{L^p_t} B_x $ and $\widetilde{L^p_T}B_x$ equipped with the norms $$\|u\|_{L^p_ t B_x} =\Big(\int_{\R}\|u(\cdot,t)\|_{B}^pdt\Big)^{\frac1p} , \quad
\|u\|_{L^p_ T B_x} =\Big(\int_0^T\|u(\cdot,t)\|_{B}^pdt\Big)^{\frac1p}$$ with obvious modifications for $ p=\infty $, and $$\|u\|_{\widetilde{L^p_ t }B_x} =\Big(\sum_{N}
\| P_N u \|_{L^p_ t B_x}^2\Big)^{\frac12}, \quad \|u\|_{\widetilde{L^p_ T }B_x} =\Big(\sum_{N}
\| P_N u \|_{L^p_ T B_x}^2\Big)^{\frac12} \, .$$
For $s$, $b \in \mathbb R$, we introduce the Bourgain spaces $X^{s,b}$ related to the linear part of as the completion of the Schwartz space $\mathcal{S}(\mathbb R^2)$ under the norm $$\label{X1}
\|u\|_{X^{s,b}} := \left(
\int_{\mathbb{R}^2}\langle\tau-\xi^3\rangle^{2b}\langle \xi\rangle^{2s}|\mathcal{F}(u)(\xi, \tau)|^2
d\xi d\tau \right)^{\frac12},$$ where $\langle x\rangle:=1+|x|$. By using the definition of $U$, it is easy to see that $$\label{X2}
\|u\|_{X^{s,b}}\sim\| U(-t)u \|_{H^{s,b}_{x,t}} \quad \text{where} \quad \|u\|_{H^{s,b}_{x,t}}=\|J^s_xJ^b_tu\|_{L^2_{x,t}} \, .$$
We will also use restriction in time versions of these spaces. Let $T>0$ be a positive time. The restriction space $ X^{s,b}_T $ will be the space of functions $u: \mathbb R \times
]0,T[\rightarrow \mathbb R$ or $\mathbb C$ satisfying $$\|u\|_{X^{s,b}_{T}}:=\inf \{\|\tilde{u}\|_{X^{s,b}} \ | \ \tilde{u}: \mathbb R
\times \mathbb R \rightarrow \mathbb R \ \text{or} \ \mathbb C, \ \tilde{u}|_{\mathbb R
\times ]0,T[} = u\}<\infty \; .$$
Finally, we define our resolution spaces $Y^s=X^{s-1,1} \cap \widetilde{L^{\infty}_t}H^s_x $, $Y^s_T=X^{s-1,1}_T \cap \widetilde{L^{\infty}_T}H^s_x$ and $Z^s_T=Y^s_T \cap L^4_TL^{\infty}_x$ with the associated norms $$\label{YT}
\|u\|_{Y^s}=\|u\|_{X^{s-1,1}}+\|u\|_{\widetilde{L^{\infty}_t}H^s_x}, \quad \|u\|_{Y^s_T}=\|u\|_{X^{s-1,1}_T}+\|u\|_{\widetilde{L^{\infty}_T}H^s_x}$$ and $$\label{ZT}
\|u\|_{Z^s_T}= \|u\|_{Y^s_T}+\|u\|_{L^4_TL^{\infty}_x} \, .$$ It is clear from the definition that $\widetilde{L^{\infty}_T}H^s_x \hookrightarrow L^{\infty}_TH^s_x$, *i.e.* $$\label{tildenorm}
\|u\|_{L^{\infty}_TH^s_x} \lesssim \|u\|_{\widetilde{L^{\infty}_T}H^s_x}, \quad \forall \, u \in \widetilde{L^{\infty}_T}H^s_x \, .$$
Extension operator
------------------
In this subsection, we introduce an extension operator $\rho_T$ which is a bounded operator from $X^{s-1,1}_T\cap L^{\infty}_TH^s_x$ into $X^{s-1,1} \cap L^{\infty}_tH^s_x\cap L^2_tH^s_x$ for any $s \in \mathbb R$.
\[def.extension\] Let $0<T\le 1$ and $u:\mathbb R \times [0,T] \rightarrow \mathbb R$ or $\mathbb C$ be given. Let us first define $$\label{def.extension.1}
v(x,t)=U(-t)u(\cdot,t)(x), \quad \forall \, (x,t) \in \mathbb R \times [0,T] \, .$$ Then, we extend $v$ on $[-2,2]$ by setting $\partial_tv=0$ on $[-2,2] \setminus [0,T]$. We define the extension operator $\rho_T$ by $$\label{def.extension.2}
\rho_T(u)(x,t) := \chi(t)U(t)v(\cdot,t)(x), \quad \forall \, (x,t) \in \mathbb R^2 \, ,$$ which extends functions defined on $\mathbb R\times [0,T]$ to $\mathbb R^2$.
It is clear from the definition that $\rho_T(u)(x,t)=u(x,t)$ for $(x,t) \in \mathbb R \times [0,T]$, $\rho_T(P_Nu)=P_N\big(\rho_T(u)\big)$ and $\text{supp} \, \rho_T(u) \subset [-2,2]$.
\[extension\] Let $0<T \le 1$ and $s \in \mathbb R$. Then, $$\rho_T:X^{s-1,1}_T \cap L^{\infty}_TH^s_x \longrightarrow X^{s-1,1} \cap L^{\infty}_tH^{s}_x, \quad u \mapsto \rho_T(u)$$ is a bounded linear operator, *i.e.* there exists $C_1>0$ independent of $T \in (0,1]$ such that $$\label{extension.1}
\|\rho_T(u)\|_{X^{s-1,1}}+\|\rho_T(u)\|_{L^{\infty}_tH^s_x}\le C_1 \big(\|u\|_{X^{s-1,1}_T}+\|u\|_{L^{\infty}_TH^s_x} \big) \, ,$$ for all $u \in X^{s-1,1}_T \cap L^{\infty}_TH^{s}_x$.
First, it is clear from the construction of $\rho_T(u)$ that $$\label{extension.2}
\|\rho_T(u)\|_{L^{\infty}_tH^s_x} \lesssim \|u\|_{L^{\infty}_TH^s_x} \ \, ,$$ since $U$ is a unitary group in $H^s$.
Next, we explain how to bound $\|\rho_T(u)\|_{X^{s-1,1}}$. From the definition of $X^{s-1,1}_T$, there exists an extension $\tilde{u}$ of $u$ on $\mathbb R^2$ such that $$\label{extension.3}
\|\tilde{u}\|_{X^{s-1,1}} \le 2\|u\|_{X^{s-1,1}_T} \, .$$ Now, by using , we have that $$\label{extension.4}
\|\rho_T(u)\|_{X^{s-1,1}}=\|\chi J_x^{s-1}v\|_{H^1_tL^2_x} \lesssim \|J^{s-1}_xv\|_{L^2_{[-2,2]}L^2_x}+\|J^{s-1}_x\partial_tv\|_{L^2_{[-2,2]}L^2_x} \, .$$ Since $\tilde{u}=u$ on $[0,T]$, we deduce from the definition of $v$ that $$\label{extension.5}
\begin{split}
\|J^{s-1}_xv\|_{L^2_{[-2,2]}L^2_x} &\lesssim \|J^{s-1}_xu(\cdot,0)\|_{L^2_{[-2,0]}L^2_x}+
\|J^{s-1}_xU(-t)\tilde{u}(\cdot,t)\|_{L^2_{[0,T]}L^2_x} \\ & \quad+
\|J^{s-1}_xu(\cdot,T)\|_{L^2_{[T,2]}L^2_x} \, ,
\end{split}$$ and $$\label{extension.6}
\|J^{s-1}_x\partial_tv\|_{L^2_{[-2,2]}L^2_x} =
\|J^{s-1}_x\partial_tU(-t)\tilde{u}(\cdot,t)\|_{L^2_{[0,T]}L^2_x} \, .$$ It then follows by gathering – and using that $$\label{extension.7}
\|\rho_T(u)\|_{X^{s-1,1}} \lesssim \|u\|_{L^{\infty}_TH^s_x}+\|\tilde{u}\|_{X^{s-1,1}}\, ,$$ which implies in view of .
Refined Strichartz estimates
----------------------------
First, we recall the Strichartz estimate associated with the unitary Airy group derived in [@KPV1], which states that $$\label{strichartz}
\|e^{-t\partial^3_x}D_x^{\frac14}u_0 \|_{L^4_tL^{\infty}_x} \lesssim \|u_0\|_{L^2} \ ,$$ for all $u_0 \in L^2(\mathbb R)$.
Following the arguments in [@KeKo] and [@KoTz], we derive a refined Strichartz estimate for the solutions of the linear problem $$\label{linearKdV}
\partial_tu+\partial_x^3u=F \, .$$
\[refinedStrichartz\] Assume that $T>0$ and $\delta \ge 0$. Let $u$ be a smooth solution to defined on the time interval $[0,T]$. Then, $$\label{refinedStrichartz1}
\|u\|_{L^4_TL^{\infty}_x} \lesssim \|J_x^{\frac{\delta-1}4+\theta}u\|_{L^{\infty}_TL^2_x}+\|J_x^{-\frac{3\delta+1}4+\theta}F\|_{L^4_TL^2_x} \ ,$$ for any $\theta>0$.
Let $u$ be a solution to defined on a time interval $[0,T]$. We use a nonhomogeneous Littlewood-Paley decomposition, $u=\sum_Nu_N$, where $u_N=P_Nu$ and $N$ is a nonhomogeneous dyadic number, and we also denote $F_N=P_NF$. Then, we get from the Minkowski inequality that $$\|u\|_{L^4_TL^{\infty}_x}\le \sum_N\|u_N\|_{L^4_TL^{\infty}_x}
\lesssim \sup_{N}N^{\theta}\|u_N\|_{L^4_TL^{\infty}_x} \, ,$$ for any $\theta>0$. Recall that $P_0$ corresponds to the projection in low frequencies, so that we set $0^{\theta}=1$ by convention. Therefore, it is enough to prove that $$\label{refinedStrichartz2}
\|u_N\|_{L^4_TL^{\infty}_x} \lesssim \|D_x^{\frac{\delta-1}4}u_N\|_{L^{\infty}_TL^2_x}+\|D_x^{-\frac{3\delta+1}4}F_N\|_{L^4_TL^2_x} \, ,$$ for any $\delta \ge 0$ and any dyadic number $N \in \{2^k : k \in \mathbb N\} \cup \{0\}$.
Let $\delta$ be a nonnegative number to be fixed later. We chop the interval $[0,T]$ into small intervals of length $N^{-\delta}$. In other words, we have that $[0,T]=\underset{j \in J}{\bigcup}I_j$ where $I_j=[a_j, b_j]$, $|I_j|\thicksim N^{-\delta}$ and $\# J\sim N^{\delta}$. Since $u_N$ is a solution to the integral equation $$u_N(t) =e^{-(t-a_j)\partial_x^3}u_N(a_j)+\int_{a_j}^te^{-(t-t')\partial_x^3}F_N(t')dt'$$ for $t \in I_j$, we deduce from that $$\begin{split}
\|u_N\|_{L^4_TL^{\infty}_x} &\lesssim \Big(\sum_j \|D^{-\frac14}_xu_N(a_j)\|_{L^2_x}^4 \Big)^{\frac14}+
\Big(\sum_j \big(\int_{I_j}\|D^{-\frac14}_xF_N(t')\|_{L^2_x}dt'\big)^4 \Big)^{\frac14} \\
& \lesssim N^{\frac{\delta}4}\|D^{-\frac14}_xu_N\|_{L^{\infty}_TL^2_x}
+\Big(\sum_j |I_j|^3\int_{I_j}\|D^{-\frac14}_xF_N(t')\|_{L^2_x}^4dt' \Big)^{\frac14} \\
& \lesssim \|D^{\frac{\delta-1}4}_xu_N\|_{L^{\infty}_TL^2_x}+\|D^{-\frac{3\delta+1}4}_xF_N\|_{L^4_TL^2_x} \, ,
\end{split}$$ which concludes the proof of .
$L^2$ multilinear estimates {#Secmultest}
========================
In this section we follow some notations of [@Tao]. For $k \in \mathbb Z_+$ and $\xi \in \mathbb R$, let $\Gamma^k(\xi)$ denote the $k$-dimensional affine hyperplane of $\mathbb R^{k+1}$ defined by $$\Gamma^k(\xi)=\big\{ (\xi_1,\cdots,\xi_{k+1}) \in \mathbb R^{k+1} : \ \xi_1+\cdots+\xi_{k+1}=\xi\big\} \, ,$$ and endowed with the obvious measure $$\int_{\Gamma^k(\xi)}F = \int_{\Gamma^k(\xi)}F(\xi_1,\cdots,\xi_{k+1}) :=
\int_{\mathbb R^k} F\big(\xi_1,\cdots,\xi_k,\xi-(\xi_1+\cdots+\xi_k)\big)d\xi_1\cdots d\xi_k \, ,$$ for any function $F: \Gamma^k(\xi) \rightarrow \mathbb C$. When $\xi=0$, we simply denote $\Gamma^k=\Gamma^k(0)$ with the obvious modifications.
Moreover, given $T>0$, we also define $\mathbb R_T=\mathbb R \times [0,T]$ and $\Gamma^k_T=\Gamma^k \times [0,T]$ with the obvious measures $$\int_{\mathbb R_T} u := \int_{\mathbb R \times [0,T]}u(x,t)dxdt$$ and $$\int_{\Gamma^k_T} F :=\int_{\mathbb R^k\times [0,T]} F\big(\xi_1,\cdots,\xi_k,\xi-(\xi_1+\cdots+\xi_k),t\big)d\xi_1\cdots d\xi_k dt \, .$$
3-linear estimates
------------------
\[prod4-est\] Let $f_j\in L^2(\mathbb R)$, $j=1,...,4$ and $M \in \mathbb D$. Then it holds that $$\label{prod4-est.1}
\int_{\Gamma^3} \phi_M(\xi_1+\xi_2) \prod_{j=1}^4|f_j(\xi_j)| \lesssim M \prod_{j=1}^4 \|f_j\|_{L^2} \, .$$
Let us denote by $\mathcal{I}^3_M(f_1,f_2,f_3,f_4)$ the integral on the left-hand side of . We can assume without loss of generality that $f_i \ge 0$ for $i=1,\cdots,4$. Then, we have that $$\label{prod4-est.2}
\mathcal{I}^3_M(f_1,f_2,f_3,f_4) \le \mathcal{J}_M(f_1,f_2) \, \times\sup_{\xi_1,\xi_2} \int_{\mathbb R} f_3(\xi_3)f_4(-(\xi_1+\xi_2+\xi_3))d\xi_3 \, ,$$ where $$\label{prod4-est.3}
\mathcal{J}_M(f_1,f_2)=\int_{\mathbb R^2}\phi_M(\xi_1+\xi_2)f_1(\xi_1)f_2(\xi_2)d\xi_1d\xi_2 \, .$$ Hölder’s inequality yields $$\label{prod4-est.4}
\mathcal{J}_M(f_1,f_2)=\int_{\mathbb R}\phi_M(\xi_1)(f_1 \ast f_2) (\xi_1)d\xi_1 \lesssim M\|f_1 \ast f_2\|_{L^{\infty}} \lesssim M\|f_1\|_{L^2} \|f_2\|_{L^2} \, .$$ Moreover, the Cauchy-Schwarz inequality yields $$\label{prod4-est.5}
\int_{\mathbb R} f_3(\xi_3)f_4(-(\xi_1+\xi_2+\xi_3))d\xi_3 \le \|f_3\|_{L^2} \|f_4\|_{L^2} \, .$$ Therefore, estimate follows from –.
For a fixed $N\ge 1$ dyadic, we introduce the following disjoint subsets of $\mathbb D^3$: $$\begin{aligned}
\mathcal{M}_3^{low} &= \big\{(M_1,M_2,M_3)\in \D^3 \, : M_{min} \le N^{-\frac12} \textrm{ and } M_{med}\le 2^{-9}N\big\} \, ,\\
\mathcal{M}_3^{med} &= \big\{(M_1,M_2,M_3)\in\D^3 \, : \, N^{-\frac12} < M_{min} \le M_{med} \le 2^{-9}N\big\} \, ,\\
\mathcal{M}_3^{high,1} &= \big\{(M_1,M_2,M_3)\in\D^3 \, : \, M_{min} \le N^{-1} \textrm{ and } 2^{-9}N < M_{med} \le M_{max} \big\} \, , \\
\mathcal{M}_3^{high,2} &= \big\{(M_1,M_2,M_3)\in\D^3 \, : \, N^{-1} < M_{min} \le 1 \textrm{ and } 2^{-9}N < M_{med} \le M_{max} \big\} \, , \\
\mathcal{M}_3^{high,3} &= \big\{(M_1,M_2,M_3)\in\D^3 \, : \, 1 < M_{min} \textrm{ and } 2^{-9}N < M_{med} \le M_{max} \big\} \, ,\end{aligned}$$ where $M_{min} \le M_{med} \le M_{max}$ denote respectively the minimum, sub-maximum and maximum of $\{ M_1,M_2,M_3\}$. Moreover, we also denote $$\mathcal{M}_3^{high}=\mathcal{M}_3^{high,1} \cup \mathcal{M}_3^{high,2} \cup \mathcal{M}_3^{high,3} \, .$$
We will denote by $\phi_{M_1,M_2,M_3}$ the function $$\phi_{M_1,M_2,M_3}(\xi_1,\xi_2,\xi_3) = \phi_{M_1}(\xi_2+\xi_3) \phi_{M_2}(\xi_1+\xi_3) \phi_{M_3}(\xi_1+\xi_2).$$
Next, we state a useful technical lemma.
\[teclemma\] Let $(\xi_1,\xi_2,\xi_3) \in \mathbb R^3$ satisfying $|\xi_j| \sim N_j$ for $j=1,2,3$ and $|\xi_1+\xi_2+\xi_3| \sim N$. Let $(M_1,M_2,M_3) \in \mathcal{M}_3^{low} \cup \mathcal{M}_3^{med}$. Then it holds that $$N_1 \sim N_2 \sim N_3\sim M_{max} \sim N \quad \text{if} \quad (\xi_1,\xi_2,\xi_3) \in \text{supp} \, \phi_{M_1,M_2,M_3} \, ,$$ where $M_{max}$ denotes the maximum of $\{M_1,M_2,M_3\}$.
Without loss of generality, we can assume that $M_1 \le M_2 \le M_3$. Let $(\xi_1,\xi_2,\xi_3) \in \text{supp} \, \phi_{M_1,M_2,M_3}$. Then, we have $|\xi_2+\xi_3| \ll N$ and $|\xi_1+\xi_3| \ll N$, so that $N_1 \sim N_2 \sim N$ since $|\xi_1+\xi_2+\xi_3| \sim N$.
On one hand $N_3 \ll N$ would imply that $M_1 \sim M_2 \sim N$ which is a contradiction. On the other hand, $N_3 \gg N$ would imply that $|\xi_1+\xi_2+\xi_3| \gg N$ which is also a contradiction. Therefore, we must have $N_3 \sim N$.
Finally, $M_1 \ll N$ implies that $\xi_2 \cdot \xi_3 <0$ and $M_2 \ll N$ implies $\xi_1 \cdot \xi_3<0$. Thus, $\xi_1 \cdot \xi_2>0$, so that $M_3 \sim N$.
For $\eta\in L^\infty$, let us define the trilinear pseudo-product operator $\Pi^3_{\eta,M_1,M_2,M_3}$ in Fourier variables by $$\label{def.pseudoproduct3}
\mathcal{F}_x\big(\Pi^3_{\eta,M_1,M_2,M_3}(u_1,u_2,u_3) \big)(\xi)=\int_{\Gamma^2(\xi)}(\eta\phi_{M_1,M_2,M_3})(\xi_1,\xi_2,\xi_3)\prod_{j=1}^3\widehat{u}_j(\xi_j) \, .$$ It is worth noticing that when the functions $u_j$ are real-valued, the Plancherel identity yields $$\label{def.pseudoproduct3b}
\int_{\mathbb R} \Pi^3_{\eta,M_1,M_2,M_3}(u_1,u_2,u_3) \, u_4 \, dx=\int_{\Gamma^3} \big(\eta \phi_{M_1,M_2,M_3}\big)(\xi_1,\xi_2,\xi_3) \prod_{j=1}^4 \widehat{u}_j(\xi_j) \, .$$
Finally, we define the resonance function of order $3$ by $$\begin{aligned}
\Omega^3(\xi_1,\xi_2,\xi_3) &= \xi_1^3+\xi_2^3+\xi_3^3-(\xi_1+\xi_2+\xi_3)^3 \notag\\
&= -3(\xi_1+\xi_2)(\xi_1+\xi_3)(\xi_2+\xi_3) \, .\label{res3}\end{aligned}$$
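For the reader's convenience, we record the elementary computation behind this factorization. Setting $s=\xi_1+\xi_2$ and expanding $(s+\xi_3)^3$, one gets $$\begin{aligned}
(\xi_1+\xi_2+\xi_3)^3-\xi_1^3-\xi_2^3-\xi_3^3
&= \big(s^3-\xi_1^3-\xi_2^3\big) + 3s\xi_3(s+\xi_3) \\
&= 3\xi_1\xi_2 \, s + 3s\xi_3(s+\xi_3) \\
&= 3s\big(\xi_1\xi_2+\xi_1\xi_3+\xi_2\xi_3+\xi_3^2\big) \\
&= 3(\xi_1+\xi_2)(\xi_1+\xi_3)(\xi_2+\xi_3) \, ,
\end{aligned}$$ which is exactly $-\Omega^3(\xi_1,\xi_2,\xi_3)$.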
The following proposition gives suitable estimates for the pseudo-product $\Pi^3_{M_1,M_2,M_3}$ when $(M_1,M_2,M_3) \in \mathcal{M}_3^{high}$.
\[L2trilin\] Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_i$ are real-valued functions in $Y^0=X^{-1,1} \cap L^{\infty}_tL^2_x$ with time support in $[0,2]$ and spatial Fourier support in $I_{N_i}$ for $i=1,\cdots,4$. Here, $N_i$ denote nonhomogeneous dyadic numbers. Assume also that $N_{max} \ge N_4=N \gg 1$, and $(M_1,M_2,M_3)\in \mathcal{M}_3^{high}$. Then $$\label{L2trilin.2}
\Big| \int_{\R \times [0,T]}\Pi^3_{\eta,M_1,M_2,M_3}(u_1,u_2,u_3) \, u_4 \, dxdt \Big| \lesssim N_{max}^{-1} \prod_{i=1}^4\|u_i\|_{Y^0} \, .$$ Moreover, the implicit constant in estimate \eqref{L2trilin.2} only depends on the $L^{\infty}$-norm of the function $\eta$.
Before giving the proof of Proposition \[L2trilin\], we derive some important technical lemmas.
\[QL\] Let $L$ be a nonhomogeneous dyadic number. Then the operator $Q_{\le L}$ is bounded in $L^{\infty}_tL^2_x$ uniformly in $L$. In other words, $$\label{QL.1}
\|Q_{\le L}u\|_{L^{\infty}_tL^2_x} \lesssim \|u\|_{L^{\infty}_tL^2_x} \, ,$$ for all $u \in L^{\infty}_tL^2_x$ and the implicit constant appearing in \eqref{QL.1} does not depend on $L$.
A direct computation shows that $$\label{QL.2}
Q_{\le L}u=e^{itw(D)}R_{\le L}e^{-itw(D)}u \, .$$ Since $e^{itw(D)}$ is a unitary group in $L^2$, it follows from \eqref{QL.2} and the Minkowski and Hölder inequalities that $$\begin{split}
\|Q_{\le L}u(\cdot,t)\|_{L^2_x}&=\|R_{\le L}e^{-itw(D)}u(\cdot,t)\|_{L^2_x} \\ &\le \int_{\mathbb R} \big|(\chi_L)^{\vee}(t_1)\big|\|e^{-i(t-t_1)w(D)}u(\cdot,t-t_1)\|_{L^2_x}dt_1 \\ & \le \|(\chi)^{\vee}\|_{L^1}\|u\|_{L^{\infty}_tL^2_x} \, ,
\end{split}$$ which implies estimate \eqref{QL.1}.
For any $0<T \le 1$, let us denote by $1_T$ the characteristic function of the interval $[0,T]$. One of the main difficulties in the proof of Proposition \[L2trilin\] is that the operator of multiplication by $1_T$ does not commute with $Q_L$. To handle this situation, we follow the arguments introduced in [@MoVe] and use the decomposition $$\label{1T}
1_T=1_{T,R}^{low}+1_{T,R}^{high}, \quad \text{with} \quad \mathcal{F}_t\big(1_{T,R}^{low} \big)(\tau)=\chi(\tau/R)\mathcal{F}_t\big(1_{T} \big)(\tau) \, ,$$ for some $R>0$ to be fixed later. The following lemmas were derived in [@MoVe]. For the sake of completeness, we will give their proof here.
\[ihigh-lem\] For any $ R>0 $ and $ T>0 $ it holds $$\label{high}
\|1_{T,R}^{high}\|_{L^1}\lesssim T\wedge R^{-1} \, ,$$ and $$\label{high2}
\| 1_{T,R}^{low}\|_{L^\infty}\lesssim 1 \; .$$
It follows from the definition of $1_{T,R}^{high}$ in \eqref{1T} that $$\begin{aligned}
\|1_{T,R}^{high}\|_{L^1} &= \int_{\mathbb R} \left|\int_{\mathbb R} (1_T(t)-1_T(t-s/R))\mathcal{F}_t^{-1}(\chi)(s)ds\right|dt\\
&\le \int_{\mathbb R}\int_{([0,T]\setminus [s/R,T+s/R])\cup ([s/R,T+s/R]\setminus [0,T])}|\mathcal{F}_t^{-1}(\chi)(s)|dtds\\
&\lesssim \int_{\mathbb R} (T\wedge |s|/R) |\mathcal{F}_t^{-1}(\chi)(s)|ds\\
&\lesssim T\wedge R^{-1} \, .\end{aligned}$$ Finally, it is easy to check that $ \| 1_{T,R}^{low}\|_{L^\infty}\lesssim \|\widehat{\chi}\|_{L^1} \|1_T\|_{L^\infty} \lesssim 1$.
\[ilow-lem\] Assume that $T>0$, $R>0$ and $ L \gg R $. Then, it holds $$\label{ihigh-lem.1}
\|Q_L (1_{T,R}^{low}u)\|_{L^2}\lesssim \|Q_{\sim L} u\|_{L^2} \, ,$$ for all $u \in L^2(\mathbb R^2)$.
By Plancherel we get $$\begin{aligned}
\notag
A_{L} &=\|Q_L(1_{T,R}^{low}u)\|_{L^2} \notag\\
&=\|\phi_L(\tau-\omega(\xi))\mathcal{F}_t(1_{T,R}^{low})\ast_\tau \mathcal{F}(u)(\tau,\xi)\|_{L^2} \notag\\
&= \left\|\sum_{L_1}\phi_L(\tau-\omega(\xi))\int_\R \phi_{L_1}(\tau'-\omega(\xi))\mathcal{F}(u)(\tau',\xi)\chi((\tau-\tau')/R)\frac{e^{-iT(\tau-\tau')}-1}{\tau-\tau'}d\tau'\right\|_{L^2}.\end{aligned}$$ In the region where $L_1\ll L$ or $L_1\gg L$, we have $|\tau-\tau'|\sim L\vee L_1\gg R$, thus $A_L$ vanishes. On the other hand, for $L\sim L_1$, we get $$A_L \lesssim \sum_{L_1\sim L} \|Q_L(1_{T,R}^{low}Q_{L_1}u)\|_{L^2}
\lesssim \|Q_{\sim L}u\|_{L^2}.$$
Given $u_i$, $1 \le i \le 4$, satisfying the hypotheses of Proposition \[L2trilin\], let $G_{M_1,M_2,M_3}^3=G_{M_1,M_2,M_3}^3(u_1,u_2,u_3,u_4)$ denote the left-hand side of \eqref{L2trilin.2}. We use the decomposition \eqref{1T} and obtain that $$\label{L2trilin.4}
G_{M_1,M_2,M_3}^3=G_{M_1,M_2,M_3,R}^{3,low}+G_{M_1,M_2,M_3,R}^{3,high} \, ,$$ where $$G_{M_1,M_2,M_3,R}^{3,low}=\int_{\mathbb R^2}1^{low}_{T,R}\,\Pi_{\eta,M_1,M_2,M_3}^3(u_1,u_2,u_3) u_4 \, dxdt$$ and $$G_{M_1,M_2,M_3,R}^{3,high}=\int_{\mathbb R^2}1^{high}_{T,R}\,\Pi_{\eta,M_1,M_2,M_3}^3(u_1,u_2,u_3)u_4 \, dxdt \, .$$
We deduce from Hölder’s inequality in time, \eqref{high}, \eqref{def.pseudoproduct3b} and \eqref{prod4-est.1} that $$\begin{split}
\big|G_{M_1,M_2,M_3,R}^{3,high}\big| & \le \|1_{T,R}^{high}\|_{L^1}\big\|\int_{\mathbb R}\Pi_{\eta,M_1,M_2,M_3}^3(u_1,u_2,u_3)u_4 \, dx\big\|_{L^{\infty}_t}
\\ & \lesssim R^{-1}M_{min}\prod_{i=1}^4\|u_i\|_{L^{\infty}_tL^2_x} \, ,
\end{split}$$ which implies that $$\label{L2trilin.5}
\big|G_{M_1,M_2,M_3,R}^{3,high}\big| \lesssim N_{max}^{-1}\prod_{i=1}^4\|u_i\|_{L^{\infty}_tL^2_x}$$ if we choose $R=M_{min}N_{max}$.
To deal with the term $G_{M_1,M_2,M_3,R}^{3,low}$, we decompose with respect to the modulation variables. Thus, $$G_{M_1,M_2,M_3,R}^{3,low}=\sum_{L_1,L_2,L_3,L}\int_{\mathbb R^2}\Pi_{\eta,M_1,M_2,M_3}^3(Q_{L_1}(1^{low}_{T,R}u_1),Q_{L_2}u_2,Q_{L_3}u_3)Q_{L}u_4 \, dxdt \, .$$ Moreover, observe from the resonance relation \eqref{res3} and the hypothesis $(M_1,M_2,M_3) \in \mathcal{M}_3^{high}$ that $$\label{L2trilin5b}
L_{max} \gtrsim M_{min}N_{max}^2 \, .$$ Indeed, in the case where $N_{max} \sim N$, \eqref{L2trilin5b} is clear from the definition of $\mathcal{M}_3^{high}$. In the case where $N_{max} \gg N$, then we have $N_{max} \sim N_{med} \gg N_{min}$ since $|\xi_1+\xi_2+\xi_3| \sim N$. This implies that $M_{max} \sim M_{med} \gtrsim N_{max}$.
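Recall that, after decomposing in modulation variables, one always has $L_{max}=\max(L_1,L_2,L_3,L)\gtrsim |\Omega^3|$ on the integration domain, since the time frequencies sum to zero there. Combined with the factorization \eqref{res3}, this gives, on the support of $\phi_{M_1,M_2,M_3}$, $$L_{max} \gtrsim \big|\Omega^3(\xi_1,\xi_2,\xi_3)\big| = 3|\xi_1+\xi_2| \, |\xi_1+\xi_3| \, |\xi_2+\xi_3| \sim M_1M_2M_3 \, .$$ In particular, for $(M_1,M_2,M_3)\in\mathcal{M}_3^{high}$ with $N_{max}\sim N$, this yields $L_{max}\gtrsim M_{min}(2^{-9}N)^2\sim M_{min}N_{max}^2$.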
In particular, \eqref{L2trilin5b} implies that $L_{max} \gg R=M_{min}N_{max}$, since $N_{max} \gg 1$.
In the case where $L_{max}=L_1$, we deduce from \eqref{def.pseudoproduct3b}, \eqref{prod4-est.1}, \eqref{QL.1} and \eqref{ihigh-lem.1} that $$\begin{split}
\big| G_{M_1,M_2,M_3,R}^{3,low} \big| &\lesssim \sum_{L_1 \gtrsim M_{min}N_{max}^2}M_{min}L_1^{-1}L_1\|Q_{L_1}(1^{low}_{T,R}u_1)\|_{L^2_{x,t}}\prod_{i=2}^4\|Q_{\le L_1}u_i\|_{L^{\infty}_tL^2_x} \\ & \lesssim N_{max}^{-1}\|u_1\|_{X^{-1,1}}\prod_{i=2}^{4}\|u_i\|_{L^{\infty}_tL^2_x} \, ,
\end{split}$$ which implies that $$\label{L2trilin.6}
\big| G_{M_1,M_2,M_3,R}^{3,low} \big| \lesssim N_{max}^{-1}\prod_{i=1}^4\|u_i\|_{Y^0} \, .$$
We can prove arguing similarly that \eqref{L2trilin.6} still holds true in all the other cases, *i.e.* $L_{max}=L_2, \ L_3$ or $L$. Note that for those cases we do not have to use \eqref{ihigh-lem.1} but only need \eqref{QL.1}. Therefore, we conclude the proof of estimate \eqref{L2trilin.2} by gathering \eqref{L2trilin.4}, \eqref{L2trilin.5} and \eqref{L2trilin.6}.
5-linear estimates
------------------
\[prod6-est\] Let $f_j\in L^2(\R)$, $j=1,...,6$ and $M_1,M_4 \in \mathbb D$. Then it holds that $$\label{prod6.est.1}
\int_{\Gamma^5} \phi_{M_1}(\xi_2+\xi_3)\phi_{M_4}(\xi_5+\xi_6) \prod_{j=1}^6|f_j(\xi_j)| \lesssim M_1M_4 \prod_{j=1}^6 \|f_j\|_{L^2}.$$ If moreover $f_j$ are localized in an annulus $\{|\xi|\sim N_j\}$ for $j=5,6$, then $$\label{prod6.est.2}
\int_{\Gamma^5} \phi_{M_1}(\xi_2+\xi_3)\phi_{M_4}(\xi_5+\xi_6) \prod_{i=1}^6|f_i(\xi_i)| \lesssim M_1M_4^{\frac12}N_5^{\frac14}N_6^{\frac14} \prod_{i=1}^6 \|f_i\|_{L^2} \, .$$
Let us denote by $\mathcal{I}^5=\mathcal{I}^5(f_1,\cdots,f_6)$ the integral on the left-hand side of \eqref{prod6.est.1}. We can assume without loss of generality that $f_j \ge 0$, $j=1,\cdots,6$. We have by using the notation in \eqref{prod4-est.3} that $$\label{prod6.est.3}
\mathcal{I}^5\le \mathcal{J}_{M_1}(f_2,f_3) \times \mathcal{J}_{M_4}(f_5,f_6) \times
\sup_{\xi_2,\xi_3,\xi_5,\xi_6} \int_{\mathbb R} f_1(\xi_1)f_4(-\sum_{\genfrac{}{}{0pt}{}{j=1}{ j\neq 4}}^6\xi_j) \, d\xi_1 \, .$$ Thus, estimate \eqref{prod6.est.1} follows applying \eqref{prod4-est.4} and the Cauchy-Schwarz inequality to \eqref{prod6.est.3}.
Assuming furthermore that $f_j$ are localized in an annulus $\{|\xi| \sim N_j\}$ for $j=5,6$, then we get arguing as above that $$\label{prod6.est.4}
\mathcal{I}^5\le M_1\times\mathcal{J}_{M_4}(f_5,f_6) \times
\prod_{j=1}^4\|f_j\|_{L^2}\, .$$ From the Cauchy-Schwarz inequality $$\mathcal{J}_{M_4}(f_5,f_6) \le \Big( \int_{\mathbb R}f_5(\xi_5)d\xi_5\Big) \times \Big( \int_{\mathbb R} f_6(\xi_6)d\xi_6\Big) \lesssim N_5^{\frac12}N_6^{\frac12}\|f_5\|_{L^2}\|f_6\|_{L^2} \, ,$$ which together with \eqref{prod6.est.4} yields $$\label{prod6.est.6}
\mathcal{I}^5\lesssim M_1N_5^{\frac12}N_6^{\frac12} \prod_{i=1}^6 \|f_i\|_{L^2} \, .$$ Therefore, we conclude the proof of \eqref{prod6.est.2} by interpolating \eqref{prod6.est.1} and \eqref{prod6.est.6}.
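Concretely, the interpolation at the last step is simply the geometric mean of the two bounds: multiplying the square roots of \eqref{prod6.est.1} and \eqref{prod6.est.6} gives $$\mathcal{I}^5 \lesssim \big(M_1M_4\big)^{\frac12}\big(M_1N_5^{\frac12}N_6^{\frac12}\big)^{\frac12}\prod_{i=1}^6\|f_i\|_{L^2} = M_1M_4^{\frac12}N_5^{\frac14}N_6^{\frac14}\prod_{i=1}^6\|f_i\|_{L^2} \, ,$$ which is the claimed estimate \eqref{prod6.est.2}.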
For a fixed $N\ge 1$ dyadic, we introduce the following subsets of $\D^6$: $$\begin{aligned}
\mathcal{M}_5^{low} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{med}, \ \\ &\quad \quad \quad M_{min(5)}\le 2^9 M_{med(3)}
\textrm{ and } \ M_{med(5)}\le 2^{-9}N \big\},\\
\mathcal{M}_5^{med} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{med} \ \text{and} \\
& \quad \quad \quad 2^9 M_{med(3)} <M_{min(5)} \le M_{med(5)} \le 2^{-9}N
\big\},\\
\mathcal{M}_5^{high,1} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{med},\\ & \quad \quad \quad M_{min(5)} \le N^{-1} \quad \text{and} \quad 2^{-9}N < M_{med(5)} \le M_{max(5)}\big\} \, ,
\\
\mathcal{M}_5^{high,2} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{med}, \\ & \quad \quad \quad N^{-1} < M_{min(5)}\quad \text{and} \quad 2^{-9}N < M_{med(5)} \le M_{max(5)}\big\} \, ,\end{aligned}$$ and $$\begin{aligned}
\widetilde{ \mathcal{M}}_5^{low} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{high,2}, \ M_{min(5)}\le 2^9 M_{min(3)}
\big\}, \nonumber \\
\widetilde{\mathcal{M}}_5^{med, 1} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{high,2}, \nonumber \\
& \quad \quad \quad 2^9 M_{min(3)} <M_{min(5)} \le M_{med(5)} \le 2^{-9}N \ \text{and} \ M_{min(5)} \le 2^9N^{\frac12}
\big\}, \label{prod6.est.6b} \\
\widetilde{\mathcal{M}}_5^{med, 2} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{high,2} \nonumber \\
& \quad \quad \quad 2^9 M_{min(3)} <M_{min(5)} \le M_{med(5)} \le 2^{-9}N \ \text{and} \ M_{min(5)} > 2^9N^{\frac12}
\big\}, \nonumber \\
\widetilde{\mathcal{M}}_5^{high} &= \big\{(M_1,...,M_6)\in\D^6 \, : \, (M_1,M_2,M_3)\in \mathcal{M}_3^{high,2} \nonumber \\
& \quad \quad \quad 2^9 M_{min(3)} <M_{min(5)} \ \text{and} \ 2^{-9}N<M_{med(5)} \le M_{max(5)} \big\} \, , \nonumber\end{aligned}$$ where $M_{max(3)} \ge M_{med(3)} \ge M_{min(3)}$, respectively $M_{max(5)} \ge M_{med(5)} \ge M_{min(5)}$, denote the maximum, sub-maximum and minimum of $\{M_1, M_2, M_3\}$, respectively $\{M_4,M_5,M_6\}$. We will also denote by $\phi_{M_1,...,M_6}$ the function defined on $\R^6$ by $$\phi_{M_1,...,M_6}(\xi_1,...,\xi_6) = \phi_{M_1,M_2,M_3}(\xi_1,\xi_2,\xi_3) \phi_{M_4,M_5,M_6}(\xi_4,\xi_5,\xi_6)\, .$$ For $\eta\in L^\infty$, let us define the operator $\Pi^5_{\eta,M_1,...,M_6}$ in Fourier variables by $$\label{def.pseudoproduct6}
\mathcal{F}_x\big(\Pi^5_{\eta,M_1,...,M_6}(u_1,...,u_5) \big)(\xi)=\int_{\Gamma^4(\xi)}(\eta\phi_{M_1,...,M_6})(\xi_1,...,\xi_5,-\sum_{j=1}^5\xi_j) \prod_{j=1}^5 \widehat{u}_j(\xi_j) \, .$$ Observe that, if the functions $u_j$ are real valued, the Plancherel identity yields $$\label{def.pseudoproduct6b}
\int_{\mathbb R}\Pi^5_{\eta,M_1,...,M_6}(u_1,...,u_5)\, u_6 \, dx=\int_{\Gamma^5}(\eta\phi_{M_1,...,M_6})\prod_{j=1}^6 \widehat{u}_j(\xi_j) \, .$$
Finally, we define the resonance function of order $5$ for $\vec{\xi}_{(5)}=(\xi_1,\cdots,\xi_6) \in \Gamma^5$ by $$\label{res5}
\Omega^5(\vec{\xi}_{(5)}) = \xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3+\xi_5^3+\xi_6^3 \; .$$ It is worth noticing that a direct calculation leads to $$\label{res55}
\Omega^5(\vec{\xi}_{(5)}) = \Omega^3(\xi_1,\xi_2,\xi_3) + \Omega^3(\xi_4,\xi_5,\xi_6) \, .$$
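Indeed, both sides can be compared directly: by the definition \eqref{res3} of $\Omega^3$, $$\begin{aligned}
\Omega^3(\xi_1,\xi_2,\xi_3)+\Omega^3(\xi_4,\xi_5,\xi_6)
&= \sum_{j=1}^6 \xi_j^3 - (\xi_1+\xi_2+\xi_3)^3 - (\xi_4+\xi_5+\xi_6)^3 \\
&= \sum_{j=1}^6 \xi_j^3 = \Omega^5(\vec{\xi}_{(5)}) \, ,
\end{aligned}$$ since $\xi_4+\xi_5+\xi_6=-(\xi_1+\xi_2+\xi_3)$ on $\Gamma^5$, so that the two cubes cancel each other.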
The following proposition gives suitable estimates for the pseudo-product $\Pi^5_{M_1,\cdots,M_6}$ when $(M_1,\cdots,M_6) \in \mathcal{M}^{high}_5$ in the non resonant case $M_1M_2M_3\not\sim M_4M_5M_6$, when $(M_1,\cdots,M_6) \in \widetilde{\mathcal{M}}^{high}_5$ and when $(M_1,\cdots,M_6) \in \widetilde{\mathcal{M}}^{med,2}_5$.
\[L25lin\] Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_i$ are functions in $Y^0=X^{-1,1} \cap L^{\infty}_tL^2_x$ with time support in $[0,2]$ and spatial Fourier support in $I_{N_i}$ for $i=1,\cdots,6$. Here, $N_i$ denote nonhomogeneous dyadic numbers. Assume also that $\max\{N_1,\cdots,N_6\} \ge N\gg 1$.
1. If $(M_1,...,M_6)\in \mathcal{M}_5^{high}$ satisfies the non resonance assumption $M_1M_2M_3\not\sim M_4M_5M_6$, then $$\label{L25lin.2}
\big| \int_{\R \times [0,T]}\Pi^5_{\eta,M_1,...,M_6}(u_1,...,u_5)\, u_6 \, dxdt\big| \lesssim M_{min(3)}N_{max(5)}^{-1} \prod_{i=1}^6\|u_i\|_{Y^0} \, ,$$ where $N_{max(5)}=\max\{N_4,N_5,N_6\}$.
2. If $(M_1,...,M_6)\in \widetilde{\mathcal{M}}_5^{high}$, $\max\{N_1,N_2,N_3\} \sim N$ and $\text{med} \, \{N_1,N_2,N_3\} \ll N$, then $$\label{L25lin.3}
\big| \int_{\R \times [0,T]}\Pi^5_{\eta,M_1,...,M_6}(u_1,...,u_5)\, u_6 \, dxdt\big| \lesssim M_{min(3)}N^{-1} \prod_{i=1}^6\|u_i\|_{Y^0} \, .$$
3. If $(M_1,...,M_6)\in \widetilde{\mathcal{M}}_5^{med,2}$, $\max\{N_1,N_2,N_3\} \sim N$ and $\text{med} \, \{N_1,N_2,N_3\} \ll N$, then $$\label{L25lin.4}
\big| \int_{\R \times [0,T]}\Pi^5_{\eta,M_1,...,M_6}(u_1,...,u_5)\, u_6 \, dxdt\big| \lesssim M_{min(3)}N^{-\frac12} \prod_{i=1}^6\|u_i\|_{Y^0} \, .$$
Here, we used the notation $M_{min(3)} = \min(M_1,M_2,M_3)$. Moreover, the implicit constants in estimates \eqref{L25lin.2}–\eqref{L25lin.4} only depend on the $L^{\infty}$-norm of the function $\eta$.
The proof is similar to the proof of Proposition \[L2trilin\]. We may always assume $M_1\le M_2\le M_3$ and $M_4\le M_5\le M_6$.
In Case $1$, we deduce from identities \eqref{res3} and \eqref{res55}, the non resonance assumption and the fact that $(M_1,...,M_6)\in \mathcal{M}_5^{high}$ that $$L_{max} \gtrsim \max(M_1M_2M_3, M_4M_5M_6) \gtrsim M_{4}N_{max(5)}^2 \, .$$ Estimate \eqref{L25lin.2} follows from this and estimate \eqref{prod6.est.1}.
In Case $2$, we get from the assumptions on $M_i$ and $N_i$ that $$M_4M_5M_6\ge 2^9 M_1 2^{-9} N N_{max(5)} \gtrsim M_1 N^2 \gg M_1 M_2 M_3$$ $$\quad \text{so that} \quad L_{max} \gtrsim M_3 M_5 M_6 \gtrsim N_{max(5)}^2 M_4 \, .$$ The proof of estimate \eqref{L25lin.3} follows then exactly as above.
In Case $3$, we get from the assumptions on $M_i$ and $N_i$ $$M_4M_5M_6 \gg N^2 \ge N^2M_{1}, \quad \text{so that} \quad L_{max} \gtrsim N^{\frac32} M_{4} \, .$$ Moreover, observe from Lemma \[teclemma\] that $\max\{N_1,\cdots,N_6\} \sim N$. The proof of estimate \eqref{L25lin.4} follows then exactly as above.
7-linear estimates
------------------
\[prod8-est\] Let $f_i\in L^2(\R)$, $i=1,...,8$ and $M_1, \, M_2, \, M_4, \, M_5$ and $M_7 \in \mathbb D$. Then it holds that $$\label{prod8-est.1}
\int_{\Gamma^7} \phi_{M_1}(\xi_2+\xi_3) \phi_{M_4}(\xi_5+\xi_6)\phi_{M_7}(\xi_7+\xi_8) \prod_{i=1}^8|f_i(\xi_i)| \lesssim M_1M_4M_7 \prod_{i=1}^8 \|f_i\|_{L^2} \, ,$$ and $$\label{prod8-est.0}
\begin{split}
\int_{\Gamma^7} \phi_{M_1}(\xi_2+\xi_3)\phi_{M_2}(\xi_1+\xi_3) \phi_{M_4}(\xi_5+\xi_6)&\phi_{M_5}(\xi_4+\xi_6) \prod_{i=1}^8|f_i(\xi_i)| \\ &\lesssim M_1M_2^{\frac12}M_4M_5^{\frac12} \prod_{i=1}^8 \|f_i\|_{L^2} \, .
\end{split}$$
If moreover $f_j$ is localized in an annulus $\{|\xi|\sim N_j\}$ for $j=7, \, 8$, then $$\label{prod8-est.0b}
\begin{split}
\int_{\Gamma^7} \phi_{M_1}(\xi_2+\xi_3)\phi_{M_4}(\xi_5+\xi_6) &\phi_{M_7}(\xi_7+\xi_8) \prod_{i=1}^8|f_i(\xi_i)| \\ & \lesssim M_1M_4M_7^{\frac12}N_7^{\frac14}N_8^{\frac14} \prod_{i=1}^8 \|f_i\|_{L^2} \, .
\end{split}$$
Let us denote by $\mathcal{I}^7=\mathcal{I}^7(f_1,\cdots,f_8)$ the integral on the left-hand side of \eqref{prod8-est.1}. We can assume without loss of generality that $f_j \ge 0$, $j=1,\cdots,8$. We have by using the notation in \eqref{prod4-est.3} that $$\label{prod8-est.3}
\mathcal{I}^7\le \mathcal{J}_{M_1}(f_2,f_3) \times \mathcal{J}_{M_4}(f_5,f_6) \times \mathcal{J}_{M_7}(f_7,f_8) \times
\sup_{\xi_2,\xi_3,\xi_5,\xi_6,\xi_7,\xi_8} \int_{\mathbb R} f_1(\xi_1)f_4(-\sum_{\genfrac{}{}{0pt}{}{j=1}{ j\neq 4}}^8\xi_j) \, d\xi_1 \, .$$ Thus, estimate \eqref{prod8-est.1} follows applying \eqref{prod4-est.4} and the Cauchy-Schwarz inequality to \eqref{prod8-est.3}.
Assuming furthermore that $f_j$ are localized in an annulus $\{|\xi| \sim N_j\}$ for $j=7,8$, then we get arguing as above that $$\label{prod8-est.4}
\mathcal{I}^7\le M_1M_4 \times\mathcal{J}_{M_7}(f_7,f_8) \times
\prod_{j=1}^6\|f_j\|_{L^2}\, .$$ From the Cauchy-Schwarz inequality $$\label{prod8-est.5}
\mathcal{J}_{M_7}(f_7,f_8) \le \Big( \int_{\mathbb R}f_7(\xi_7)d\xi_7\Big) \times \Big( \int_{\mathbb R} f_8(\xi_8)d\xi_8\Big) \lesssim N_7^{\frac12}N_8^{\frac12}\|f_7\|_{L^2}\|f_8\|_{L^2} \, ,$$ which together with \eqref{prod8-est.4} yields $$\label{prod8-est.6}
\mathcal{I}^7\lesssim M_1M_4N_7^{\frac12}N_8^{\frac12} \prod_{j=1}^8 \|f_j\|_{L^2} \, .$$ Therefore, we conclude the proof of \eqref{prod8-est.0b} by interpolating \eqref{prod8-est.1} and \eqref{prod8-est.6}.
Now, we prove \eqref{prod8-est.0}. Let us denote by $\widetilde{\mathcal{I}}^7=\widetilde{\mathcal{I}}^7(f_1,\cdots,f_8)$ the left-hand side of \eqref{prod8-est.0}. Then, $$\label{prod8-est.7}
\widetilde{\mathcal{I}}^7 \le\mathcal{J}_{M_1}(f_2,f_3) \times \mathcal{J}_{M_4}(f_5,f_6) \times \sup_{\xi_2,\xi_3,\xi_5,\xi_6} \mathcal{K}(\xi_2,\xi_3,\xi_5,\xi_6)$$ where $\mathcal{K}=\mathcal{K}(\xi_2,\xi_3,\xi_5,\xi_6)$ is defined by $$\mathcal{K}(\xi_2,\xi_3,\xi_5,\xi_6) = \int_{\R^3} \phi_{M_2}(\xi_1+\xi_3)\phi_{M_5}(\xi_4+\xi_6) f_1(\xi_1)f_4(\xi_4)f_7(\xi_7)f_8(
-\sum_{k=1}^7 \xi_k) \, d\xi_1d\xi_4d\xi_7.$$ We deduce from Cauchy-Schwarz in $(\xi_1,\xi_4)$ that $$\begin{aligned}
\mathcal{K}&= \int_{\R^2} \phi_{M_2}(\xi_1+\xi_3)\phi_{M_5}(\xi_4+\xi_6) f_1(\xi_1) f_4(\xi_4) (f_7\ast f_8)(-(\xi_1+...+\xi_6)) d\xi_1 d\xi_4\\
&\le \|\phi_{M_2}(\xi_1+\xi_3)\phi_{M_5}(\xi_4+\xi_6)\|_{L^2_{\xi_1,\xi_4}} \|f_1(\xi_1)f_4(\xi_4)\|_{L^2_{\xi_1,\xi_4}} \|f_7\ast f_8\|_{L^\infty}\\
&\lesssim (M_2M_5)^{1/2} \|f_1\|_{L^2} \|f_4\|_{L^2} \|f_7\|_{L^2} \|f_8\|_{L^2} \, ,
\end{aligned}$$ which together with \eqref{prod8-est.7} and \eqref{prod4-est.4} concludes the proof of \eqref{prod8-est.0}.
For a fixed $N\ge 1$ dyadic, we introduce the following subsets of $\D^9$: $$\begin{aligned}
\mathcal{M}_7^{low} &= \big\{(M_1,...,M_9)\in\D^9 \, : \, (M_1,...,M_6)\in \mathcal{M}_5^{med}, \ M_{min(7)} \le M_{med(7)}\le 2^{-9}N\big\} \, ,\\
\mathcal{M}_7^{high} &= \big\{(M_1,...,M_9)\in\D^9 \, : \, (M_1,...,M_6)\in \mathcal{M}_5^{med}, \ 2^{-9}N < M_{med(7)} \le M_{max(7)}\big\}
\end{aligned}$$ where $M_{max(7)} \ge M_{med(7)} \ge M_{min(7)}$ denote respectively the maximum, sub-maximum and minimum of $\{M_7,M_8,M_9\}$.
We will denote by $\phi_{M_1,...,M_9}$ the function defined on $\Gamma^7$ by $$\phi_{M_1,...,M_9}(\xi_1,...,\xi_7,\xi_8) = \phi_{M_1,...,M_6}(\xi_1,...,\xi_5,-\sum_{j=1}^5\xi_j) \, \phi_{M_7,M_8,M_9}(\xi_6,\xi_7,\xi_8) \, .$$ For $\eta\in L^\infty$, let us define the operator $\Pi^7_{\eta,M_1,...,M_9}$ in Fourier variables by $$\label{def.pseudoproduct8}
\mathcal{F}_x\big(\Pi^7_{\eta,M_1,...,M_9}(u_1,...,u_7) \big)(\xi)=\int_{\Gamma^6(\xi)}(\eta\phi_{M_1,...,M_9})(\xi_1,...,\xi_7) \prod_{j=1}^7 \widehat{u}_j(\xi_j) \, .$$ Observe that, if the functions $u_j$ are real valued, the Plancherel identity yields $$\label{def.pseudoproduct8b}
\int_{\mathbb R}\Pi^7_{\eta,M_1,...,M_9}(u_1,...,u_7)\, u_8 \, dx=\int_{\Gamma^7}(\eta\phi_{M_1,...,M_9})\prod_{j=1}^8 \widehat{u}_j(\xi_j) \, .$$
We define the resonance function of order $7$ for $\vec{\xi}_{(7)}=(\xi_1,\cdots,\xi_8) \in \Gamma^7$ by $$\label{res7}
\Omega^7(\vec{\xi}_{(7)}) = \sum_{j=1}^8\xi_j^3 \, .$$ Again it is direct to check that $$\label{res77}
\Omega^7(\vec{\xi}_{(7)})
=\Omega^5(\xi_1,...,\xi_5, -\sum_{i=1}^5 \xi_i) + \Omega^3(\xi_6,\xi_7,\xi_8) \, .$$
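As in the quintic case, the identity rests on a cancellation of the cubes of the partial sums: on $\Gamma^7$ one has $\xi_6+\xi_7+\xi_8=-\sum_{i=1}^5\xi_i$, hence $$\Omega^5\Big(\xi_1,...,\xi_5,-\sum_{i=1}^5\xi_i\Big)+\Omega^3(\xi_6,\xi_7,\xi_8)
= \sum_{j=1}^5\xi_j^3-\Big(\sum_{i=1}^5\xi_i\Big)^3+\sum_{j=6}^8\xi_j^3+\Big(\sum_{i=1}^5\xi_i\Big)^3
= \sum_{j=1}^8 \xi_j^3 \, .$$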
The following proposition gives suitable estimates for the pseudo-product $\Pi^7_{M_1,\cdots,M_9}$ when $(M_1,\cdots,M_9) \in \mathcal{M}^{high}_7$ in the nonresonant case $M_4M_5M_6\not\sim M_7M_8M_9$.
\[L27lin\] Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_j$ are functions in $Y^0=X^{-1,1} \cap L^{\infty}_tL^2_x$ with time support in $[0,2]$ and spatial Fourier support in $I_{N_j}$ for $j=1,\cdots,8$. Here, $N_j$ denote nonhomogeneous dyadic numbers. Assume also that $N_{max} \ge N \gg 1$, and $(M_1,...,M_9)\in \mathcal{M}_7^{high}$ satisfies the non resonance assumption $M_4M_5M_6\not\sim M_7M_8M_9$. Then $$\label{L27lin.2}
\Big| \int_{\R \times [0,T]}\Pi^7_{\eta,M_1,...,M_9}(u_1,...,u_7) \, u_8 \, dxdt\Big| \lesssim M_{min(3)}M_{min(5)}N^{-1} \prod_{j=1}^8\|u_j\|_{Y^0} \, ,$$ where $M_{min(3)} = \min(M_1,M_2,M_3)$ and $M_{min(5)}=\min(M_4,M_5,M_6)$. Moreover, the implicit constant in estimate \eqref{L27lin.2} only depends on the $L^{\infty}$-norm of the function $\eta$.
The proof is similar to the proof of Proposition \[L2trilin\]. From identities \eqref{res3} and \eqref{res77}, the non resonance assumption and the fact that $(M_1,...,M_9)\in \mathcal{M}_7^{high}$ we get $$L_{max} \gtrsim \max(M_4 M_5 M_6, M_7 M_8 M_9) \gtrsim \min(M_7,M_8,M_9) \, N_{max}^2 \, .$$ The claim follows from this and estimate \eqref{prod8-est.1}.
Energy estimates {#Secenergy}
================
The aim of this section is to derive energy estimates for the solutions of the equation under consideration, as well as for the solutions of the equation satisfied by the difference of two such solutions (see the equation below).
In order to simplify the notations in the proofs below, we will instead derive energy estimates on the solutions $u$ of the more general equation $$\label{eq-u0}
\partial_t u+\partial_x^3 u = c_4\partial_x(u_1u_2u_3) \, ,$$ where for any $i\in\{1,2,3\}$, $u_i$ solves $$\label{eq-u1}
\partial_t u_i+\partial_x^3 u_i = c_i\partial_x(u_{i,1}u_{i,2}u_{i,3}) \, .$$ Finally we also assume that each $u_{i,j}$ solves $$\label{eq-u2}
\partial_t u_{i,j}+\partial_x^3 u_{i,j} = c_{i,j}\partial_x(u_{i,j,1}u_{i,j,2}u_{i,j,3}) \, ,$$ for any $(i,j) \in \{1,2,3\}^2$. We will sometimes use $u_4, \, u_{4,1}, \, u_{4,2}, \, u_{4,3}$ to denote respectively $u, \, u_1,\, u_2,\, u_3$. Here $c_j, \, j \in \{1,\cdots,4\}$ and $c_{i,j}, \, (i,j) \in \{1,2,3\}^2$ denote real constants. Moreover, we assume that all the functions appearing in \eqref{eq-u0}--\eqref{eq-u2} are real-valued.
Also, we will use the notations defined at the beginning of Section \[Secmultest\].
The main obstruction to estimating $\frac{d}{dt} \| P_Nu \|_{L^2}^2$ at this level of regularity is the resonant term $\int \partial_x\big(P_{+N}u_1P_{+N}u_2 P_{-N}u_3 \big) \, P_{-N}u \, dx$, for which the resonance relation is not strong enough. In this section we modify the energy by a fourth order term, whose part of the time derivative coming from the linear contributions of \eqref{eq-u0}--\eqref{eq-u1} will cancel out this resonant term. Note however, that we need to add a second modification to the energy to control the part of the time derivative of the first modification coming from the resonant nonlinear contributions of \eqref{eq-u1}.
Definition of the modified energy
---------------------------------
Let $N_0 =2^9$ and $N$ be a nonhomogeneous dyadic number. For $t \ge 0$, we define the modified energy at the dyadic frequency $N$ by $$\label{defEN}
\mathcal{E}_N(t) = \left\{ \begin{array}{ll} \frac 12 \|P_Nu(\cdot,t)\|_{L^2_x}^2 & \text{for} \ N \le N_0 \, \\
\frac 12 \|P_Nu(\cdot,t)\|_{L^2_x}^2 + \alpha \mathcal{E}_N^{3,med}(t) + \gamma \mathcal{E}_N^{3,high}(t) +\beta \mathcal{E}_N^5(t) & \text{for} \ N> N_0 \, , \end{array}\right.$$ where $\alpha$, $\gamma$ and $\beta$ are real constants to be determined later, $$\mathcal{E}_N^{3,med}(t) = \sum_{(M_1,M_2,M_3)\in\mathcal{M}_3^{med}} \int_{\Gamma^3}\phi_{M_1,M_2,M_3}\big(\vec{\xi}_{(3)}\big) \phi_N^2(\xi_4) \frac{\xi_4}{\Omega^3(\vec{\xi}_{(3)})} \prod_{j=1}^4 \widehat{u}_j(\xi_j) \, ,$$
$$\begin{split}
\mathcal{E}_N^{3,high}(t) &= \sum_{N_j, \, N_{med} \le N^{1/2}}\sum_{(M_1,M_2,M_3)\in\mathcal{M}_3^{high,2}} \int_{\Gamma^3}\phi_{M_1,M_2,M_3}\big(\vec{\xi}_{(3)}\big) \phi_N(\xi_4) \frac{\xi_4}{\Omega^3(\vec{\xi}_{(3)})} \\&
\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \times \prod_{j=1}^3 \, \big(P_{N_j}u_j\big)^{\wedge}(\xi_j) \, \big(P_{N}u_4\big)^{\wedge}(\xi_4),
\end{split}$$
where $\vec{\xi}_{(3)}=(\xi_1,\xi_2,\xi_3)$ and the dyadic decompositions in $N_j$ are nonhomogeneous, $$\begin{split}
\mathcal{E}_N^5(t) &= \sum_{(M_1,...M_6)\in \mathcal{M}_5^{med}} \sum_{j=1}^4 c_j \int_{\Gamma^5} \phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4 \xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)})}
\\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad
\times\prod_{\genfrac{}{}{0pt}{}{k=1}{ k\neq j}}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l}) \, ,
\end{split}$$ with the convention $\displaystyle{\xi_j=-\sum_{\genfrac{}{}{0pt}{}{k=1}{k \neq j}}^4\xi_k=\sum_{l=1}^3\xi_{j,l}}$ and the notation $$\vec{\xi_j}_{(5)}=(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,2},\xi_{j,3}) \in \Gamma^5$$ where $\vec{\xi_j}_{(3)}$ is defined by $$\vec{\xi_1}_{(3)}=(\xi_2,\xi_3,\xi_4), \ \vec{\xi_2}_{(3)}=(\xi_1,\xi_3,\xi_4), \ \vec{\xi_3}_{(3)}=(\xi_1,\xi_2,\xi_4), \ \vec{\xi_4}_{(3)}=(\xi_1,\xi_2,\xi_3) \, .$$
For $T >0$, we define the modified energy by using a nonhomogeneous dyadic decomposition in spatial frequency $$\label{def-EsT}
E^s_T(u) = \sum_{N} N^{2s} \sup_{t \in [0,T]} \big|\mathcal{E}_N(t)\big| \, .$$ By convention, we also set $E^s_0(u) = \displaystyle{\sum_{N} N^{2s} \big|\mathcal{E}_N(0)\big|}$.
Next, we show that if $s >\frac14$, the energy $E^s_T(u)$ is coercive.
\[lem-EsT\] Let $s > 1/4$, $0<T \le 1$ and $u, u_i, u_{i,j}\in \widetilde{L^\infty_T}H^s_x$ be solutions of (\[eq-u0\])-(\[eq-u1\])-(\[eq-u2\]) on $[0,T]$. Then it holds that $$\label{lem-Est.1}
\|u\|_{\widetilde{L^\infty_T}H^s_x}^2 \lesssim E_T^s(u) + \prod_{j=1}^4\|u_j\|_{\widetilde{L^\infty_T}H^s_x} + \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{\widetilde{L^\infty_T}H^s_x} \prod_{l=1}^3 \|u_{j,l}\|_{\widetilde{L^\infty_T}H^s_x} \, .$$
We infer from (\[def-EsT\]) and the triangle inequality that $$\begin{aligned}
\label{lem-Est.2}
\|u\|_{\widetilde{L^\infty_T}H^s_x}^2 &\lesssim &E_T^s(u) + \sum_{N\ge N_0}N^{2s} \sup_{t \in [0,T]}\big|\mathcal{E}_N^{3,med}(t)\big|+
\sum_{N\ge N_0}N^{2s} \sup_{t \in [0,T]}\big|\mathcal{E}_N^{3,high}(t)\big| \nonumber \\
&&+ \sum_{N\ge N_0}N^{2s}\sup_{t \in [0,T]}\big|\mathcal{E}_N^5(t)\big| \, .\end{aligned}$$
We first estimate the contribution of $\mathcal{E}_N^{3,med}$. By symmetry, we can always assume that $M_1 \le M_2 \le M_3$, so that we have $N^{-\frac12} < M_1 \le M_2 \ll N$ and $M_3 \sim N$, since $(M_1,M_2,M_3) \in \mathcal{M}_3^{med}$. Then, we have thanks to Lemma \[prod4-est\], $$\label{lem-Est.3}
\begin{split}
N^{2s}\big|\mathcal{E}_N^{3,med}(t)\big| &\lesssim \sum_{N^{-1/2}<M_1,M_2\ll N\atop M_3\sim N} \frac{N^{2s+1}}{M_1M_2M_3} M_1 \prod_{j=1}^4\|P_{\sim N}u_{j}(t)\|_{L^2_x} \\ &
\lesssim \prod_{j=1}^4\|P_{\sim N}u_{j}(t)\|_{H^s_x} \, .
\end{split}$$
Now, we deal with the contribution of $\mathcal{E}_N^{3,high}$. Observe from the frequency localization, that we have $N^{-1} < M_{min} \le 1$, $M_{med} \sim M_{max} \sim N_{max} \sim N$ and $N_{min} \le N_{med} \le N^{\frac{1}{2}}$. Without loss of generality, assume moreover that $N_1 \le N_2 \le N_3$. Thus it follows from estimate \eqref{prod4-est.1} that $$\label{lem-Est.3b}
\begin{split}
N^{2s}&\big|\mathcal{E}_N^{3,high}(t)\big| \\ &\lesssim \sum_{N^{-1}<M_{min}\le 1} \sum_{N_1 \le N_2 \le N^{1/2} \atop N_3 \sim N} \frac{N^{2s+1}}{M_{min}N^2} M_{min} \prod_{j=1}^3\|P_{ N_j}u_{j}(t)\|_{L^2_x}\|P_{ N}u(t)\|_{L^2_x} \\ &
\lesssim \prod_{j=1}^2\|u_{j}(t)\|_{L^2_x} \|P_{\sim N}u_{3}(t)\|_{H^s_x}\|P_Nu(t)\|_{H^s_x} \, .
\end{split}$$
To estimate the contribution of $\mathcal{E}_N^5(t)$, we notice that for $(M_1,...,M_6)\in \mathcal{M}_5^{med}$, the integrand in the definition of $\mathcal{E}_N^5$ vanishes unless $|\xi_1|\sim ...\sim |\xi_4| \sim N$ and $|\xi_{j,1}| \sim |\xi_{j,2}| \sim |\xi_{j,3}| \sim N$. Moreover, we assume without loss of generality $M_1\le M_2\le M_3$ and $M_4\le M_5\le M_6$, so that $$\left|\frac{\xi_4 \xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)})} \right| \sim \frac{N^2}{M_1M_2N\cdot M_4M_5N}\sim \frac{1}{M_1M_2M_4M_5} \, .$$ Thus we infer from \eqref{prod6.est.1} that $$\begin{aligned}
\label{lem-Est.4}
N^{2s}|\mathcal{E}_N^5(t)| &\lesssim \sum_{j=1}^4 \sum_{M_2> N^{-1/2}}\sum_{M_5\gtrsim M_2} \frac{N^{2s}}{M_2M_5} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k(t)\|_{L^2_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}(t)\|_{L^2_x} \nonumber \\ & \lesssim
\sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k(t)\|_{H^s_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}(t)\|_{H^s_x} \, ,\end{aligned}$$ since $2s+1<6s$ for $s>\frac14$.
Finally, we conclude the proof of \eqref{lem-Est.1} gathering \eqref{lem-Est.2}–\eqref{lem-Est.4} and using the Cauchy-Schwarz inequality.
Estimates for the modified energy
---------------------------------
\[prop-ee\] Let $s>1/4$, $0<T \le 1$ and $u, \, u_i , \,u_{i,j}\in Y^s_T$ be solutions of (\[eq-u0\])-(\[eq-u1\])-(\[eq-u2\]) on $[0,T]$. Then we have $$\label{prop-ee.1}
\begin{split}
E^s_T(u) &\lesssim E^s_0(u) + \prod_{j=1}^4\|u_j\|_{Y^s_T} + \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \\ & \quad
+\sum_{j=1}^4\sum_{m=1}^3\prod_{k=1 \atop k \neq j}^4\|u_k\|_{Y^s_T} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{Y^s_T} \prod_{n=1}^3\|u_{j,m,n}\|_{Y^s_T} \, .
\end{split}$$
Let $0<t \le T \le 1$. First, assume that $N \le N_0=2^9$. By using the definition of $\mathcal{E}_N$ in \eqref{defEN}, we have $$\frac{d}{dt} \mathcal{E}_N(t) =c_4 \int_{\mathbb R}P_N\partial_x\big( u_1u_2u_3 \big) P_Nu \, dx \, ,$$ which yields after integrating between $0$ and $t$ and applying Hölder’s inequality that $$\begin{split}
\mathcal{E}_N(t) &\le \mathcal{E}_N(0)+|c_4|\Big|\int_{\mathbb R_t}P_N\partial_x\big( u_1u_2u_3 \big) P_Nu \, \Big| \\ &
\lesssim \mathcal{E}_N(0)+\prod_{i=1}^4 \|u_i\|_{L^\infty_TL^{4}_x}\lesssim \mathcal{E}_N(0)+\prod_{i=1}^4 \|u_i\|_{L^\infty_TH^{1/4}_x}
\end{split}$$ where the notation $\mathbb R_t=\mathbb R \times [0,t]$ defined at the beginning of Section \[Secmultest\] has been used. Thus, we deduce after taking the supremum over $t \in [0,T]$ and summing over $N \le N_0$ that $$\label{prop-ee.2}
\sum_{N \le N_0} N^{2s} \sup_{t \in [0,T]} \big|\mathcal{E}_N(t) \big| \lesssim \sum_{N \le N_0} N^{2s} \big|\mathcal{E}_N(0) \big|+\prod_{j=1}^4\|u_j\|_{Y_T^{1/4}} \, .$$
Next, we turn to the case where $N\ge N_0$. As above, we differentiate $\mathcal{E}_N$ with respect to time and then integrate between 0 and $t$ to get $$\begin{aligned}
N^{2s}\mathcal{E}_N(t) &= N^{2s}\mathcal{E}_N(0) + c_4N^{2s}\int_{\R_t}P_N\partial_x(u_1u_2u_3)P_Nu + \alpha N^{2s}\int_0^t\frac{d}{dt}\mathcal{E}_N^{3,med}(t')dt' \nonumber \\
&\quad +\gamma N^{2s}\int_0^t\frac{d}{dt}\mathcal{E}_N^{3,high}(t')dt' + \beta N^{2s} \int_0^t \frac{d}{dt} \mathcal{E}_N^5(t')dt' \nonumber \\
&=: N^{2s}\mathcal{E}_N(0) + c_4I_N + \alpha J_N + \gamma L_N + \beta K_N \, . \label{prop-ee.3}\end{aligned}$$
We rewrite $I_N$ in Fourier variable and get $$\begin{aligned}
I_N &= N^{2s} \int_{\Gamma^3_t} (-i\xi_4) \phi_N^2(\xi_4) \widehat{u}_1(\xi_1) \widehat{u}_2(\xi_2) \widehat{u}_3(\xi_3) \widehat{u}_4(\xi_4) \\
&= \sum_{(M_1,M_2,M_3) \in \mathbb D^3} N^{2s} \int_{\Gamma^3_t} (-i\xi_4) \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \prod_{j=1}^4\widehat{u}_j(\xi_j) \, .\end{aligned}$$ Next we decompose $I_N$ as $$\begin{aligned}
I_N &= N^{2s}\left(\sum_{\mathcal{M}_3^{low}} + \sum_{\mathcal{M}_3^{med}} + \sum_{k=1}^3\sum_{\mathcal{M}_3^{high,k}}\right) \int_{\Gamma^3_t} (-i\xi_4) \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \prod_{j=1}^4\widehat{u}_j(\xi_j) \nonumber \\
&=: I_N^{low} + I_N^{med} + \sum_{k=1}^3I_N^{high,k} \, , \label{prop-ee.4}\end{aligned}$$ by using the notations in Section \[Secmultest\].
*Estimate for $I_N^{low}$.* Thanks to Lemma \[teclemma\], the integral in $I_N^{low}$ is non trivial for $|\xi_1|\sim |\xi_2|\sim |\xi_3|\sim |\xi_4|\sim N$ and $M_{min}\le N^{-1/2}$. Therefore we get from Lemma \[prod4-est\] that $$\begin{split}
|I_N^{low}| &\lesssim \sum_{M_{min}\le N^{-\frac12} \atop M_{min} \le M_{med} \ll N} N^{2s+1}M_{min} \prod_{j=1}^4 \|P_{\sim N}u_j\|_{L^\infty_TL^2_x}
\lesssim \prod_{j=1}^4 \|P_{\sim N}u_j\|_{L^\infty_TH^s_x} \, ,
\end{split}$$ since $(2s+\frac12)<4s$. This leads to $$\label{prop-ee.5}
\sum_{N \ge N_0}|I_N^{low}| \lesssim \prod_{j=1}^4 \|u_j\|_{Y^s_T} \, .$$
*Estimate for $I_N^{high,1}$.* We perform nonhomogeneous dyadic decompositions $\displaystyle{u_j =\sum_{N_j} P_{N_j}u_j}$ for $j=1,2,3$. We assume without loss of generality that $N_1=\max(N_1,N_2,N_3)$. Recall that this ensures that $M_{max}\sim N_1$. Then we apply Lemma \[prod4-est\] to the sum over $M_{med}$ and use discrete Young's inequality to get $$\begin{aligned}
|I_N^{high,1}| &\lesssim \sum_{M_{min}\le N^{-1} } N^{2s+1}M_{min}\sum_{N_{1} \gtrsim N, N_2,N_3} \prod_{j=2}^3 \|P_{ N_j}u_j\|_{L^\infty_TL^2_x} \|P_{ N_1}u_1\|_{L^2_{T,x}} \|P_{ N}u_4\|_{L^2_{T,x}} \nonumber \\
\lesssim & \sum_{N_{1}\ge N}
\Bigl(\frac{N}{N_{1}}\Bigr)^{s}
\|P_{N_1} u_1\|_{L^2_T H^{s}_x} \|P_{N} u_4\|_{L^2_T H^{s}_x} \|u_2\|_{L^\infty_T H^{0+}_x} \|u_3\|_{L^\infty_T H^{0+}_x} \nonumber\\
\lesssim & \, \delta_N \|P_{N} u_4\|_{L^2_T H^{s}_x} \prod_{i=1}^3 \|u_i\|_{L^\infty_T H^{s}_x} \; , \label{yoyo}\end{aligned}$$ with $ \{\delta_{2^j}\}\in l^2(\mathbb N) $. Summing over $N$ this leads to $$\label{prop-ee.5b}
\sum_{N \ge N_0}|I_N^{high,1}| \lesssim \prod_{j=1}^4 \|u_j\|_{Y^s_T} \, .$$
*Estimate for $I_N^{high,3}$.* For $j=1,\cdots,4$, let $\tilde{u}_j=\rho_T(u_j)$ be the extension of $u_j$ to $\mathbb R^2$ defined in . Now, we define $u_{N_j}=P_{N_j}\tilde{u}_j$ and perform nonhomogeneous dyadic decompositions in $N_j$, so that $I_N^{high,3}$ can be rewritten as $$I_N^{high,3} =N^{2s+1} \sum_{N_j, N_4 \sim N} \sum_{(M_1,M_2,M_3) \in \mathcal{M}_3^{high,3}} \int_{\mathbb R_t}
\Pi^3_{\eta,M_1,M_2,M_3}(u_{N_1},u_{N_2},u_{N_3}) \, u_{N_4} \, ,$$ with $\eta(\xi_1,\xi_2,\xi_3)=\phi_{M_1,M_2,M_3}(\xi_1,\xi_2,\xi_3) \phi_N^2(\xi_4)\frac{i\xi_4}{N} \in L^{\infty}(\Gamma^3)$. Thus, it follows from that $$|I_N^{high,3}| \lesssim N^{2s} \sum_{N_j, N_4 \sim N} \frac{N}{N_{max}} \sum_{1 < M_{min} \lesssim N_{med} \atop N \lesssim M_{med} \le M_{max} \lesssim N_{max}}\|u_{N_1}\|_{Y^0} \|u_{N_2}\|_{Y^0} \|u_{N_3}\|_{Y^0} \|u_{N_4}\|_{Y^0}\, .$$ Proceeding as in (here we sum over $M_{min}$ by using that $M_{min}\le N_{med}$) we get $$\label{prop-ee.6}
\sum_{N \ge N_0}|I_N^{high,3}| \lesssim \prod_{j=1}^4 \|u_j\|_{Y^s_T} \, .$$
*Estimate for $c_4I_N^{high,2}+\gamma L_N$.* As above, we perform the nonhomogeneous dyadic decompositions $\displaystyle{u_j =\sum_{N_j} P_{N_j}u_j}$ for $j=1,2,3$ and use the notation $N_4=N$ to rewrite $I_N^{high,2}$ as $$N^{2s}\sum_{N_j}\sum_{\mathcal{M}_3^{high,2}} \int_{\Gamma^3_t} (-i\xi_4) \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N(\xi_4) \prod_{j=1}^4(P_{N_j}u_j)^{\wedge}(\xi_j) \, .$$
First, we deal with the case $N_{med} > N^{\frac{1}{2}}$. Then arguing as for the contribution $I_N^{high,3}$, we conclude that $$\label{prop-ee.6b}
\sum_{N \ge N_0}|I_N^{high,2}| \lesssim \prod_{j=1}^4 \|u_j\|_{Y^s_T} \, .$$
Now, we treat the case $N_{med} \le N^{\frac{1}{2}}$. Using (\[eq-u0\])-(\[eq-u1\]), we can rewrite $\frac{d}{dt}\mathcal{E}_N^{3,high}$ as the sum of the linear contribution $$\sum_{N_j, N_{med} \le N^{\frac{1}{2}}}\sum_{\mathcal{M}_3^{high,2}} \int_{\Gamma^3} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N(\xi_4) \frac{i\xi_4(\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3)}{\Omega^3(\vec{\xi}_{(3)})} \prod_{j=1}^4(P_{N_j}u_j)^{\wedge}(\xi_j)$$ and the nonlinear contribution $$\begin{aligned}
& \sum_{j=1}^4 c_j \sum_{N_j, N_k, N_{med} \le N^{\frac{1}{2}}}\sum_{\mathcal{M}_3^{high,2}} \int_{\Gamma^3} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N(\xi_4)\frac{\xi_4}{\Omega^3(\vec{\xi}_{(3)})} \nonumber \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \times \prod_{k=1 \atop{k \neq j}}^4(P_{N_k}u_k)^{\wedge}(\xi_k) \mathcal{F}_x\partial_x P_{N_j}\big( u_{j,1}u_{j,2}u_{j,3} \big)(\xi_j) \, . \label{prop-ee.6b1}\end{aligned}$$ Using the resonance relation , we see by choosing $\gamma=c_4$ that $I_N^{high,2}$ is canceled out by the linear contribution of $\frac{d}{dt}\mathcal{E}_N^{3,high}$. Hence, $$\label{prop-ee.6b2}
c_4I_N^{high,2}+\gamma L_N=c_4\sum_{j=1}^4A_N^j \, ,$$ where $$\begin{split}
A_N^j&=iN^{2s}\sum_{N_j, N_k, N_{med} \le N^{\frac{1}{2}}}\sum_{\mathcal{M}_3^{high,2}}
\int_{\Gamma^5_t} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N(\xi_4)\phi_{N_j}(\xi_j)\frac{\xi_4 \xi_j}{\Omega^3(\vec{\xi}_{(3)})} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \times \prod_{k=1 \atop{k \neq j}}^4(P_{N_k}u_k)^{\wedge}(\xi_k) \prod_{l=1}^3\widehat{u}_{j,l}(\xi_{j,l}) \, .
\end{split}$$
It thus remains to treat the terms $A_N^j$ corresponding to the nonlinear contribution $\frac{d}{dt}\mathcal{E}_N^{3,high}$. Observe that $$\label{pss}
N^{-1} \le M_{min} \le 1\; ,M_{med} \sim M_{max} \sim N_{max} \sim N \text{ and } N_{min} \le N_{med} \le N^{\frac{1}{2}}$$ in this case. Moreover, without loss of generality, we can always assume that $N_1\le N_2 \le N_3$.
*Estimates for $A_N^1$ and $A_N^2$.* We only estimate $A_N^2$, since the estimate for $A_N^1$ follows similarly. We get from that $$\begin{split}
|A_N^2| &\lesssim \sum_{N_1\le N_2 \le N^{\frac{1}{2}}} \sum_{N^{-1} \le M_{min} \le 1} \frac{N^{2s+1}}{M_{min} N^2}M_{min}N^{\frac{1}{2}}
\\ & \quad \quad \quad \times
\|P_{N_1}u_1\|_{L^{\infty}_TL^2_x} \|P_{N_2}\big(u_{2,1}u_{2,2}u_{2,3}\big)\|_{L^{\infty}_TL^2_x}\|P_{\sim N}u_3\|_{L^{\infty}_TL^2_x}\|P_Nu_4\|_{L^{\infty}_TL^2_x}\, .
\end{split}$$ Moreover, thanks to Bernstein's and Hölder's inequalities, $$\|P_{N_2}\big(u_{2,1}u_{2,2}u_{2,3}\big)\|_{L^{\infty}_TL^2_x} \lesssim N^{\frac{1}{8}}\|u_{2,1}u_{2,2}u_{2,3}\|_{L^{\infty}_TL^{4/3}_x} \lesssim N^{\frac{1}{8}} \prod_{l=1}^3\|u_{2,l}\|_{L^{\infty}_TL^4_x} \, ,$$ which implies after summing over $N$ and using the Sobolev embedding $H^{\frac14}(\mathbb R) \hookrightarrow L^4(\mathbb R)$ that $$\label{prop-ee.6b22}
\sum_{N \ge N_0}|A_N^2| \lesssim \prod_{k=1\atop k \neq 2}^4\|u_k\|_{Y^s_T} \prod_{l=1}^3\|u_{2,l}\|_{Y^s_T} \, .$$
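Let us record the Bernstein step in detail, since the power $N^{\frac18}$ comes from the constraint $N_2 \le N^{\frac12}$ rather than from the frequency $N$ itself: Bernstein's inequality $\|P_Mf\|_{L^p}\lesssim M^{\frac1q-\frac1p}\|f\|_{L^q}$ with $q=\frac43$ and $p=2$ gives $$\|P_{N_2}f\|_{L^2_x} \lesssim N_2^{\frac34-\frac12}\|f\|_{L^{4/3}_x} = N_2^{\frac14}\|f\|_{L^{4/3}_x} \le N^{\frac18}\|f\|_{L^{4/3}_x} \, .$$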
*Estimates for $A_N^3$ and $A_N^4$.* We only estimate $A_N^3$, since the estimate for $A_N^4$ follows similarly. By using the notations in , we decompose $A_N^3$ as $$\label{prop-ee.6b3}
\begin{split}
A_N^3&=iN^{2s}\Big( \sum_{\widetilde{\mathcal{M}}_5^{low}}+\sum_{\widetilde{\mathcal{M}}_5^{med,1}}+\sum_{\widetilde{\mathcal{M}}_5^{med,2}}+\sum_{\widetilde{\mathcal{M}}_5^{high}}\Big)\sum_{N_1 \le N_2 \le N^{\frac{1}{2}}, N_3 \sim N}\sum_{N_{3,l}} \\ & \quad \quad \times
\int_{\Gamma^5_t} \Lambda_{M_1,\cdots,M_6}^{N,N_3} \prod_{k=1 \atop{k \neq 3}}^4(P_{N_k}u_k)^{\wedge}(\xi_k) \prod_{l=1}^3(P_{N_{3,l}}u_{3,l})^{\wedge}(\xi_{3,l}) \\ &
=: A_N^{3,low}+A_N^{3,med,1}+A_N^{3,med,2}+A_N^{3,high} \, ,
\end{split}$$ where $$\Lambda_{M_1,\cdots,M_6}^{N,N_3}=\phi_{M_1,M_2,M_3}(\xi_1,\xi_2,\xi_3) \phi_{M_4,M_5,M_6}(\xi_{3,1},\xi_{3,2},\xi_{3,3})\phi_N(\xi_4)\phi_{N_3}(\xi_3)\frac{\xi_4 \xi_3}{\Omega^3(\vec{\xi}_{(3)})} \, .$$ We observe that $N_{max(5)}=\max\{N_{3,1},N_{3,2},N_{3,3}\} \gtrsim N$ since $N_3 \sim N$. Without loss of generality, we can assume $ M_4\le M_5\le M_6$. Note that this forces $ M_6 \sim N_{max(5)}$.
*Estimates for $A_N^{3,low}$*. We apply on the sum over $ M_5 $. On account of , we obtain that $$\label{mm}
\begin{split}
|A_N^{3,low}| &\lesssim \sum_{N^{-1} < M_{min} \le 1 \atop M_{4} \lesssim M_{min} }\sum_{N_1 \le N_2 \le N^{\frac{1}{2}} \atop N_4 \sim N}\sum_{N_{3,l}} \frac{N^{2s+2}}{M_{min}N^2}M_{min}M_4 \\ & \quad \times \prod_{k=1 \atop k \neq 3}^4 \|P_{N_k}u_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3 \|P_{N_{3,l}}u_{3,l}\|_{L^\infty_TL^2_x} \, .
\end{split}$$ Therefore, we deduce after proceeding as in and summing over $N$ that $$\label{prop-ee.6b4}
\sum_{N \ge N_0} |A_N^{3,low}| \lesssim \prod_{k=1\atop k \neq 3}^4\|u_k\|_{Y^s_T} \prod_{l=1}^3\|u_{3,l}\|_{Y^s_T} \, .$$
*Estimates for $A_N^{3,med,1}$.* Let $N_{max(5)} \ge N_{med(5)} \ge N_{min(5)}$ denote the maximum, sub-maximum and minimum of $\{N_{3,1},N_{3,2},N_{3,3}\}$. Thanks to Lemma \[teclemma\], we know that $N_{max(5)} \sim N_{med(5)} \sim N_{min(5)} \sim N$ and $M_6 \sim N$. Thus, we deduce from and that $$\begin{split}
|A_N^{3,med,1}| &\lesssim \sum_{N^{-1} < M_{min} \le 1} \sum_{M_{min} \ll M_4 \lesssim N^{\frac12} \atop M_4 \le M_5 \ll N}\sum_{N_1 \le N_2 \le N^{\frac{1}{2}} \atop N_4 \sim N} \frac{N^{2}}{M_{min}N^2}M_{min}M_4 N^{-2s} \\ & \quad \times \prod_{k=1}^2 \|P_{N_k}u_k\|_{L^\infty_TL^2_x} \|P_{N}u_4\|_{L^\infty_TH^s_x} \prod_{l=1}^3 \|P_{\sim N}u_{3,l}\|_{L^\infty_TH^s_x} \, ,
\end{split}$$ which implies that $$\label{prop-ee.6b5}
\sum_{N \ge N_0} |A_N^{3,med,1}| \lesssim \prod_{k=1\atop k \neq 3}^4\|u_k\|_{Y^s_T} \prod_{l=1}^3\|u_{3,l}\|_{Y^s_T} \, ,$$ since $s>\frac14$.
*Estimates for $A_N^{3,med,2}$.* Again, Lemma \[teclemma\] implies that $N_{max(5)} \sim N_{med(5)} \sim N_{min(5)} \sim N$ and $M_6 \sim N$. For $1 \le k \le 4$, $k \neq 3$, $1 \le l \le 3$, let $\tilde{u}_k=\rho_T(u_k)$ and $\tilde{u}_{3,l}=\rho_T(u_{3,l})$ be the extensions of $u_k$ and $u_{3,l}$ to $\mathbb R^2$ defined in . We define $u_{N_k}=P_{N_k}\tilde{u}_k$, $u_{N_{j,l}}=P_{N_{j,l}}\tilde{u}_{j,l}$. We deduce from that $$\begin{split}
|A_N^{3,med,2}| &\lesssim \sum_{N^{-1} < M_{min} \le1 \atop N^{\frac12} \ll M_4 \le M_5 \ll N }\sum_{N_1 \le N_2 \le N^{\frac{1}{2}} \atop N_4 \sim N} \sum_{N_{3,l} \sim N}\frac{N^{2s+2}}{M_{min}N^2}M_{min}N^{-\frac12} \\ & \quad \times \prod_{k=1}^2 \|u_{N_k}\|_{Y^0} \|u_{N_4}\|_{Y^0} \prod_{l=1}^3 \|u_{N_{3,l}}\|_{Y^0} \, .
\end{split}$$ Therefore, we conclude from that $$\label{prop-ee.6b6}
\sum_{N \ge N_0} |A_N^{3,med,2}| \lesssim \prod_{k=1\atop k \neq 3}^4\|u_k\|_{Y^s_T} \prod_{l=1}^3\|u_{3,l}\|_{Y^s_T} \, .$$
*Estimates for $A_N^{3,high}$.* We argue as above by using instead of . We obtain that $$\begin{split}
|A_N^{3,high}| &\lesssim \sum_{N^{-1} < M_{min} \le M_4 \le N \atop} \sum_{N \lesssim M_5 \le M_6 \le N_{max(5)} \atop}\sum_{N_1 \le N_2 \le N^{\frac{1}{2}} \atop N_4 \sim N} \sum_{N_{3,l} }\frac{N^{2s+2}}{M_{min}N^2}M_{min}N^{-1} \\ & \quad \times \prod_{k=1}^2 \|u_{N_k}\|_{Y^0} \|u_{N_4}\|_{Y^0} \prod_{l=1}^3 \|u_{N_{3,l}}\|_{Y^0} \, ,
\end{split}$$ which leads to $$\label{prop-ee.6b7}
\sum_{N \ge N_0} |A_N^{3,high}| \lesssim \prod_{k=1\atop k \neq 3}^4\|u_k\|_{Y^s_T} \prod_{l=1}^3\|u_{3,l}\|_{Y^s_T} \,$$ in this case. Note that we can use the factor $N_{max}^{-s}$ to sum over $M_{min}, \, M_4,\, M_5$ and $M_6$ here.
*Estimate for $c_4 I_N^{med}+\alpha J_N+\beta K_N$.* Using (\[eq-u0\])-(\[eq-u1\]), we can rewrite $\frac{d}{dt}\mathcal{E}_N^{3,med}$ as $$\begin{aligned}
&\sum_{\mathcal{M}_3^{med}} \int_{\Gamma^3} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \frac{i\xi_4(\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3)}{\Omega^3(\vec{\xi}_{(3)})} \prod_{j=1}^4\widehat{u}_j(\xi_j) \\
&+ \sum_{j=1}^4 c_j \sum_{\mathcal{M}_3^{med}} \int_{\Gamma^3} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4)\frac{\xi_4}{\Omega^3(\vec{\xi}_{(3)})} \prod_{k=1 \atop{k \neq j}}^4\widehat{u}_k(\xi_k) \mathcal{F}_x\partial_x \big( u_{j,1}u_{j,2}u_{j,3} \big)(\xi_j) \, .
\end{aligned}$$ Using (\[res3\]), we see by choosing $\alpha=c_4$ that $I_N^{med}$ is canceled out by the first term of the above expression. Hence, $$\label{prop-ee.7}
c_4 I_N^{med}+\alpha J_N = c_4\sum_{j=1}^4 c_j J_N^j \, ,$$ where, for $j=1,\cdots,4$, $$J_N^j = iN^{2s}\sum_{\mathcal{M}_3^{med}} \int_{\Gamma^5_t} \phi_{M_1,M_2,M_3}(\vec{\xi}_{(3)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi}_{(3)})} \prod_{k=1 \atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l}) \, ,$$ with the convention $\displaystyle{\xi_j=-\sum_{k=1 \atop{k \neq j}}^4\xi_k=\sum_{l=1}^3\xi_{j,l}}$ and the notation $\vec{\xi}_{(3)}=(\xi_1,\xi_2,\xi_3)$. Now, we define $\vec{\xi_j}_{(3)}$, for $j=1,2,3,4$ as follows: $$\vec{\xi_1}_{(3)}=(\xi_2,\xi_3,\xi_4), \ \vec{\xi_2}_{(3)}=(\xi_1,\xi_3,\xi_4), \ \vec{\xi_3}_{(3)}=(\xi_1,\xi_2,\xi_4), \ \vec{\xi_4}_{(3)}=(\xi_1,\xi_2,\xi_3) \, .$$ With this notation in hand and by using the symmetries of the functions $\sum_{\mathcal{M}_3^{med}}\phi_{M_1,M_2,M_3}$ and $\Omega^3$, we obtain that $$J_N^j = iN^{2s}\sum_{\mathcal{M}_3^{med}} \int_{\Gamma^5_t} \phi_{M_1,M_2,M_3}(\vec{\xi_j}_{(3)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})} \prod_{ k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l}) \, .$$
Moreover, observe from the definition of $\mathcal{M}_3^{med}$ in Section \[Secmultest\] that $$|\xi_1|\sim |\xi_2|\sim |\xi_3|\sim |\xi_4|\sim N \quad \text{and} \quad \left|\frac{\xi_j\xi_4}{\Omega^3(\vec{\xi}_{(3)})}\right| \sim \frac{N}{M_{min(3)}M_{med(3)}} \, ,$$ on the integration domain of $J_N^j$. Here $M_{max(3)} \ge M_{med(3)} \ge M_{min(3)}$ denote the maximum, sub-maximum and minimum of $\{M_1,M_2,M_3\}$.
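This size estimate for $\Omega^3$ rests on the classical factorization of the cubic resonance function, which we recall for convenience (assuming, as the notation $\phi_{M_1,M_2,M_3}$ suggests, that $M_1,M_2,M_3$ record the dyadic sizes of the pairwise sums $|\xi_1+\xi_2|$, $|\xi_1+\xi_3|$, $|\xi_2+\xi_3|$): on the hyperplane $\xi_1+\xi_2+\xi_3+\xi_4=0$, $$\xi_1^3+\xi_2^3+\xi_3^3+\xi_4^3 = -3(\xi_1+\xi_2)(\xi_1+\xi_3)(\xi_2+\xi_3) \, ,$$ so that $|\Omega^3(\vec{\xi}_{(3)})| \sim M_{min(3)}M_{med(3)}M_{max(3)}$, and here $M_{max(3)} \sim N$.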
Since $\max(|\xi_{j,1}+\xi_{j,2}|, |\xi_{j,1}+\xi_{j,3}|, |\xi_{j,2}+\xi_{j,3}|) \gtrsim N$ on the integration domain of $J_N^j$, we may decompose $\sum_jc_jJ_N^j$ as $$\begin{aligned}
\sum_{j=1}^4 c_jJ_N^j &= iN^{2s}\left(\sum_{\mathcal{M}_5^{low}} + \sum_{\mathcal{M}_5^{med}} + \sum_{\mathcal{M}_5^{high,1}}+ \sum_{\mathcal{M}_5^{high,2}}\right) \sum_{j=1}^4c_j \nonumber \\
&\quad \times \int_{\Gamma^5_t} \phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})} \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l}) \nonumber \\
&:= J_N^{low} + J_N^{med} + J_N^{high,1}+J_N^{high,2} \, , \label{prop-ee.8}\end{aligned}$$ where $\vec{\xi_j}_{(5)}=(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,2},\xi_{j,3}) \in \Gamma^5$.
Moreover, we may assume by symmetry that $M_1 \le M_2 \le M_3$ and $M_4 \le M_5 \le M_6$.
*Estimate for $J_N^{low}$.* In the region $\mathcal{M}^{low}_5$, we have that $M_4 \lesssim M_2$. Moreover, thanks to Lemma \[teclemma\], the integral in $J_N^{low}$ is non trivial for $|\xi_1|\sim \cdots \sim |\xi_4|\sim N$, $|\xi_{j,1}|\sim |\xi_{j,2}| \sim |\xi_{j,3}| \sim N$ and $M_3 \sim M_6 \sim N$. Therefore by using , we can bound $|J_N^{low}|$ by $$\begin{aligned}
& \sum_{j=1}^4 \sum_{N^{-\frac12} < M_1\le M_2\ll N\atop}\sum_{M_4\lesssim M_2 \atop M_4 \le M_5 \ll N} N^{2s}M_1M_4 \frac{N}{M_1M_2} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x}\\
&\lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TH^s_x} \prod_{l=1}^3 \|P_{\sim N}u_{j,l}\|_{L^\infty_TH^s_x} \, ,\end{aligned}$$ since $s>1/4$. Thus, we deduce that $$\label{prop-ee.9}
\sum_{N \ge N_0}|J_N^{low}| \lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \, .$$
*Estimate for $J_N^{high,1}$.* From Lemma \[teclemma\], the integral in $J_N^{high,1}$ is non trivial for $|\xi_1|\sim \cdots \sim |\xi_4|\sim N$, $M_3 \sim N$, $N_{max(5)}=\max\{N_{j,1}, N_{j,2}, N_{j,3}\} \gtrsim N$, $M_4\le N^{-1} $ and $ M_5\sim M_6\sim N_{max(5)}$ . Therefore by using , we can bound $|J_N^{high,1}|$ by $$\begin{aligned}
& \sum_{j=1}^4 \sum_{N^{-\frac12} <M_1\le M_2\ll N \atop}\sum_{M_4\le N^{-1} } \sum_{N_{j,l}} \frac{N^{2s+1}M_1M_4}{M_1M_2} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3 \|P_{N_{j,l}}u_{j,l}\|_{L^\infty_TL^2_x}\\
&\lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TH^s_x} \prod_{l=1}^3 \|u_{j,l}\|_{\widetilde{L^\infty_T}H^s_x} \, ,\end{aligned}$$ since $s>1/4$. This leads to $$\label{prop-ee.9b}
\sum_{N \ge N_0}|J_N^{high,1}| \lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \, .$$
*Estimate for $J_N^{high,2}$.* For $1 \le k \le 4$ and $1 \le l \le 3$, let $\tilde{u}_k=\rho_T(u_k)$ and $\tilde{u}_{j,l}=\rho_T(u_{j,l})$ be the extensions of $u_k$ and $u_{j,l}$ to $\mathbb R^2$ defined in . We define $u_{N_k}=P_{N_k}\tilde{u}_k$, $u_{N_{j,l}}=P_{N_{j,l}}\tilde{u}_{j,l}$ and perform nonhomogeneous dyadic decompositions in $N_k$ and $N_{j,l}$.
We first estimate $J_N^{high,2}$ in the resonant case $M_1M_2M_3\sim M_4M_5M_6$. To simplify the notation, we assume that $M_1\le M_2\le M_3$ and $M_4\le M_5\le M_6$. Since we are in $\mathcal{M}_5^{high}$, we have that $M_5,M_6\gtrsim N$ and $M_1,M_2\ll N$, which yields $$M_3\sim N \quad \text{and} \quad M_4\sim \frac{M_1M_2N}{M_5M_6}\ll N \, .$$ This forces $N_{j,1} \sim N$ and it follows from that $$\begin{aligned}
|J_N^{high,2}| &\lesssim \sum_{j=1}^4\sum_{\mathcal{M}^{high}_5}\sum_{N_{j,l}}\frac{N^{2s+1}}{M_1M_2} M_1M_4^{\frac12}N_{j,2}^{\frac14}N_{j,3}^{\frac14} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}\tilde{u}_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3\|u_{N_{j,l}}\|_{L^\infty_TL^2_x} \\
&\lesssim \sum_{j=1}^4 \sum_{N^{-\frac12} \le M_1 \le M_2 \ll N \atop}N^{s+\frac12}\frac{(M_1M_2)^{\frac12}}{M_2} \prod_{k=1\atop k\neq j}^4 \|P_{\sim N}\tilde{u}_k\|_{L^\infty_TL^2_x} \prod_{l=1}^3
\|u_{j,l}\|_{\widetilde{L^\infty_T}H^s_x} \, .\end{aligned}$$ Summing over $N^{-1/2} \le M_1, \ M_2 \ll N$ and $N \ge N_0$ and using the assumption $s>\frac14$, we get $$\label{prop-ee.10}
\sum_{N \ge N_0}|J_N^{high,2}| \lesssim \sum_{j=1}^4 \prod_{k=1\atop k\neq j}^4 \|u_k\|_{Y^s_T} \prod_{l=1}^3 \|u_{j,l}\|_{Y^s_T} \, ,$$ in the resonant case.
Thanks to , we easily estimate $J_N^{high,2}$ in the non resonant case $M_1M_2M_3\not\sim M_4M_5M_6$ by $$\begin{split}
|J_N^{high,2}| &\lesssim \sum_{j=1}^4\sum_{N^{-\frac12} \le M_1\le M_2 \ll N \atop } \sum_{N^{-1} < M_4 \le N \atop N \lesssim M_5 \le M_6 \lesssim N_{max(5)}}\sum_{N_{j,l}}\\ & \quad \quad \times \frac{N^{2s+1}}{M_1M_2} M_1N^{-1} \prod_{k=1\atop k\neq j}^4 \| P_{\sim N} \tilde{u}_{k}\|_{Y^0} \prod_{l=1}^3 \|u_{N_{j,l}}\|_{Y^0} \, .
\end{split}$$ Recalling that $\max\{N_{j,1},N_{j,2},N_{j,3}\} \gtrsim N$, we conclude after summing over $N$ that also holds, for $ s>1/4$, in the non resonant case.
*Estimate for $\alpha J_N^{med}+\beta K_N$*. Using equations (\[eq-u0\])-(\[eq-u1\])-(\[eq-u2\]) and the resonance relation , we can rewrite $N^{2s}\int_0^t\frac{d}{dt}\mathcal{E}_N^5dt$ as $$\begin{aligned}
&N^{2s}\sum_{\mathcal{M}_5^{med}}\sum_{j=1}^4 c_j \int_{\Gamma^5_t}\phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{i\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})} \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l})\\
&+ N^{2s}\sum_{\mathcal{M}_5^{med}}\sum_{j=1}^4 c_j \sum_{m=1\atop m\neq j}^4 c_m \int_{\Gamma^5_t}\phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)}) } \\
&\quad\quad \times \prod_{k=1\atop k\neq j,m}^4 \widehat{u}_k(\xi_k) \, \mathcal{F}_x\partial_x(u_{m,1}u_{m,2}u_{m,3})(\xi_m) \prod_{l=1}^3 \widehat{u}_{j,l}(\xi_{j,l})\\
&+ N^{2s}\sum_{\mathcal{M}_5^{med}}\sum_{j=1}^4 c_j \sum_{m=1}^3 c_{j,m} \int_{\Gamma^5_t}\phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \phi_N^2(\xi_4) \frac{\xi_4\xi_j}{\Omega^3(\vec{\xi_j}_{(3)})\Omega^5(\vec{\xi_j}_{(5)}) }\\
&\quad\quad \times \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1\atop l\neq m}^3 \widehat{u}_{j,l}(\xi_{j,l}) \, \mathcal{F}_x\partial_x(u_{j,m,1}u_{j,m,2}u_{j,m,3})(\xi_{j,m})
\\
&:= K_N^1+K_N^2+K_N^3.\end{aligned}$$ By choosing $\beta=-\alpha$, we have that $$\label{prop-ee.11}
\alpha J_N^{med} + \beta K_N = \beta(K_N^2+K_N^3) \, .$$
For the sake of simplicity, we will only consider the contribution of $K_N^3$ corresponding to a fixed $(j,m) \in \{1,2,3,4\} \times \{1,2,3\}$, since the other contributions on the right-hand side of can be treated similarly.
Thus, for $(j,m)$ fixed, we need to bound $$\tilde{K}_N:= iN^{2s}\sum_{\mathcal{M}_5^{med}} \int_{\Gamma^7_t} \sigma(\vec{\xi_j}_{(5)}) \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1\atop l\neq m}^3 \widehat{u}_{j,l}(\xi_{j,l}) \prod_{n=1}^3\widehat{u}_{j,m,n}(\xi_{j,m,n}) \, ,$$ with the conventions $\displaystyle{\xi_j=-\sum_{k=1 \atop k \neq j}^4\xi_k=\sum_{l=1}^3\xi_{j,l}}$ and $\displaystyle{\xi_{j,m}=\sum_{n=1}^3\xi_{j,m,n}}$ and where $$\sigma(\vec{\xi_j}_{(5)}) = \phi_{M_1,...,M_6}(\vec{\xi_j}_{(5)}) \, \phi_N^2(\xi_4) \, \frac{\xi_4 \, \xi_j \, \xi_{j,m}}{\Omega^3(\vec{\xi_j}_{(3)}) \, \Omega^5(\vec{\xi_j}_{(5)}) } \, .$$ Now, we define $\vec{\xi}_{j,m_{(7)}} \in \Gamma^7$ as follows: $$\begin{aligned}
\vec{\xi}_{j,1_{(7)}}&=\big(\vec{\xi_j}_{(3)},\xi_{j,2}, \xi_{j,3},\xi_{j,1,1},\xi_{j,1,2},\xi_{j,1,3}\big) \, ,\\ \vec{\xi}_{j,2_{(7)}}&=\big(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,3},\xi_{j,2,1},\xi_{j,2,2},\xi_{j,2,3}\big) \, ,
\\ \vec{\xi}_{j,3_{(7)}}&=\big(\vec{\xi_j}_{(3)},\xi_{j,1},\xi_{j,2},\xi_{j,3,1},\xi_{j,3,2},\xi_{j,3,3}\big) \, .\end{aligned}$$ Observe from Lemma \[teclemma\] that the integrand is non trivial for $$|\xi_1|\sim\cdots \sim |\xi_4|\sim |\xi_{j,1}| \sim |\xi_{j,2}| \sim |\xi_{j,3}| \sim |\xi_{j,m,1}+\xi_{j,m,2}+\xi_{j,m,3}| \sim N \, .$$ Hence, $$\big|\sigma(\vec{\xi_j}_{(5)})\big| \sim \frac{N^3}{M_1M_2N\cdot M_4M_5N}\sim \frac{N}{M_1M_2M_4M_5} \, .$$
Now we decompose $\tilde{K}_N$ as $$\begin{aligned}
\tilde{K}_N &= iN^{2s} \left(\sum_{\mathcal{M}_7^{low}} + \sum_{\mathcal{M}_7^{high}}\right) \nonumber \\
&\quad \times\int_{\Gamma^7_t} \widetilde{\sigma}(\vec{\xi}_{{j,m}_{(7)}}) \prod_{k=1\atop k\neq j}^4 \widehat{u}_k(\xi_k) \prod_{l=1\atop l\neq m}^3 \widehat{u}_{j,l}(\xi_{j,l}) \prod_{n=1}^3\widehat{u}_{j,m,n}(\xi_{j,m,n}) \nonumber\\
&=\tilde{K}_N^{low} + \tilde{K}_N^{high} \, , \label{prop-ee.12}\end{aligned}$$ where $$\widetilde{\sigma}(\vec{\xi}_{{j,m}_{(7)}}) = \phi_{M_1,...,M_9}(\vec{\xi}_{{j,m}_{(7)}}) \, \sigma(\vec{\xi_j}_{(5)}) \, .$$ Moreover, we may assume without loss of generality that $M_1 \le M_2 \le M_3$, $M_4 \le M_5 \le M_6$ and $M_7 \le M_8 \le M_9$. This forces $M_2 \ll M_4$ and $ M_3\sim M_6\sim N $ since $(M_1,\cdots,M_6) \in \mathcal{M}_5^{med}$.
*Estimate for $\tilde{K}_N^{low}$.* In the integration domain of $\tilde{K}_N^{low}$ we have from Lemma \[teclemma\] that $|\xi_{j,m,1}|\sim |\xi_{j,m,2}|\sim |\xi_{j,m,3}|\sim N$. Then, applying on the sum over $ (M_7,M_8,M_9) $ we get $$\begin{aligned}
|\tilde{K}_N^{low}| &\lesssim \sum_{N^{-1/2}<M_1 \le M_2\ll N\atop M_2 \ll M_4 \le M_5\ll N} \frac{N^{2s+1}}{M_1M_2M_4M_5} M_1M_2^{\frac12}M_4M_5^{\frac12} \\ & \quad \quad \times \prod_{k=1 \atop k \neq j}^4\|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1 \atop l\neq m}^3 \|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x} \prod_{n=1}^3\|P_{\sim N}u_{j,m,n}\|_{L^\infty_TL^2_x} \, .\end{aligned}$$ This implies that $$\label{prop-ee.13}
\sum_{N \ge N_0}|\tilde{K}_N^{low}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{L^\infty_TH^s_x} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{L^\infty_TH^s_x} \prod_{n=1}^3\|u_{j,m,n}\|_{L^\infty_TH^s_x} \, ,$$ since $2s+\frac32<8s$.
*Estimate for $\tilde{K}_N^{high}$.* We first estimate $\tilde{K}_N^{high}$ in the resonant case $M_4M_5M_6\sim M_7M_8M_9$. Since we are in $\mathcal{M}_7^{high}$, we have that $M_9\ge M_8\gtrsim N$ and $M_4\le M_5\ll N$. It follows that $M_6\sim N$ and $$M_7\sim \frac{M_4M_5N}{M_8M_9}\ll N \, .$$ This forces $N_{j,m,1} \sim N$ and we deduce from that $$\begin{aligned}
|\tilde{K}_N^{high}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2 \ll N \atop M_2 \ll M_4 \le M_5 \ll N}\sum_{M_9\ge M_8\gtrsim N} \sum_{N_{j,m,n}, N_{j,m,1} \sim N\atop}\frac{N^{2s+\frac12}}{M_2} N_{j,m,2}^{\frac14}N_{j,m,3}^{\frac14}\\
&\quad \times \prod_{k=1 \atop k \neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1 \atop l\neq m}^3\|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x} \prod_{n=1}^3\|P_{N_{j,m,n}} u_{j,m,n}\|_{L^{\infty}_TL^2_x} \, ,\end{aligned}$$ which yields summing over $N \ge N_0$ and using the assumption $s>\frac14$ that $$\label{prop-ee.14}
\sum_{N \ge N_0}|\tilde{K}_N^{high}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{Y^s_T} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{Y^s_T} \prod_{n=1}^3\|u_{j,m,n}\|_{Y^s_T} \, .$$ Now, in the non resonant case, we separate the contributions of the regions $M_7\le N^{-1}$ and $M_7>N^{-1}$. In the first region, applying on the sum over $(M_8,M_9)$, we get $$\begin{aligned}
|\tilde{K}_N^{high}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2 \ll N \atop M_2 \ll M_4 \le M_5 \ll N} \sum_{M_7 \le N^{-1}}\sum_{N_{j,m,n}}\frac{N^{2s+1}M_7}{M_2M_5} \\
&\quad \times \prod_{k=1 \atop k \neq j}^4 \|P_{\sim N}u_k\|_{L^\infty_TL^2_x} \prod_{l=1 \atop l\neq m}^3\|P_{\sim N}u_{j,l}\|_{L^\infty_TL^2_x} \prod_{n=1}^3\|P_{N_{j,m,n}}u_{j,m,n}\|_{L^{\infty}_TL^2_x} \, .\end{aligned}$$ Observing that $\max\{N_{j,m,1},N_{j,m,2},N_{j,m,3} \} \gtrsim N$, we conclude after summing over $N \ge N_0$ that $$\label{prop-ee.13b}
\sum_{N \ge N_0}|\tilde{K}_N^{high}| \lesssim \prod_{k=1 \atop k \neq j}^4\|u_k\|_{L^\infty_TH^s_x} \prod_{l=1 \atop l\neq m}^3 \|u_{j,l}\|_{L^\infty_TH^s_x} \prod_{n=1}^3\|u_{j,m,n}\|_{L^\infty_TH^s_x} \, ,$$ since $2s+1<6s$.
Finally, we treat the contribution of the region $M_7>N^{-1}$. For $1 \le j \le 4$, $1 \le l \le 3$ and $1 \le n \le 3$, let $\tilde{u}_j=\rho_T(u_j)$, $\tilde{u}_{j,l}=\rho_T(u_{j,l})$ and $\tilde{u}_{j,l,n}=\rho_T(u_{j,l,n})$ be the extensions of $u_j$, $u_{j,l}$ and $u_{j,l,n}$ to $\mathbb R^2$ defined in . We define $u_{N_k}=P_{N_k}\tilde{u}_k$, $u_{N_{j,l}}=P_{N_{j,l}}\tilde{u}_{j,l}$, $u_{N_{j,l,n}}=P_{N_{j,l,n}}\tilde{u}_{j,l,n}$ and perform nonhomogeneous dyadic decompositions in $N_k$, $N_{j,l}$ and $N_{j,m,n}$. Thanks to Proposition \[L27lin\] we easily estimate $\tilde{K}_N^{high}$ on this region by $$\begin{split}
|\tilde{K}_N^{high}| &\lesssim \sum_{N^{-\frac12}<M_1 \le M_2 \ll N \atop M_2 \ll M_4 \le M_5 \ll N} \sum_{N^{-1} \le M_7 \le N}
\sum_{N\lesssim M_8\le M_9\lesssim N_{max(7)}}\sum_{N_k \sim N}\sum_{N_{j,l} \sim N} \sum_{N_{j,m,n}} \frac{N^{2s+1}}{M_2M_5} N^{-1}\\ & \quad \times \prod_{k=1 \atop k \neq j}^4\|u_{N_k}\|_{Y^0} \prod_{l=1 \atop l\neq m}^3 \|u_{N_{j,l}}\|_{Y^0} \prod_{n=1}^3\|u_{N_{j,m,n}}\|_{Y^0} \, ,
\end{split}$$ where $N_{max(7)}=\max\{N_{j,m,1},N_{j,m,2},N_{j,m,3} \} $. Therefore, we deduce from that also holds, for $ s>1/4 $, in this region.
Finally, we conclude the proof of Proposition \[prop-ee\] by gathering –.
Estimates of the $X^{s-1,1}_T$ norm
-----------------------------------
In this subsection, we explain how to control the $X^{s-1,1}_T$ norm that we used in the energy estimates.
\[trilin\] Assume that $0<T \le 1$ and $s \ge 0$. Let $u\in L^\infty(0,T;H^s(\R))\cap L^4(0,T;L^\infty(\R)) $ be a solution to . Then, $$\label{trilin.1}
\|u\|_{X^{s-1,1}_T} \lesssim \|u_0\|_{H^s}+\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3\|u_j\|_{L^4_TL^{\infty}_x}\|J^s_xu_i\|_{L^{\infty}_TL^2_x} \, .$$
By using the Duhamel formula associated to , the standard linear estimates in Bourgain’s spaces and the fractional Leibniz rule (*c.f.* Theorem A.12 in [@KPV2]), we have that $$\label{be}
\begin{split}
\|u\|_{X^{s-1,1}_T} &\lesssim \|u_0\|_{H^{s-1}}+ \|\partial_x(u_1u_2u_3)\|_{X^{s-1,0}_T} \\
& \lesssim \|u_0\|_{H^{s}}+\|J_x^s(u_1u_2u_3)\|_{L^2_{x,T}} \\ & \lesssim \|u_0\|_{H^{s}}+\sum_{i=1}^3 \prod_{j=1 \atop j\neq i}^3\|u_j\|_{L^4_TL^{\infty}_x}\|J^s_xu_i\|_{L^{\infty}_TL^2_x} \, .
\end{split}$$
It remains to derive a suitable Strichartz estimate for the solutions of .
\[se\] Assume that $0<T \le 1$ and $s > \frac14$. Let $u\in L^\infty(0,T; H^s(\R))$ be a solution to . Then, $$\label{se.1}
\|u\|_{L^4_TL^{\infty}_x} \lesssim \|u\|_{L^{\infty}_TH^s_x}+\prod_{j=1}^3\|u_j\|_{L^{\infty}_TH^s_x} \, .$$
Since $u$ is a solution to , we use estimate with $F=\partial_x(u_1u_2u_3)$ and $\delta=2$ and the Sobolev embedding to obtain $$\begin{split}
\|u\|_{L^4_TL^{\infty}_x} &\lesssim \|u\|_{L^{\infty}_TH^s_x}+\|u_1u_2u_3\|_{L^4_TL^1_x} \lesssim \|u\|_{L^{\infty}_TH^s_x}+\prod_{j=1}^3\|u_j\|_{L^{\infty}_TH^s_x} \, .
\end{split}$$
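The last two bounds can be unpacked as follows: by Hölder's inequality in space and time (recall $0<T\le 1$) together with the one-dimensional Sobolev embedding $H^{\frac16}(\mathbb R)\hookrightarrow L^3(\mathbb R)$, $$\|u_1u_2u_3\|_{L^4_TL^1_x} \le T^{\frac14}\prod_{j=1}^3\|u_j\|_{L^\infty_TL^3_x} \lesssim \prod_{j=1}^3\|u_j\|_{L^{\infty}_TH^s_x} \, ,$$ since $s>\frac14>\frac16$.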
Proof of Theorem \[maintheo\] {#Secmaintheo}
=============================
Fix $s>\frac14$. First it is worth noticing that we can always assume that we deal with data that have small $ H^s $-norm. Indeed, if $u$ is a solution to the IVP on the time interval $[0,T]$ then, for every $0<\lambda<\infty $, $u_{\lambda}(x,t)=\lambda u(\lambda x,\lambda^3t)$ is also a solution to the equation in on the time interval $[0,\lambda^{-3}T]$ with initial data $u_{0,\lambda}=\lambda
u_{0}(\lambda \cdot)$. For $\varepsilon>0$, let us denote by $\mathcal{B}^s(\varepsilon)$ the ball of $H^s(\mathbb R)$ centered at the origin with radius $\varepsilon$. Since $$\|u_{\lambda}(\cdot,0)\|_{H^s} \lesssim\lambda^{\frac12}(1+\lambda^s)\|u_0\|_{H^s},$$ we see that we can force $u_{0,\lambda}$ to belong to $\mathcal{B}^s(\varepsilon)$ by choosing $\lambda \sim \min( \varepsilon^2\|u_0\|_{H^s}^{-2},1)$. Therefore the existence and uniqueness of a solution of on the time interval $[0,1]$ for small $H^s$-initial data will ensure the existence of a unique solution $u$ to for arbitrarily large $H^s$-initial data on the time interval $T\sim \lambda^3 \sim \min( \|u_0\|_{H^s}^{-6},1)$.
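The dilation bound used above can be checked directly on the Fourier side: since $\widehat{u_{0,\lambda}}(\xi)=\widehat{u_0}(\xi/\lambda)$, the change of variables $\xi=\lambda\eta$ gives $$\|u_{0,\lambda}\|_{H^s}^2 = \lambda\int_{\mathbb R}(1+\lambda^2\eta^2)^s\,|\widehat{u_0}(\eta)|^2\,d\eta \lesssim \lambda\,(1+\lambda^{2s})\,\|u_0\|_{H^s}^2 \, ,$$ which is precisely the estimate $\|u_{\lambda}(\cdot,0)\|_{H^s} \lesssim \lambda^{\frac12}(1+\lambda^s)\|u_0\|_{H^s}$ stated above.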
Existence
---------
First, we begin by deriving *a priori* estimates on smooth solutions associated to initial data $u_0\in H^{\infty}(\mathbb R)$ that is small in $H^s(\mathbb R)$. In other words, we assume that $u_0 \in \mathcal{B}^s(\varepsilon)$. It is known from the classical well-posedness theory that such an initial datum gives rise to a global solution $u \in C(\mathbb R; H^{\infty}(\mathbb R))$ to the Cauchy problem .
Then, we deduce by gathering estimates , , , and that $$\|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2 \lesssim \|u_0\|_{H^s}^2 \big(1+\|u_0\|_{H^s}^2 \big)^2+\|u\|_{\widetilde{L^{\infty}_T}H^s_x}^4 \big(1+\|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2\big)^4 \, ,$$ for any $0<T \le 1$. Moreover, observe that $\lim_{T \to 0} \|u\|_{\widetilde{L^{\infty}_T}H^s_x}=c\|u_0\|_{H^s}$. Therefore, it follows by using a continuity argument that there exist $\epsilon_0>0$ and $C_0>0$ such that $$\label{maintheo.2}
\|u\|_{\widetilde{L^{\infty}_T}H^s_x} \le C_0\|u_0\|_{H^s} \quad \text{provided} \quad \|u_0\|_{H^s} \le \epsilon \le \epsilon_0 \, .$$
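The continuity argument can be made concrete by a barrier computation; in the sketch below all implicit constants are set to $1$ (an arbitrary normalization, so the numbers are illustrative only). Writing $X=\|u\|_{\widetilde{L^{\infty}_T}H^s_x}$ and $a=\|u_0\|_{H^s}^2$, the a priori inequality fails at the barrier $X=2\sqrt a$ once $a$ is small, while it is consistent with the initial size $X\sim\sqrt a$; by continuity of $T\mapsto X(T)$, the barrier can never be crossed.

```python
import numpy as np

def rhs(X, a):
    """Right-hand side of the a priori inequality, constants set to 1."""
    return a * (1 + a) ** 2 + X ** 4 * (1 + X ** 2) ** 4

for a in [1e-2, 1e-4, 1e-6]:
    barrier = 2 * np.sqrt(a)
    # the inequality X^2 <= rhs(X, a) is violated at the barrier ...
    assert barrier ** 2 > rhs(barrier, a)
    # ... while it is consistent with the initial size X ~ sqrt(a)
    assert np.sqrt(a) ** 2 <= rhs(np.sqrt(a), a)
print("barrier argument verified for small data")
```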
Now, let $u_1$ and $u_2$ be two solutions of the equation in in $\widetilde{L^{\infty}_T}H^s_x$ for some $0<T\le 1$ emanating respectively from $u_1(\cdot,0)=\varphi_1$ and $u_2(\cdot,0)=\varphi_2$. We also assume that $$\label{maintheo.3}
\|u_i\|_{\widetilde{L^{\infty}_T}H^s_x} \le C_0 \epsilon_0, \quad \text{for} \ i=1,2 \, .$$
Let us define $w=u_1-u_2$ and $z=u_1+u_2$. Then $(w,z)$ solves $$\label{diffmKdV}
\left\{ \begin{array}{l}
\partial_tw+\partial_x^3w+ \frac {3\kappa}4\partial_x(z^2w)+\frac {\kappa}4 \partial_x(w^3)=0 \, , \\
\partial_tz+\partial_x^3z+\frac {\kappa}4\partial_x(z^3) + \frac{3\kappa}4\partial_x(zw^2) =0\, .
\end{array}\right.$$ Therefore, it follows from , , , and that $$\label{maintheo.4}
\|u_1-u_2\|_{L^{\infty}_TH^s_x} \le \|u_1-u_2\|_{\widetilde{L^{\infty}_T}H^s_x} \lesssim \|\varphi_1-\varphi_2\|_{H^s}$$ provided $u_1$ and $u_2$ satisfy with $0<\epsilon <\epsilon_1$, for some $0 < \epsilon_1 \le \epsilon_0$.
We are going to apply to construct our solutions. Let $u_0 \in H^s$ with $s>1/4$ satisfy $\|u_0\|_{H^s}\le \varepsilon_1$. For any dyadic integer $N\ge 1$, set $u_{0,N}=P_{\le N} u_0$. Since $u_{0,N} \in H^{\infty}(\mathbb R)$, the classical theory provides a solution $u_N$ of satisfying $$u_{N} \in C(\mathbb R : H^{\infty}(\mathbb R)) \quad \text{and} \quad u_{N}(\cdot,0)=u_{0,N} \, .$$ We observe that $\|u_{0,N}\|_{H^s} \le \|u_0\|_{H^s} \le \epsilon_1$. Thus, it follows from - that for any couple of dyadic integers $(N,M)$ with $M<N$, $$\|u_{N}-u_{M}\|_{\widetilde{L^{\infty}_1} H^s_x} \lesssim \|(P_{\le N}-P_{\le M})u_{0}\|_{H^s}
\underset{M \to +\infty}{\longrightarrow} 0 \, .$$ Therefore $\{u_{N}\}$ is a Cauchy sequence in $C([0,1]; H^s(\mathbb R))$ which converges to a solution $u \in C([0,1] ; H^s(\mathbb R))$ of . Moreover, it is clear from Propositions \[trilin\] and \[se\] that $u$ belongs to the class .
Uniqueness
----------
Next, we state our uniqueness result.
\[uniqueness\] Let $u_1$ and $u_2$ be two solutions of the equation in in $L^{\infty}_TH^s_x$ for some $T>0$ and satisfying $u_1(\cdot,0)=u_2(\cdot,0)=\varphi$. Then $u_1=u_2$ on $[-T,T]$.
Let us define $K=\max\{\|u_1\|_{L^{\infty}_TH^s_x},\|u_2\|_{L^{\infty}_TH^s_x}\}$. Let $s'$ be a real number satisfying $\frac14<s'<s$. We get from the Cauchy-Schwarz inequality that $$\label{uniqueness.1}
\|u_i\|_{\widetilde{L^{\infty}_T}H^{s'}_x}=\Big(\sum_{N}\|P_Nu_i\|_{L^{\infty}_TH^{s'}_x}^2 \Big)^{\frac12} \lesssim \|u_i\|_{L^{\infty}_TH^s_x}, \quad \text{for} \quad i=1,2 \, .$$
As explained above, we use the scaling property of and define $u_{i,\lambda}(x,t)=\lambda u_i(\lambda x,\lambda^3 t)$. Then, $u_{i,\lambda}$ are solutions to the equation in on the time interval $[-S,S]$ with $S=\lambda^{-3} T$ and with the same initial data $\varphi_{\lambda}=\lambda\varphi(\lambda\cdot)$. Thus, we deduce from that $$\label{uniqueness.2}
\|u_{i,\lambda}\|_{\widetilde{L^{\infty}_S}H^{s'}_x} \lesssim \lambda^{\frac12}(1+\lambda^{s'})\|u_i\|_{\widetilde{L^{\infty}_T}H^{s'}_x} \lesssim \lambda^{\frac12}(1+\lambda^{s'})K, \quad \text{for} \quad i=1,2 \, .$$ Thus, we can always choose $\lambda>0$ small enough such that $\|u_{i,\lambda}\|_{\widetilde{L^{\infty}_S}H^{s'}_x} \le C_0 \epsilon$ with $0<\epsilon \le \epsilon_1$. Therefore, it follows from that $u_{1,\lambda} = u_{2,\lambda}$ on $[0,S]$. This concludes the proof of Lemma \[uniqueness\] by reverting the change of variable.
Finally, the Lipschitz bound on the flow is a consequence of estimate .
*A priori* estimates in $H^s$ for $s>1/10$ {#Secsecondtheo}
========================================
Let $u$ be a smooth solution of defined in the time interval $[0,T]$ with $0<T\le 1$. Fix $\frac1{10}<s<\frac14$. The aim of this section is to derive estimates for $u$ in the norms $\|\cdot\|_{\widetilde{L^{\infty}_T}H^s_x}$, $\|\cdot\|_{L^4_TL^{\infty}_x}$ and $\|\cdot\|_{X^{s-1,1}_T}$.
Estimates for the $X^{s-1,1}_T$ and $L^4_TL^{\infty}_x$ norms
-------------------------------
\[apriori.triline\] Assume that $0<T \le 1$ and $s \ge 0$. Let $u$ be a solution to defined in the time interval $[0,T]$. Then, $$\label{apriori.triline.1}
\|u\|_{X^{s-1,1}_T} \lesssim \|u_0\|_{H^s}+\|u\|_{L^4_TL^{\infty}_x}^2\|u\|_{L^{\infty}_TH^s_x} \, .$$
The proof is exactly the same as the one of Proposition \[trilin\].
\[apriori.se\] Assume that $0<T \le 1$ and $s > \frac1{10}$. Let $u$ be a solution to defined in the time interval $[0,T]$. Then, $$\label{apriori.se.1}
\|u\|_{L^4_TL^{\infty}_x} \lesssim \|u\|_{L^{\infty}_TH^s_x}+\|u\|_{L^4_TL^{\infty}_x}\|u\|_{L^{\infty}_TH^s_x}^2 \, .$$
Since $u$ is a solution to we use estimate with $F=\partial_x(u^3)$. Then, it follows from the Sobolev inequality and the fractional Leibniz rule that $$\label{apriori.se0}
\begin{split}
\|J^{-\frac{3\delta+1}4}_xF\|_{L^4_TL^2_x} &\lesssim \|J^{-\frac{3\delta+1}4+\frac1p-\frac12}_x\partial_x(u^3)\|_{L^4_TL^p_x}
\\ & \lesssim \|u\|_{L^4_TL^{\infty}_x}\|u\|_{L^{\infty}_TL^{q_1}_x}\|D^{\kappa}_xu\|_{L^{\infty}_TL^2_x} \, ,
\end{split}$$ for all $1 \le p \le 2$ and $2 \le q_1 \le \infty$ satisfying $\frac1{q_1}+\frac12=\frac1p$ and $\kappa=-\frac{3\delta}4+\frac14+\frac1p$. Thus, the Sobolev inequality yields $$\label{apriori.se2}
\|J^{-\frac{3\delta+1}4}_xF\|_{L^4_TL^2_x} \lesssim \|u\|_{L^4_TL^{\infty}_x}\|u\|_{L^{\infty}_TH^{\kappa}_x}^2 \, ,$$ if we choose $\kappa$ satisfying $\kappa=\frac12-\frac1{q_1}=1-\frac1p$. This implies that $$\kappa=-\frac{3\delta}4+\frac14+\frac1p=-\frac{3\delta}4+\frac54-\kappa \quad \Rightarrow \quad \kappa=-\frac{3\delta}8+\frac58 \, .$$ Then, we choose $\delta$ such that $\frac{\delta-1}4=-\frac{3\delta}8+\frac58$, which gives $$\delta=\frac75, \quad \kappa=\frac1{10}, \quad p=\frac{10}{9} \quad \text{and} \quad q_1=\frac52 \, .$$ Therefore, we conclude estimate by using with $\delta=\frac75$ and arguing as in –.
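The exponent bookkeeping in the last step can be double-checked with exact rational arithmetic; the following script is purely illustrative and only re-derives the numbers stated in the proof.

```python
from fractions import Fraction as F

# delta = 7/5 solves (delta-1)/4 = -3*delta/8 + 5/8, and the remaining
# parameters follow from kappa = 1 - 1/p = 1/2 - 1/q1.
delta = F(7, 5)
assert (delta - 1) / 4 == -3 * delta / 8 + F(5, 8)

kappa = -3 * delta / 8 + F(5, 8)
assert kappa == F(1, 10)

p = 1 / (1 - kappa)            # from kappa = 1 - 1/p
q1 = 1 / (F(1, 2) - kappa)     # from kappa = 1/2 - 1/q1
assert p == F(10, 9) and q1 == F(5, 2)

# consistency with the original definition of kappa and the Hölder relation
assert kappa == -3 * delta / 4 + F(1, 4) + 1 / p
assert 1 / q1 + F(1, 2) == 1 / p
```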
Integration by parts
--------------------
In this section, we use the notation of Section \[Secmultest\]. We also denote $\displaystyle{m=\min_{1 \le i \neq j \le 3} |\xi_i+\xi_j|}$ and $$\label{m2}
A_j=\big\{(\xi_1,\xi_2,\xi_3) \in \mathbb R^3 \, : \, |\sum_{k=1\atop k \neq j}^3\xi_k|=m \big\}, \quad \text{for} \quad j=1,2,3 \, .$$ Then, it is clear from the definition that $$\label{m3}
\sum_{j=1}^3 \chi_{A_j}(\xi_1,\xi_2,\xi_3)=1, \quad \textit{a.e.} \ (\xi_1,\xi_2,\xi_3) \in \mathbb R^3 \, .$$
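This almost-everywhere partition can be illustrated numerically (this is not part of the proof; the sample size and frequency range are arbitrary choices): for generic $(\xi_1,\xi_2,\xi_3)$ exactly one pair sum attains $m$, and the ties excluded by the "a.e." form a null set that floating-point samples essentially never hit.

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.uniform(-5.0, 5.0, size=(10_000, 3))

# columns: |xi_2 + xi_3|, |xi_1 + xi_3|, |xi_1 + xi_2|  <->  A_1, A_2, A_3
pair_sums = np.abs(np.stack([xi[:, 1] + xi[:, 2],
                             xi[:, 0] + xi[:, 2],
                             xi[:, 0] + xi[:, 1]], axis=1))
m = pair_sums.min(axis=1, keepdims=True)       # m(xi_1, xi_2, xi_3)
indicators = pair_sums == m                    # chi_{A_j}(xi_1, xi_2, xi_3)

assert np.all(indicators.sum(axis=1) >= 1)     # the minimum is always attained
assert np.mean(indicators.sum(axis=1) == 1) > 0.999   # ties are negligible
```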
For $\eta\in L^\infty$, let us define the trilinear pseudo-product operator $\widetilde{\Pi}^{(j)}_{\eta,M}$ in Fourier variables by $$\label{def.pseudoproduct.ee.1}
\mathcal{F}_x\big(\widetilde{\Pi}^{(j)}_{\eta,M}(u_1,u_2,u_3) \big)(\xi)=\int_{\Gamma^2(\xi)}(\chi_{A_j}\eta)(\xi_1,\xi_2,\xi_3)\phi_{M}(\sum_{k=1\atop k \neq j}^3\xi_k)\prod_{k=1}^3\widehat{u}_k(\xi_k) \, .$$ Moreover, if the functions $u_j$ are real-valued, the Plancherel identity yields $$\label{def.pseudoproduct.ee.2}
\int_{\mathbb R} \widetilde{\Pi}^{(j)}_{\eta,M}(u_1,u_2,u_3) \, u_4 \, dx=\int_{\Gamma^3}(\chi_{A_j}\eta)(\xi_1,\xi_2,\xi_3)\phi_{M}(\sum_{k=1\atop k \neq j}^3\xi_k) \prod_{k=1}^4 \widehat{u}_k(\xi_k) \, .$$
Next, we derive a technical lemma involving the pseudo-products which will be useful in the derivation of the energy estimates.
\[technical.pseudoproduct\] Let $N$ and $M$ be two homogeneous dyadic numbers satisfying $N \gg 1$. Then, for $M \ll N$, it holds $$\label{technical.pseudoproduct.2}
\int_{\mathbb R} P_N \widetilde{\Pi}^{(3)}_{1,M}(f_1,f_2,g) \, P_N\partial_x g \, dx = M\sum_{N_3 \sim N}\int_{\mathbb R} \widetilde{\Pi}_{\eta_3,M}^{(3)}(f_1,f_2,P_{N_3}g) \, P_Ng \, dx ,$$ for any real-valued functions $f_1, \, f_2, \, g \in L^2(\mathbb R)$ and where $\eta_3$ is a function of $(\xi_1,\xi_2,\xi_3)$ whose $L^{\infty}$-norm is uniformly bounded in $N$ and $M$.
Let us denote by $T_{M,N}(f_1,f_2,g,g)$ the left-hand side of . From Plancherel’s identity we have $$\begin{split}
&T_{M,N}(f_1,f_2,g,g) \\ & \quad=\int_{\mathbb R^3} \chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\xi\phi_N(\xi)^2\widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{g}(\xi_3)\overline{\widehat{g}(\xi)}d\tilde{\xi} \, ,
\end{split}$$ where $\xi=\xi_1+\xi_2+\xi_3$ and $d\tilde{\xi}=d\xi_1d\xi_2d\xi_3$. We use that $\xi=\xi_1+\xi_2+\xi_3$ to decompose $T_{M,N}(f_1,f_2,g,g)$ as follows. $$\label{technical.pseudoproduct.3}
\begin{split}
T_{M,N}(f_1,f_2,g,g) &=M\sum_{\frac{N}2\le N_3 \le 2N}\int_{\mathbb R} \widetilde{\Pi}_{\tilde{\eta}_1,M}^{(3)}(f_1,f_2,P_{N_3}g)P_{N}g\,dx \\ & \quad +M\sum_{\frac{N}2\le N_3 \le 2N}\int_{\mathbb R} \widetilde{\Pi}_{\tilde{\eta}_2,M}^{(3)}(f_1,f_2,P_{N_3}g)P_{N}g\,dx\\ & \quad +\widetilde{T}_{M,N}(f_1,f_2,g,g)\, ,
\end{split}$$ where $$\tilde{\eta}_1(\xi_1,\xi_2,\xi_3)=\phi_N(\xi)\frac{\xi_1+\xi_2}M\chi_{\supp \phi_M}(\xi_1+\xi_2) \, ,$$ $$\tilde{\eta}_2(\xi_1,\xi_2,\xi_3)=\frac{\phi_N(\xi)-\phi_N(\xi_3)}M\xi_3\chi_{\supp \phi_M}(\xi_1+\xi_2) \, ,$$ and $$\begin{split}
\widetilde{T}_{M,N}(f_1,f_2,g,g)=\int_{\mathbb R^3} \chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\xi_3\widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{g_N}(\xi_3)\overline{\widehat{g_N}(\xi)}d\tilde{\xi}\,
\end{split}$$ with the notation $g_N=P_Ng$.
First, observe from the mean value theorem and the frequency localization that $\tilde{\eta}_1$ and $\tilde{\eta}_2$ are uniformly bounded in $M$ and $N$.
Next, we deal with $\widetilde{T}_{M,N}(f_1,f_2,g,g)$. By using that $\xi_3=\xi-(\xi_1+\xi_2)$ observe that $$\begin{split}
\widetilde{T}_{M,N}(f_1,f_2,g,g)&=-\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\widehat{g_N}(\xi_3)\overline{\widehat{g_N}(\xi)}d\tilde{\xi}\\ & \quad +S_{M,N}(f_1,f_2,g,g)
\end{split}$$ with $$S_{M,N}(f_1,f_2,g,g)=\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\widehat{g_{N}}(\xi_3)\xi\overline{\widehat{g_N}(\xi)}d\tilde{\xi} \, .$$ Since $g$ is real-valued, we have $\overline{\widehat{g_N}(\xi)}=\widehat{g_N}(-\xi)$, so that $$S_{M,N}(f_1,f_2,g,g)=\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,\xi_3)\phi_M(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\overline{\widehat{g_{N}}(-\xi_3)}\xi\widehat{g_N}(-\xi)d\tilde{\xi} \, .$$ We change variable $\hat{\xi_3}=-\xi=-(\xi_1+\xi_2+\xi_3)$, so that $-\xi_3=\xi_1+\xi_2+\hat{\xi}_3$. Thus, $S_{M,N}(f_1,f_2,g,g)$ can be rewritten as $$-\int_{\mathbb R^3}\chi_{A_3}(\xi_1,\xi_2,-\xi_1-\xi_2-\hat{\xi}_3)\phi_M(\xi_1+\xi_2)\widehat{f_1}(\xi_1)\widehat{f_2}(\xi_2)\hat{\xi}_3\widehat{g_N}(\hat{\xi}_3)\overline{\widehat{g_{N}}(\xi_1+\xi_2+\hat{\xi}_3)}d\hat{\xi} \, ,$$ where $d\hat{\xi}=d\xi_1d\xi_2d\hat{\xi}_3$. Now, observe that $|\xi_1+(-\xi_1-\xi_2-\hat{\xi}_3)|=|\xi_2+\hat{\xi}_3|$ and $|\xi_2+(-\xi_1-\xi_2-\hat{\xi}_3)|=|\xi_1+\hat{\xi}_3|$. Thus $\chi_{A_3}(\xi_1,\xi_2,-\xi_1-\xi_2-\hat{\xi}_3)=\chi_{A_3}(\xi_1,\xi_2,\hat{\xi}_3)$ and we obtain $$S_{M,N}(f_1,f_2,g,g)=-\widetilde{T}_{M,N}(f_1,f_2,g,g) \, ,$$ so that $$\label{technical.pseudoproduct.4}
\widetilde{T}_{M,N}(f_1,f_2,g,g)= M\int_{\mathbb R} \widetilde{\Pi}_{\eta_2,M}^{(3)}(f_1,f_2,P_{N}g)P_Ng\,dx$$ where $$\eta_2(\xi_1,\xi_2,\xi_3)=-\frac12\frac{\xi_1+\xi_2}M\chi_{\supp \phi_M}(\xi_1+\xi_2)$$ is also uniformly bounded in $M$ and $N$.
Finally, we define $\eta_1=\tilde{\eta}_1+\tilde{\eta}_2$ and $\eta_3=\eta_1+\eta_2$. Therefore the proof of follows gathering and .
Finally, we state an $L^2$-trilinear estimate involving the $X^{-1,1}$-norm, whose proof is similar to that of Proposition \[L2trilin\].
\[apriori.L2trilin\] Assume that $0<T \le 1$, $\eta$ is a bounded function and $u_i$ are real-valued functions in $Y^0=X^{-1,1} \cap L^{\infty}_tL^2_x$ with time support in $[0,2]$ and spatial Fourier support in $I_{N_i}$ for $i=1,\cdots 4$. Here, $N_i$ denote nonhomogeneous dyadic numbers. Assume also that $N_{max}\gg 1$, and $m=\min_{1 \le i \neq j \le 3}|\xi_i+\xi_j| \sim M \ge 1$. Then $$\label{apriori.L2trilin.2}
\Big| \int_{\mathbb R \times [0,T]}\widetilde{\Pi}^{(3)}_{\eta,M}(u_1,u_2,u_3) \, u_4 \, dxdt \Big| \lesssim M^{-1} \prod_{i=1}^4\|u_i\|_{Y^0} \, .$$ Moreover, the implicit constant in estimate only depends on the $L^{\infty}$-norm of the function $\eta$.
Energy estimates {#energy-estimates}
----------------
The aim of this subsection is to prove the following energy estimates for the solutions of .
\[apriori.ee\] Assume that $0<T \le 1$ and $s > 0$. Let $u$ be a solution to defined in the time interval $[0,T]$. Then, $$\label{apriori.ee.0}
\|u\|_{\widetilde{L^{\infty}_T}H^s_x} \lesssim \|u_0\|_{H^s}+\|u\|_{Z^s_T}^2 \, ,$$ where $\|\cdot\|_{Z^s_T}$ is defined in .
Observe from the definition that $$\label{apriori.ee.1}
\|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2 \sim \sum_{N}N^{2s}\|P_Nu\|_{L^{\infty}_TL^2_x}^2 \, .$$ Moreover, by using , we have $$\frac12\frac{d}{dt}\|P_Nu(\cdot,t)\|_{L^2_x}^2 = -\kappa\int_{\mathbb R} \big(P_N\partial_x(u^3)P_Nu\big) (x,t)dx \, ,$$ which yields, after integration in time between $0$ and $t$ and summation over $N$, $$\label{apriori.ee.3}
\|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2\lesssim \|u_0\|_{H^s}^2+\sum_{N}\sup_{t \in [0,T]} \big| L_N(u)\big|\, ,$$ where $$\label{apriori.ee.3.0}
L_N(u)=N^{2s} \int_{\mathbb R \times [0,t]} P_N\partial_x(u^3) \, P_Nu \, dx ds \, .$$
In the case where $N \lesssim 1$, Hölder’s inequality and imply that $$\label{apriori.ee.4}
\sum_{N \lesssim 1}\big| L_N(u)\big| \lesssim \|u\|_{L^4_TL^{\infty}_x}^2\|u\|_{L^{\infty}_TL^2_x}^2 \lesssim \|u\|_{Z^s_T}^4 \, .$$
In the following, we can then assume that $N \gg 1$. By using the decomposition in , we get that $L_{N}(u)=\sum_{j=1}^3L_{N}^{(j)}(u)$ with $$L_{N}^{(j)}(u)=N^{2s}\sum_{M}\int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{1,M}^{(j)}(u,u,u) \, P_N\partial_xu \, dx ds \, ,$$ where we performed a homogeneous dyadic decomposition in $m \sim M$. Thus, by symmetry, it is enough to estimate $L_{N}^{(3)}(u)$, which will still be denoted by $L_N(u)$ for the sake of simplicity.
We decompose $L_{N}(u)$ according to whether $M <1$, $1 \le M \ll N$ or $M \gtrsim N$. Thus $$\begin{aligned}
L_N(u)&=N^{2s}\Big(\sum_{M \gtrsim N}+\sum_{1 \le M \ll N}+\sum_{M \le \frac12}\Big)\int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{1,M}^{(3)}(u,u,u) \, P_N\partial_xu \, dx ds
\nonumber \\&=:L_N^{high}(u)+L_N^{med}(u)+L_N^{low}(u) \, . \label{apriori.ee.5}\end{aligned}$$
*Estimate for $L_N^{high}(u)$.* Let $\tilde{u}=\rho_t(u)$ be the extension of $u$ to $\mathbb R^2$ defined in . Now we define $u_{N_i}=P_{N_i}\tilde{u}$, for $i=1,2,3$, $u_N=P_N\tilde{u}$ and perform dyadic decompositions in $N_i$, $i=1,2,3$, so that $$L_N^{high}(u)=N^{2s}\sum_{M \gtrsim N}\sum_{N_1, N_2, N_3} \int_{\mathbb R \times [0,t]}P_N \widetilde{\Pi}_{1,M}^{(3)}(u_{N_1},u_{N_2},u_{N_3}) \, P_N\partial_xu \, dx ds \, .$$ Define $$\eta_{high}(\xi_1,\xi_2,\xi_3)=\frac{\xi}N\phi_N(\xi) \, .$$ It is clear that $\eta_{high}$ is uniformly bounded in $M$ and $N$. Thus, by using estimate , we have that $$\begin{aligned}
\big|L_N^{high}(u)\big| &\lesssim N^{2s}\sum_{M \gtrsim N}\sum_{N_1, N_2, N_3}N\Big|\int_{\mathbb R \times [0,t]}\widetilde{\Pi}_{\eta_{high},M}^{(3)}(u_{N_1},u_{N_2},u_{N_3}) \, P_Nu \, dx ds\Big| \nonumber \\ & \lesssim N^{2s}\|u_N\|_{Y^0} \sum_{N_1,N_2,N_3}\prod_{i=1}^3\|u_{N_i}\|_{Y^0}\, , \label{apriori.ee.6}\end{aligned}$$ since $\sum_{M \gtrsim N}N/M \lesssim 1$. Let us denote by $N_{max}, \ N_{med}$ and $N_{min}$ the maximum, sub-maximum and minimum of $N_1, \ N_2, \ N_3$. It then follows from the frequency localization that $N \lesssim N_{med} \sim N_{max}$. Thus, we deduce, summing over $N$, using the Cauchy-Schwarz inequality in $N_1, \ N_2, \ N_3$ and $N$ and estimate , that $$\label{apriori.ee.7}
\sum_{N \gg 1}\big|L_N^{high}(u)\big| \lesssim \|\tilde{u}\|_{Y^s}^4 \lesssim \|u\|_{Y^s_T}^4 \, ,$$ since $s>0$.\
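The bound $\sum_{M \gtrsim N} N/M \lesssim 1$ used for $L_N^{high}(u)$ is just a geometric series over the homogeneous dyadic values $M=N,2N,4N,\dots$, uniformly in $N$; an elementary (and purely illustrative) check:

```python
from fractions import Fraction as F

# sum over dyadic M >= N of N/M equals 1 + 1/2 + 1/4 + ... = 2, for every N
# (truncated at 60 terms here, so the defect is exactly 2^{-59}).
for N in [F(1), F(1, 8), F(1024)]:
    partial = sum(N / (N * 2 ** k) for k in range(60))
    assert partial < 2
    assert 2 - partial == F(1, 2) ** 59
```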
*Estimate for $L_N^{med}(u)$.* To estimate $L_N^{med}(u)$, we decompose $\int_{\mathbb R} P_N \widetilde{\Pi}^{(3)}_{1,M}(u,u,u) \, P_N\partial_x u$ as in , since we are in the case $1 \le M \ll N$ and $N \gg 1$.
Once again, let $\tilde{u}=\rho_t(u)$ be the extension of $u$ to $\mathbb R^2$ defined in and $u_{N_i}=P_{N_i}\tilde{u}$, for $i=1,2,3$, $u_N=P_N\tilde{u}$. Observe from the frequency localization that $N_3 \sim N$. We perform dyadic decompositions in $N_i$, $i=1,2,3$ and deduce from that $$\big|L_N^{med}(u)\big| \lesssim N^{2s}\sum_{1 \le M \ll N}\sum_{N_1, N_2}\sum_{N_3 \sim N}M\Big|\int_{\mathbb R \times [0,t]}\widetilde{\Pi}_{\eta_3,M}^{(3)}(u_{N_1},u_{N_2},u_{N_3}) \, P_Nu \, dx ds\Big| \, ,$$ where $\eta_3$[^3] is uniformly bounded in the range of summation of $M, \, N, \, N_1, \, N_2$ and $N_3$. Then, we deduce from that $$\label{apriori.ee.10}
\big|L_N^{med}(u)\big| \lesssim \sum_{1 \le M \ll N}\sum_{N_1,N_2}\sum_{N_3 \sim N}\|u_{N_1}\|_{Y^0}\|u_{N_2}\|_{Y^0}\|u_{N_3}\|_{Y^s}\|u_N\|_{Y^s} \, .$$ Observe that $\max\{N_1,N_2 \} \gtrsim M$. Therefore, we deduce after summing over $N \sim N_3 \gg 1$, $N_1$, $N_2$ and $M$ that $$\label{apriori.ee.11}
\sum_{N \gg 1}\big|L_N^{med}(u)\big| \lesssim \|\tilde{u}\|_{Y^s}^4 \lesssim \|u\|_{Y^s_T}^4 \, ,$$ since $s>0$.
*Estimate for $L_N^{low}$.* In this case, we also have $N \gg 1$ and $M \ll N$. Thus the decomposition in yields $$L_N^{low}(u)=N^{2s}\sum_{M \le \frac12}M\sum_{N_3 \sim N}\int_{\mathbb R \times [0,t]}\widetilde{\Pi}_{\eta_3,M}^{(3)}(u,u,P_{N_3}u)P_Nu \, dxds \, ,$$ where $\eta_3$ is defined in the proof of Lemma \[technical.pseudoproduct\]. Since $\eta_3$ is uniformly bounded in $N$ and $M$, we deduce from and Hölder’s inequality in time (recall here that $0<t \le T \le 1$) that $$\begin{split}
\big|L_N^{low}(u) \big| &
\lesssim N^{2s}\sum_{M \le 1/2}M^2\|u\|_{L^{\infty}_TL^2_x}^2\sum_{N_3 \sim N}\|P_{N_3}u\|_{L^{\infty}_TL^2_x} \|P_Nu\|_{L^{\infty}_TL^2_x} \, .
\end{split}$$ Thus, we infer that $$\label{apriori.ee.12}
\sum_{N \gg 1}\big|L_N^{low}(u)\big| \lesssim \|u\|_{L^{\infty}_TL^2_x}^2\|u\|_{\widetilde{L^{\infty}_T}H^s_x}^2 \lesssim \|u\|_{Y^s_T}^4 \, .$$
Finally, we conclude the proof of estimate gathering , , , , and .
Proof of Theorem \[secondtheo\]
-------------------------------
By using a scaling argument as in Section \[Secmaintheo\], it suffices to prove Theorem \[secondtheo\] in the case where the initial datum $u_0$ belongs to the ball $\mathcal{B}^s(\epsilon)$ of $H^s$ centered at the origin and of radius $\epsilon$, where $0<\epsilon<\epsilon_0$ and $\epsilon_0$ is a small number to be determined later, and where the solution $u$ is defined on a time interval $[0,T]$ with $0<T\le 1$.
Let us define $\Gamma^s_T(u)=\|u\|_{\widetilde{L}^{\infty}_TH^s_x}+\|u\|_{L^4_TL^{\infty}_x}$. Then it follows gathering , and that $$\Gamma^s_T(u) \lesssim \|u_0\|_{H^s}+\Gamma^s_T(u)^2+\Gamma^s_T(u)^3 \, ,$$ if $\epsilon_0$ is chosen small enough. Moreover, observe that $\lim_{T \to 0} \Gamma^s_T(u)=c\|u_0\|_{H^s}$. Therefore, it follows by using a continuity argument that there exists $\epsilon_0>0$ such that $$\Gamma^s_T(u) \lesssim \|u_0\|_{H^s} \quad \text{provided} \quad \|u_0\|_{H^s} \le \epsilon \le \epsilon_0 \, .$$ This concludes the proof of Theorem \[secondtheo\] by using .
**Acknowledgments.** L.M. and S.V. were partially supported by the ANR project GEO-DISP. D.P. would like to thank the L.M.P.T. at Université François Rabelais for the kind hospitality during the elaboration of this work. He is also grateful to Gustavo Ponce for pointing out the reference [@ChHoTa].
[99]{}
<span style="font-variant:small-caps;">A. Babin, A. Ilyin and E. Titi,</span> *On the regularization mechanism for the periodic Korteweg-de Vries equation,* Comm. Pure Appl. Math., **64** (2011), no. 5, 591–648.
<span style="font-variant:small-caps;">M. Christ, J. Colliander and T. Tao,</span> *Asymptotics, frequency modulation, and low regularity ill-posedness for canonical defocusing equations,* Amer. J. Math., **125** (2003), 1235–1293.
<span style="font-variant:small-caps;">M. Christ, J. Holmer and D. Tataru,</span> *Low regularity a priori bounds for the modified Korteweg-de Vries equation,* Lib. Math. (N.S.), **32** (2012), no.1, 51–75.
<span style="font-variant:small-caps;">J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao,</span> *Sharp global well-posedness for KdV and modified KdV on $\mathbb R$ and $\mathbb T$,* J. Amer. Math. Soc., **16** (2003), 705–749.
<span style="font-variant:small-caps;">A. Ionescu, C. E. Kenig and D. Tataru,</span> *Global well-posedness of the KP-I initial value problem in the energy space,* Invent. Math., **173** (2008), 265–304.
<span style="font-variant:small-caps;">T. Kato</span>, *On nonlinear Schrödinger equations II. $H^s$-solutions and unconditional well-posedness,* J. Anal. Math., **67** (1995), 281–306.
<span style="font-variant:small-caps;">C. E. Kenig and K. D. Koenig,</span> *On the local well-posedness of the Benjamin-Ono and modified Benjamin-Ono equations,* Math. Res. Let., **10** (2003), 879–895.
<span style="font-variant:small-caps;">C. E. Kenig and D. Pilod,</span> *Well-posedness for the fifth-order KdV equation in the energy space,* preprint (2012), arXiv:1205.0169, to appear in Trans. Amer. Math. Soc.
<span style="font-variant:small-caps;">C. E. Kenig, G. Ponce and L. Vega,</span> *Oscillatory integrals and regularity of dispersive equations,* Indiana Univ. Math. J., **40** (1991), 33–69.
<span style="font-variant:small-caps;">C. E. Kenig, G. Ponce and L. Vega,</span> *Well-posedness and scattering results for the generalized Korteweg- de Vries equation via the contraction principle,* Comm. Pure Appl. Math., **46** (1993), 527–620.
<span style="font-variant:small-caps;">C. E. Kenig, G. Ponce and L. Vega,</span> *On the ill-posedness of some canonical dispersive equations,* Duke Math. J., **106** (2001), 617–633.
<span style="font-variant:small-caps;">H. Koch and N. Tzvetkov,</span> *Local well-posedness of the Benjamin-Ono equation in $H^s(\mathbb R)$,* Int. Math. Res. Not., **14** (2003), 1449–1464.
<span style="font-variant:small-caps;">S. Kwon and T. Oh,</span> *On unconditional well-posedness of modified KdV,* Internat. Math. Res. Not., **15** (2012), 3509–3534.
<span style="font-variant:small-caps;">N. Masmoudi and K. Nakanishi</span>, *From the Klein-Gordon-Zakharov system to the nonlinear Schrödinger equation,* J. Hyperbolic Differ. Equ. **2** (2005), 975–1008.
<span style="font-variant:small-caps;">L. Molinet,</span> *A note on the inviscid limit of the Benjamin-Ono-Burgers equation in the energy space,* Proc. Amer. Math. Soc. **141** (2013), 2793–2798.
<span style="font-variant:small-caps;">L. Molinet and S. Vento,</span> *Improvement of the energy method for strongly non resonant dispersive equations and applications,* preprint (2014).
<span style="font-variant:small-caps;">K. Nakanishi, H. Takaoka and Y. Tsutsumi,</span> *Local well-posedness in low regularity of the mKdV equation with periodic boundary condition,* Disc. Cont. Dyn. Systems **28** (2010), no. 4, 1635–1654.
<span style="font-variant:small-caps;">H. Takaoka and Y. Tsutsumi,</span> *Well-posedness of the Cauchy problem for the modified KdV equation with periodic boundary condition,* Int. Math. Res. Not. (2004), 3009–3040.
<span style="font-variant:small-caps;">T. Tao,</span> *Multilinear weighted convolution of $L^2$ functions and applications to nonlinear dispersive equations,* Amer. J. Math., **123** (2001), 839–908.
<span style="font-variant:small-caps;">Y. Zhou,</span> *Uniqueness of weak solution of the KdV equation,* Int. Math. Res. Not., **6** (1997), 271–283.
[^1]: $^*$ Partially supported by the french ANR project GEODISP
[^2]: $^{\dagger}$ Partially supported by CNPq/Brazil, grants 302632/2013-1 and 481715/2012-6.
[^3]: see the proof of Lemma \[technical.pseudoproduct\] for a definition of $\eta_3$.
---
abstract: 'In this paper a novel set-theoretic control framework for Networked Constrained Cyber-Physical Systems is presented. By resorting to set-theoretic ideas and the physical watermarking concept, an anomaly detector module and a control remediation strategy are formally derived with the aim of counteracting severe cyber attacks affecting the communication channels. The resulting scheme ensures Uniform Ultimate Boundedness and constraint fulfillment regardless of any admissible attack scenario. Simulation results show the effectiveness of the proposed strategy against both Denial of Service and False Data Injection attacks.'
author:
- '[^1] Walter Lucia\* [^2] Bruno Sinopoli\*\* Giuseppe Franzè\*'
date:
title: 'Networked Constrained Cyber-Physical Systems subject to malicious attacks: a resilient set-theoretic control approach'
---
INTRODUCTION
============
Cyber-Physical Systems (CPSs) represent the integration of computation, networking, and physical processes that are expected to play a major role in the design and development of future engineering systems equipped with improved capabilities ranging from autonomy to reliability and cyber security, see [@SaAn11] and references therein. The use of communication infrastructures and heterogeneous IT components has certainly improved scalability and functionality in several applications (transportation systems, medical technologies, water distribution, smart grids and so on), but on the other hand it has made such systems highly vulnerable to cyber threats, see e.g. the attack on the power transmission network [@Gordman09] or the Stuxnet worm, which infected the Supervisory Control and Data Acquisition system used to regulate uranium enrichment [@Chen10]. Recently, the analysis of CPS security from a theoretic perspective has received increasing attention, and different solutions to discover cyber attack occurrences have been proposed, see [@MiPaPa13], [@MoWeSi15], [@PaDoBu13], [@WeSi15] and references therein for detailed discussions. First, it is important to underline that if the attacker and defender share the same information, then a passive anomaly detection system has no chance of identifying stealthy attacks [@WeSi15]. There, the authors propose the introduction of an artificial time-varying model correlated to the CPS dynamics, so that any adversary attempting to manipulate the system state is revealed through its effect on such an extraneous time-varying system.
Along these lines, a relevant approach is provided in [@MoWeSi15], where the physical watermarking concept is exploited. Specifically, a noisy control signal is superimposed on a given optimal control input in order to authenticate the physical dynamics of the system. In [@TeShaSaJo12], the authors modify the system structure in order to reveal zero-dynamics attacks, while in [@MiPaPa13] a coding of the sensor outputs is considered to detect FDI attacks. It is worth pointing out that most works addressing CPSs focus only on the detection problem, leaving out control countermeasures. To the best of the authors’ knowledge, very few control remediation strategies against cyber attacks have been proposed, see e.g. [@FaTaDi14], where a first contribution for dealing with CPSs affected by corrupted sensors and actuators has been presented.
In this paper two classes of cyber attacks will be analyzed: i) *partial model knowledge attacks* and ii) *full model knowledge attacks* [@TeShaSaJo15]. The former is capable of breaking encryption algorithms that protect the communication channels and of modifying the signals sent to the actuators and to the controller with the aim of causing physical damage. The second class can inject malicious data within the control architecture. Zero-dynamics and FDI attacks fall into such a category [@TeShaSaJo15]. In the sequel, the main aim is to develop a control architecture capable of managing constrained CPSs subject to malicious data attacks. As one of its main merits, the strategy combines detection/mitigation tasks and control purposes into a unique framework. In fact, both the detection and control phases are addressed by using the watermarking approach and the set-theoretic paradigm firstly introduced in [@BeRh71] and then successfully applied in e.g. [@Bl08], [@Angeli2008], [@FraTeFa15].\
Specifically, the identification module can be viewed as an active detector that, differently from the existing solutions, requires neither input nor model manipulations. Moreover, a watermarking-like behavior can be simply obtained during the on-line computation of the control action. The attack mitigation is achieved by exploiting the concept of one-step controllable set jointly with cyber actions (communication disconnection, channel re-encryption) in order to ensure guaranteed control actions under any admissible attack scenario.\
Finally, a simulation campaign under several attack scenarios is provided to prove the effectiveness of the proposed methodology.
PRELIMINARIES AND NOTATIONS
===========================
Let us consider the class of Networked Constrained Cyber-Physical System (**NC-CPS**) described by the following discrete-time LTI model where we assume w.l.o.g. that the state vector is fully available: $$\label{eq:sys}
\begin{array}{rcl}
x(t+1)&=&A x(t) + B u (t) + B_d d_x(t)\\
y(t)&=&x(t)+d_y(t)
\end{array}$$ where $t\in{\mathop{{\rm Z}\mskip-7.0mu{\rm Z}}\nolimits}_+:=\{0,1,...\},$ $x(t)\in {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^n$ denotes the plant state, $u(t)\in{\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^{m}$ the control input, $y(t)\in {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^n$ the output state measure and $d_x(t) \in {{\cal D}}_x \subset {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^{d_x},\, d_y(t) \in {{\cal D}}_y \subset {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^{n},\, \forall t \in {\mathop{{\rm Z}\mskip-7.0mu{\rm Z}}\nolimits}_+,$ exogenous bounded plant and measure disturbances, respectively. Moreover (\[eq:sys\]) is subject to the following state and input set-membership constraints: $$\label{eq:constraints}
u(t)\in {{\cal U}},\quad x(t)\in {{\cal X}}, \,\, \forall t \geq 0,$$
\[UBB\] Let $S$ be a neighborhood of the origin. The closed-loop trajectory of (\[eq:sys\])-(\[eq:constraints\]) is said to be Uniformly Ultimately Bounded (UUB) in $S$ if for all $\mu >0$ there exist $T(\mu)>0$ and $u:=f(y(t))\in {{\cal U}}$ such that, for every $\|x(0)\| \leq \mu,$ $x(t)\in S$ $\forall d_x(t) \in {{\cal D}}_x,\, \forall d_y(t) \in {{\cal D}}_y,\, \forall t \geq T(\mu).$
\[definition:one-step-controllable-sets\] A set $\mathcal{T}\subseteq {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^n$ is Robustly Positively Invariant (RPI) for (\[eq:sys\])-(\[eq:constraints\]) if there exists a control law $u:=f(y(t))\in {{\cal U}}$ such that, once the closed-loop solution $x(t+1)=Ax(t)+Bf(y(t))+ B_dd_x$ enters inside that set at any given time $t_0$, it remains in it for all future instants, i.e. $x(t_0) \in \mathcal{T} \rightarrow x(t)
\in \mathcal{T} , \forall d_x(t) \in {{\cal D}}_x,\,\forall d_y(t) \in {{\cal D}}_y,\, \forall t \geq t_0.$ $\Box$
Given the sets ${{\cal A}}, {{\cal E}}\!\! \subset\!\! {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^n$ ${{\cal A}}\!\oplus\!{{\cal E}}\!\!:=\{a\!+\!e:\!\! \,a\!\in{{\cal A}}, e\!\in\!{{\cal E}}\}$ is the [*Minkowski Set Sum*]{} and ${{\cal A}}\!\sim \!{{\cal E}}\!\!:=\{a\in {{\cal A}}: \, a+e \in{{\cal A}},\,\forall e\in {{\cal E}}\}$ the [*Minkowski Set Difference*]{}. $\Box$
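In one dimension, where the sets are intervals, both operations take a closed form; the helper below is a toy illustration (not from the paper), with intervals stored as `(lo, hi)` tuples.

```python
def mink_sum(A, E):
    """Minkowski sum A + E for intervals."""
    return (A[0] + E[0], A[1] + E[1])

def mink_diff(A, E):
    """{a : a + e in A for all e in E}; agrees with A ~ E when 0 is in E."""
    lo, hi = A[0] - E[0], A[1] - E[1]
    if lo > hi:
        raise ValueError("erosion is empty")
    return (lo, hi)

A, E = (-2.0, 2.0), (-0.5, 0.5)
assert mink_sum(A, E) == (-2.5, 2.5)
assert mink_diff(A, E) == (-1.5, 1.5)
# sanity check: (A ~ E) + E is contained in A
S = mink_sum(mink_diff(A, E), E)
assert A[0] <= S[0] and S[1] <= A[1]
```

When $0\in{{\cal E}}$, every point of the erosion automatically lies in ${{\cal A}}$, so the formula coincides with the definition above.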
Set-theoretic receding horizon control scheme (ST-RHC) {#section:dual-mode}
------------------------------------------------------
In the sequel, the receding horizon control scheme proposed in [@Angeli2008] and based on the philosophy developed in the seminal paper [@BeRh71] is summarized.\
Given the constrained LTI system (\[eq:sys\])-(\[eq:constraints\]), determine a state-feedback $u(\cdot)=f(y(\cdot))\in \mathcal{U}$ capable *i)* of asymptotically stabilizing (\[eq:sys\]) and *ii)* of driving the state trajectory $x(\cdot)\in \mathcal{X}$ into a pre-specified region $\mathcal{T}^0$ in a finite number of steps $N$, regardless of any disturbance realization $d_x(t)\in {{\cal D}}_x,\,\,d_y(t)\in {{\cal D}}_y.$
The latter can be addressed by resorting to the following receding horizon control strategy:\
[**[Off-line -]{}**]{}
- Compute a stabilizing state-feedback control law $u^0(\cdot)=f^0(y(\cdot))$ complying with (\[eq:constraints\]) and the associated RPI region $\mathcal{T}^0;$
- Starting from $\mathcal{T}^0,$ determine a sequence of $N$ robust one-step ahead controllable sets $\mathcal{T}^i$ (see [@Bl08]): $$\label{eq:family-one-step}
\begin{array}{l}
\mathcal{T}^0 := \mathcal{T}\\
\mathcal{T}^i := \{ x \!\! \in {{\cal X}}:\forall d_x(t)\in {{\cal D}}_x,\,d_y(t)\in {{\cal D}}_y,\,\, \exists \, u \in {{\cal U}}:\\
\quad \qquad A (x+d_y(t)) + B u +B_dd_x(t) \in {\mathcal{T}}^{i-1} \}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,= \{ x \!\! \in {{\cal X}}: \exists \, u \in {{\cal U}}: A x + B u \in
\tilde{\mathcal{T}}^{i-1} \},
i=1,\ldots, N
\end{array}$$ where $\tilde{\mathcal{T}}^{i-1}:=\mathcal{T}^{i-1} \sim B_d {{\cal D}}_x \sim A {{\cal D}}_y.$
[**[On-line -]{}**]{}
Let $x(0) \in \displaystyle\bigcup_{i=0}^N \mathcal{T}^{i}.$ Then the command $u(t)$ is obtained as follows:
- Let $i(t) := \min \{i: y(t) \in \mathcal{T}^i \}$
- If $i(t)=0$ then $u(t)=f^0(y(t))$ else solve the following semi-definite programming (SDP) problem: $$\label{fun_opt_b_3}
{u}(t) = \arg \min J_{j(t)}(y(t),u) \quad s.t.$$ $$\label{cond_opt_b_5}
A x(t) + B u \in {\tilde{\mathcal{T}}}^{i(t)-1},~ u \in {{\cal U}}$$ where $J_{j(t)}(y(t),u)$ is a cost function and $j(t)$ a time-dependent selection index.
\[remark:varaible\_cost\_function\] It is worth noticing that the cost function $J_{j(t)}(y(t),u)$ can be arbitrarily chosen without compromising the final objective of the control strategy and, in principle, it may be changed at each time instant. $\Box$
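To fix ideas, the on-line phase of the ST-RHC scheme can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the sets $\mathcal{T}^i$ are replaced by nested boxes, the optimization is replaced by a coarse grid search, and the plant matrices, terminal law $f^0$, input bound and disturbance bound are all invented for the example.

```python
import numpy as np

# Minimal sketch of the ST-RHC on-line step. Everything here is an
# illustrative assumption: the sets T^i are modelled as nested boxes
# |x|_inf <= boxes[i]; the plant (A, B), the terminal law f^0, the
# disturbance bound and the grid-search "optimizer" stand in for the
# paper's polyhedral sets and convex solver.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([0.0, 1.0])
U_MAX = 1.0        # assumed input constraint |u| <= 1
D_MAX = 0.01       # assumed disturbance bound used to shrink targets

boxes = [0.2 * 1.3 ** i for i in range(6)]   # T^0 subset ... subset T^5

def set_index(y):
    """i(t) = min{i : y in T^i}; None when y lies outside every set."""
    for i, b in enumerate(boxes):
        if np.all(np.abs(y) <= b):
            return i
    return None

def st_rhc_step(y):
    """Terminal law inside T^0; otherwise steer the one-step prediction
    into the shrunk target box T~^{i-1} via a coarse grid search."""
    i = set_index(y)
    if i == 0:
        return -(0.5 * y[0] + 1.0 * y[1]), i      # assumed f^0(y)
    target = boxes[i - 1] - D_MAX                 # T~^{i-1} (shrunk box)
    best_u, best_cost = None, np.inf
    for u in np.linspace(-U_MAX, U_MAX, 201):
        xp = A @ y + B * u                        # one-step prediction
        if np.all(np.abs(xp) <= target):          # A y + B u in T~^{i-1}
            cost = float(xp @ xp)                 # assumed cost J
            if cost < best_cost:
                best_u, best_cost = u, cost
    return best_u, i
```

Each call reduces the set index by at least one, mirroring the contraction property that drives the state into $\mathcal{T}^0$ in at most $N$ steps.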
PROBLEM FORMULATION {#section:problem_formulation}
===================
![Encrypted NC-CPS over Internet[]{data-label="fig:distribuited_control_architecture"}](proposed_distributed_control_scheme){width="0.8\linewidth"}
In the sequel, we consider CPSs whose physical plant is modeled as (\[eq:sys\])-(\[eq:constraints\]), while the controller is spatially distributed and a cyber medium is used to build virtual communication channels from the plant to the controller and [*[vice-versa]{}*]{}, see Fig. \[fig:distribuited\_control\_architecture\]. We assume that *sensors-to-controller* and *controller-to-actuators* communications are carried out over the Internet by means of encrypted sockets, while all the remaining channels are local and not externally accessible. Moreover, malicious agents may attack the communication over the Internet by breaking the protocol security and may compromise/alter the data flows in both communication channels. Within such a context, two classes of attacks will be taken into account: *a)* Denials of Service (DoS) and *b)* False Data Injections (FDI). DoS attacks prevent the standard sensor and controller data flows, while FDI attacks inject arbitrary data into the relevant system signals, i.e. command inputs and state measurements.\
Specifically, we shall model attacks on the actuators as $$\label{acutator_attack_model}
\tilde{u}(t):=u^c(t)+u^a(t)$$ where $u^c(t)\in {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^m$ is the command input determined by the controller, $u^a(t)\in {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^m$ the attacker perturbation and $\tilde{u}(t)\in{\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^m$ the resulting corrupted signal. Similarly, sensor attacks have the following structure: $$\label{sensor_attack_model}
\tilde{y}(t):=y(t)+y^a(t)$$ where $y^a(t)\in {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^n$ is the attacker signal and $\tilde{y}(t)\in{\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^n$ the resulting corrupted measurement.
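The two channel-corruption mechanisms can be sketched as follows. FDI adds an attacker term to the signal travelling on the channel, while DoS simply suppresses the packet so the receiver must fall back on its last stored value; function names and values are illustrative, not from the paper.

```python
import numpy as np

# Sketch of the attack models: FDI adds an attacker term to the signal
# (u~ = u^c + u^a, y~ = y + y^a), while DoS drops the fresh packet so
# only the last received value survives. Names are illustrative.
def fdi(signal, attack_term):
    """FDI on a channel: return the corrupted signal."""
    return np.asarray(signal, dtype=float) + np.asarray(attack_term, dtype=float)

def dos(signal, dropped, last_received):
    """DoS on a channel: the fresh packet is lost, the old one survives."""
    return last_received if dropped else signal
```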
From now on, the following assumptions are made:
\[ass1\] [*[An encrypted socket between controller and plant can be reestablished on demand in at most $T_{encry}$ time instants.]{}*]{}
\[ass2\] [*[The minimum amount of time $T_{viol}$ required to violate the cryptography algorithm is not vanishing, i.e. $T_{viol}\geq T_{encry}.$ ]{}*]{}
\[ass3\] [*[No relevant channel delays are due to the communication medium, i.e. all the induced delays are less than the sampling time $T_c.$]{}*]{}
\[remark:assumptions\] [*[Assumption]{}*]{} \[ass2\] relies on the fact that communication channels are not compromised for at least $T_{encry}$ time instants after a new encrypted socket has been established. As a consequence, the plant-controller structure is guaranteed w.r.t. the sensor/actuator data truthfulness. $\Box$
Then, the problem we want to solve can be stated as follows:\
**Resilient Control Problem of NC-CPSs subject to cyber attacks (**RC-NC-CPS**)** -
*Consider the control architecture of Fig. \[fig:distribuited\_control\_architecture\]. Given the **NC-CPS** model (\[eq:sys\])-(\[eq:constraints\]) subject to DoS and/or FDI (\[acutator\_attack\_model\])-(\[sensor\_attack\_model\]) attacks, determine*
- **-(P1)** An anomaly detector module $\mathbf{D}$ capable of discovering cyber attack occurrences;
- **-(P2)** A control strategy $
u(\cdot)=f(\tilde{y}(\cdot),\mathbf{D})
$ such that Uniform Ultimate Boundedness is ensured and the prescribed constraints are fulfilled regardless of any admissible attack scenario. Moreover, if $u^a(t)\equiv 0,\,y^a(t) \equiv 0$ (attack-free scenario) and $d_x(t), \,d_y(t)\equiv 0$ (disturbance-free scenario) $\forall t\geq \bar{t},$ then the regulated plant is asymptotically stable.
The **RC-NC-CPS** problem will be addressed by properly customizing the dual-mode set-theoretic control scheme described in Section \[section:dual-mode\].
SET-THEORETIC CHARACTERIZATION AND IDENTIFICATION OF ATTACKS
============================================================
In this section, an identification attack module will be developed. To this end, the following preliminaries are necessary. First, notice that according to (\[fun\_opt\_b\_3\])-(\[cond\_opt\_b\_5\]) the following set-membership conditions hold true: $$\begin{aligned}
\label{robust-set-contraction}
x(t+1)&\in& \mathbf{Y}^+(y(t),u^c(t)) \nonumber \\
\mathcal{T}^{i(t)-1} &\supseteq& \mathbf{Y}^+(y(t),u^c(t))\!:=\!
\{
z\in {\mathop{{\rm I}\mskip-4.0mu{\rm R}}\nolimits}^n\!\!\!:\exists d_x\in {{\cal D}}_x,\,d_y\!\!\in\!\! {{\cal D}}_y,\nonumber \\
&&\qquad \qquad z\!=\!Ay(t)\!+\!B{u}^c\!\!+\!\!B_d d_x+\!\!d_y
\!\}\end{aligned}$$ with $\mathbf{Y}^+(y(t),u^c(t))$ the expected output prediction set. Then, by using the classification given in [@TeShaSaJo15], we consider attackers having the following disclosure and disrupt resources:
- *Disclosure*: An attacker can access the command inputs $u(t)$ and the sensor measurements $y(t);$
- *Disrupt*: An attacker can inject arbitrary vectors $u^a(t)$, $y^a(t)$ on the actuator and sensor communication channels but it cannot read and write on the same channel in a single time interval.
Finally, we consider attacks belonging to the following categories:
\[def:stealthy-attack\] Let us denote with $\mathcal{I}_a$ and $\mathbf{Y}_a^+$ the attacker model knowledge and expected output prediction set, respectively, then an *Attack with full model knowledge* is an attack with full information, $\mathcal{I}^{full},$ about the closed-loop dynamics of the physical plant, $$\label{eq:full_information}
\mathcal{I}_a\equiv \mathcal{I}^{full}\!\!\!:=\!\!
\left\{
(\ref{eq:sys})-(\ref{eq:constraints}), f^0,\!\!
\{\mathcal{T}^i\}_{i=0}^N,\,
y(t),\,
\mbox{opt:}(\ref{fun_opt_b_3})\!\!-\!\!(\ref{cond_opt_b_5})
\right\}$$ and perfect understanding of the expected output set, $
\mathbf{Y}_a^+\equiv \mathbf{Y}^+.
$
\[def:no-stealthy-attack\] An *Attack with partial model knowledge* is an attack with partial information, $\mathcal{I}_a$, about the closed-loop dynamics of (\[eq:sys\]), e.g. $$\label{eq:partial_information}
\mathcal{I}_a\subset \mathcal{I}^{full}\,\,\mbox{ and }\,\,\mathbf{Y}_a^+\neq \mathbf{Y}^+$$
Attacks with partial model knowledge
------------------------------------
The next proposition shows that such attacks cannot compromise the system integrity while remaining stealthy.
\[no-stealthy-attack-identification\] [ *Given the NC-CPS model (\[eq:sys\])-(\[eq:constraints\]) subject to cyber attacks modeled as (\[acutator\_attack\_model\]) and (\[sensor\_attack\_model\]) and regulated by the state feedback law $u^c(t)=f(\tilde{y}(t))$ obtained via the *ST-RHC* scheme, a detector module $\mathbf{D},$ capable of revealing *attacks with partial model knowledge*, $\mathcal{I}_a\subset \mathcal{I}^{full},$ is achieved as the result of the following set-membership requirement: $$\label{eq:condition_attack}
\tilde{y}(t+1) \in \mathbf{Y}^+$$* ]{}
Under the attack free scenario hypothesis, the current control action $u^c(t)$ guarantees that the one-step ahead state evolution $y^+:=A\tilde{y}(t)+Bu^c(t)+B_dd_x(t)+d_y(t)$ belongs to $\mathbf{Y}^+:$ $$\label{set-prediction-membership}
y^+\in \mathbf{Y}^+(\tilde{y}(t),u^c(t)) ,\,\,\,\,\,\,\forall d_x(t)\in {{\cal D}}_x,\,d_y(t)\in {{\cal D}}_y$$ Since cyber attacks can occur, two operative scenarios can arise at the next time instant $t+1:$ $$(i)\,\,\, \tilde{y}(t+1) \notin \mathbf{Y}^+, \,\,\,\,\,\,\, (ii) \,\,\, \tilde{y}(t+1) \in \mathbf{Y}^+$$ If (i) holds true then the attack is instantaneously detected. Otherwise when (ii) takes place, the following arguments are exploited. First, an attacker could modify the control signal by adding a malicious data $u^a(t)$ and, simultaneously, the detection can be avoided by infecting the effective measurement $y(t)$ as follows: $$\mbox{Find } y^a(t): y(t)+y^a(t)\in \mathbf{Y}^+.$$ Because the set $\mathbf{Y}^+$ is unknown (see *Definition* \[def:no-stealthy-attack\]) such a reasoning is not feasible. A second possible scenario could consist in injecting small sized perturbations $u^a(t)$ and $x^a(t)$ such that $$Bu^a(t)+B_dd_x(t)\in {{\cal D}}_x \mbox{ and } x^a(t)+d_y(t)\in {{\cal D}}_y,$$ Clearly, in this case by construction the computed command $u^c(t+1)$ remains feasible.
Attacks with full model knowledge {#section:attack_full_model_knoledge}
---------------------------------
A simple stealthy attack can be achieved by means of the following steps:
**Stealthy Attack algorithm**

**Knowledge:** $\mathcal{I}^{full}$

1. \[primo\_step\_stealthy\] Acquire ${y}(t);$
2. Estimate the control action $\hat{u}^c(t)$ by emulating the optimization (\[fun\_opt\_b\_3\])-(\[cond\_opt\_b\_5\]);
3. Compute the expected disturbance-free one-step ahead state measurement $\bar{y}^+=Ay(t)+B\hat{u}^c(t) \in \mathbf{Y}_a^+;$
4. Corrupt $u^c(t)$ with an arbitrary malicious admissible signal $u^a(t)$ such that $\hat{u}^c(t)+u^a(t)\in \mathcal{U};$
5. Corrupt the output vector $y(t+1)$ according to the expected one-step state evolution $\bar{y}^+,$ i.e. $y^a(t): \,\,y(t+1)+y^a(t)=: \bar{y}^+;$
6. $t\leftarrow t+1$, goto Step \[primo\_step\_stealthy\].
Note that the above attack can never be identified by the proposed detector $\mathbf{D}$ because condition (\[eq:condition\_attack\]) is always satisfied. As a consequence, the only way to detect it is to increase the information available at the defender side beyond $\mathcal{I}^{full}$ so that the attack de facto reverts to the [*[partial model knowledge]{}*]{} structure: $$\mathbf{Y}_a^+\neq \mathbf{Y}^+$$ The key idea traces the philosophy behind the watermarking approach [@MoWeSi15], where the defender superimposes a noise control signal (new information not available at the attacker side) in order to authenticate the physical dynamics. In particular, a watermarking-like behavior can be straightforwardly obtained by using the *ST-RHC* property discussed in *Remark* \[remark:varaible\_cost\_function\].
\[set-theoretic-watermarking\] [*Let (\[eq:sys\])-(\[eq:constraints\]) and (\[acutator\_attack\_model\])-(\[sensor\_attack\_model\]) be the plant and the FDI attack models, respectively. Let $$\label{cost_functions}
\mathbf{J}=\left\{J_k(\tilde{y}(t),u)\right\}_{k=1}^{N_j}
,\,\,
\mathbf{F^0}=\left\{f^0_k(\tilde{y}(t))\right\}_{k=1}^{N_j}
,\,\,N_j>1$$ be finite sets of cost functions and stabilizing state-feedback control laws compatible with $\mathcal{T}^0,$ respectively. Let $j(t):{\mathop{{\rm Z}\mskip-7.0mu{\rm Z}}\nolimits}_+\rightarrow [1,\ldots, N_j]$ be a random function. If at each time instant $t$ the command input $u^c(t)$ is obtained as the solution of (\[fun\_opt\_b\_3\])-(\[cond\_opt\_b\_5\]) with $J_{j(t)}(\tilde{y}(t),u)$ and $f^0_{j(t)}(\tilde{y}(t))$ randomly chosen, then the anomaly detector module (\[eq:condition\_attack\]) is capable to detect complete model knowledge $\mathcal{I}^{full}$ attacks.* ]{}
Because the additional information $j(t)$ is not available to the attacker, then the following time-varying information flow results: $$\mathcal{I}^{full}(t):=
\left\{
(\ref{eq:sys})\!\!-\!\!(\ref{eq:constraints}),f^0\!\!,
\{\mathcal{T}^i\}_{i=0}^N,\,
\tilde{y},
\mbox{opt: }(\ref{fun_opt_b_3})\!\!-\!\!(\ref{cond_opt_b_5}),j(t)
\right\} \supset \mathcal{I}^{full}$$ This implies that $\mathcal{I}_a\subset \mathcal{I}^{full}(t)$ and, as a consequence, a perfectly stealthy attack is no longer admissible: $$\mathbf{Y}^+(\tilde{y}(t),u^c(j(t)))\neq\mathbf{Y}^+_a(\tilde{y}(t),u^c(t))$$ Therefore, the detection rule (\[eq:condition\_attack\]) is effective.
Finally, by collecting the results of *Propositions* \[no-stealthy-attack-identification\]-\[set-theoretic-watermarking\], a solution to the **P1** problem is given by the following detector module: $$\label{detector_module}
\mbox{\textbf{Detector(D)}$(\tilde{y}(t))$:=}
\left\{
\begin{array}{l}
\!\!\!\!\mbox{attack}\quad \mbox{if}\quad \tilde{y}(t+1) \notin \mathbf{Y}^+ \\
\!\!\!\!\mbox{no attack}\quad \mbox{if}\quad \tilde{y}(t+1) \in \mathbf{Y}^+
\end{array}
\right.$$
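As an illustration, when the disturbance bounds are intervals (boxes) the prediction set $\mathbf{Y}^+$ has a closed form and the detector reduces to a componentwise check; the matrices and bounds below are assumptions made for the sketch, not the paper's polyhedral machinery.

```python
import numpy as np

# Detector sketch: flag an attack iff the received measurement falls
# outside the one-step prediction set Y^+. Box disturbance bounds are
# assumed so that Y^+ is itself a box; A, B, B_d are illustrative.
A  = np.array([[1.0, 0.1], [0.0, 1.0]])
B  = np.array([0.0, 1.0])
Bd = np.array([1.0, 1.0])
DX_MAX, DY_MAX = 0.05, 0.02       # |d_x| <= 0.05, |d_y|_inf <= 0.02

def detector(y_t, u_c, y_next):
    """True = 'attack' (y_next outside Y^+); False = 'no attack'."""
    center = A @ y_t + B * u_c            # nominal one-step prediction
    half = np.abs(Bd) * DX_MAX + DY_MAX   # per-coordinate half-width of Y^+
    return bool(np.any(np.abs(np.asarray(y_next) - center) > half))
```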
Cyber-Physical countermeasures for resilient and secure control
===============================================================
Once an attack has been detected, the following cyber countermeasures can be adopted to recover an attack-free scenario:
- Interrupt all the *sensor-to-controller* and *controller-to-actuators* communications links;
- Reestablish new secure encrypted channels.
From a physical point of view, the prescribed actions imply that, for an assigned time interval, namely $T_{encry},$ updated measurements and control actions are not available at the controller and actuator sides, respectively. Therefore, the main challenge is:
*How can we ensure that, at least, the minimum safety requirements $x(t)\in \mathcal{X},\, u(t)\in\mathcal{U}$ are met while the communications are interrupted for $T_{encry}$ time instants?*
The next section is devoted to answering this key question.
$\tau$-steps feasible sets and associated set-theoretic controller ($\tau$-ST-RHC)
----------------------------------------------------------------------------------
Let $\mathcal{T}$ be a RPI region for the plant model (\[eq:sys\])-(\[eq:constraints\]) subject to the induced time-delay $\tau,$ see [@FraTeFa15]. Then, a family of $\tau$-steps controllable sets, $\{\mathcal{T}^i(\tau)\}_{i=1}^{N},$ can be defined as follows $$\label{k-steps-feasible-sets}
\begin{array}{l}
\mathcal{T}^0(\tau) := \mathcal{T}\vspace{-0.7cm}\\
\mathcal{T}^i(\tau) :=\!\!\! \{ x \!\! \in \!\!{{\cal X}}: \exists \, u\!\in\! {{\cal U}}:\!\!\!\! \overbrace{A^k}^{A(k)}\!\!\! x\! +\! (\displaystyle \!\overbrace{\sum_{j=0}^{k-1}{\!\!\!A^j}B)}^{B(k)} u\! \in\!
\tilde{\mathcal{T}}_k^{i-1}(\tau)\vspace{-0.2cm}\\
\quad \quad \quad \quad \forall\,\, k\!=\!1,\!\ldots\!,\! \tau \}
\end{array}
\vspace{-0.5cm}$$ with $$\label{p-difference-k-steps}
\left\{
\begin{array}{l}
\tilde{\mathcal{T}}_1^i(\tau)={\mathcal{T}}^i\sim B_d{{\cal D}}_x\sim A {{\cal D}}_y.\\
\tilde{\mathcal{T}}_k^i(\tau)=\!\tilde{\mathcal{T}}_{k-1}^{i}\!\!(\tau)\!\sim\!\! A^{k-1}B_d{{\cal D}}_x\!\sim\! A^k {{\cal D}}_y,\,\,k=2,\ldots, \tau.
\end{array}
\right.$$ An equivalent description $\{\Xi^i(\tau)\}$ of (\[k-steps-feasible-sets\]) can be given in terms of the extended space $(x,\,u):$ $$\label{k-steps-feasible-aug-set}
\begin{array}{c}
\Xi^i(\tau)=\{ (x,\,u) \in {{\cal X}}\times{{\cal U}}: A(k)x\! +\! {B(k)} u\! \in\! \tilde{\mathcal{T}}_k^{i-1}(\tau),\\
\forall k=1,\ldots, \tau\}\\
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,=\displaystyle\bigcap_{k=1}^{\tau}\! \{\!(x,\,u)\!\! \in {{\cal X}}\!\!\times{{\cal U}}: A(k)x\! +\! {B(k)} u\! \in\!\! \tilde{\mathcal{T}}_k^{i-1}\!(\tau)\}
\end{array}$$ Hence, the sets of all the admissible state and input vectors are simply determined as follows: $$\label{k-steps-feasible-x-u-set}
\mathcal{T}^i(\tau)={Proj}_{x} \Xi^i(\tau),\quad \mathcal{U}^i(\tau)={Proj}_u \Xi^i(\tau)$$ where ${Proj}_{(\cdot)}$ is the projection operator [@Bl08].
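A scalar sketch may help fix ideas: with intervals in place of polyhedra, the Minkowski differences become radius shrinkings and the recursion can be evaluated by brute force. The system, bounds, and grid resolutions below are all invented for illustration, and the real construction uses polyhedral projections rather than exhaustive search.

```python
import numpy as np

# 1-D sketch of the tau-step controllable-set recursion: intervals
# [-r, r] stand in for the paper's polyhedra, so each Minkowski
# difference is just a radius shrink. System and bounds are invented.
a, b, bd = 1.1, 1.0, 1.0          # x+ = a x + b u + bd d_x
U, X, DX, DY = 2.0, 10.0, 0.05, 0.02

def tau_step_sets(r0, tau, n_sets):
    """Radii r_i with T^i(tau) = [-r_i, r_i], i = 0..n_sets."""
    radii = [r0]
    for _ in range(n_sets):
        # shrunk targets T~_k^{i-1}, k = 1..tau (Minkowski differences)
        shrunk, r = [], radii[-1]
        for k in range(1, tau + 1):
            r -= abs(a) ** (k - 1) * bd * DX + abs(a) ** k * DY
            shrunk.append(r)
        # largest |x| for which a single u keeps every k-step
        # prediction A(k) x + B(k) u inside the shrunk target
        best = 0.0
        for x in np.linspace(0.0, X, 401):
            feasible = any(
                all(abs(a ** k * x + sum(a ** j for j in range(k)) * b * u)
                    <= shrunk[k - 1] for k in range(1, tau + 1))
                for u in np.linspace(-U, U, 81)
            )
            if feasible:
                best = x
        radii.append(best)
    return radii
```

Each new set is larger than the previous one, since one step of admissible control can recover states outside the current target.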
\[property:control\_action\_feasible\_k\_steps\] [ *Let the set sequences $\{\Xi^i(T_{encry})\}_{i=1}^{N},$ $\{\mathcal{U}^i(T_{encry})\}_{i=1}^{N},$ $\{\mathcal{T}^i(T_{encry})\}_{i=1}^{N}$ be given. Under the attack free scenario hypothesis ($u^a(t)\equiv0,\, y^a(t)\equiv 0$), the control action $u^c(t),$ computed by means of the following convex optimization problem $$\label{new_opt_k_steps_fun}
{u}^c(t) = \arg \min_u J_{j(t)}(y(t),u) \quad s.t.$$ $$\label{new_opt_k_steps_constr}
[y(t),\,u]\in \Xi^i(T_{encry}),\quad u \in {{\cal U}}^i(T_{encry})$$ and consecutively applied to (\[eq:sys\]) for $T_{encry}$ time instants, ensures: *i)* constraints fulfillment; *ii)* state trajectory confinement, i.e. $x(t+k)\in \mathcal{T}^{i(t)-1}(T_{encry}),\,\forall k=1,\ldots, T_{encry},$ regardless of any $d_x(\cdot)\in \mathcal{D}_x$ and $d_y(\cdot)\in \mathcal{D}_y$ realizations and of any cost function $J_{j(t)}(y(t),u).$* ]{}
By construction of (\[k-steps-feasible-sets\])-(\[k-steps-feasible-x-u-set\]), it is always guaranteed that, if $x(t)\in \mathcal{T}^{i(t)}(T_{encry})$, the optimization (\[new\_opt\_k\_steps\_fun\])-(\[new\_opt\_k\_steps\_constr\]) is feasible and an admissible $u^c(t)$ exists. Moreover, if for $T_{encry}$ time instants the command ${u}^c(t)$ is consecutively applied to (\[eq:sys\]), one has that $$x(t+k)\!=\!A^k x(t)\!+\!\!\!\sum_{j=0}^{k-1}\!\!\!(A^jB)u^c(t)\!+\!\!\!\underbrace{\sum_{j=0}^{k-1}\!\!\!(A^jB_d)d_x(j)}_{unknown},\,k=1,\!\ldots,\!T_{encry}$$ Then, in virtue of (\[k-steps-feasible-sets\]), the disturbance-free evolution $\bar{x}(t+k)$ satisfies $$\bar{x}(t+k)\in \tilde{\mathcal{T}}^{i-1}_k(T_{encry}),\,\forall\,k=1,\!\ldots,\!T_{encry}$$ and the following implications hold true $$\begin{array}{c}
\forall d_x(t+k)\in {{\cal D}}_x, d_y(t+k)\in {{\cal D}}_y\!\!\!\! \implies\!\!\!\! x(t+k+1)\!\!\!\in \!\!{\mathcal{T}}^{i-1}_k(T_{encry})\\
k=0,\ldots, T_{encry}-1.
\end{array}$$
\[remark:k-step-sets-computation\] The optimization (\[new\_opt\_k\_steps\_fun\])-(\[new\_opt\_k\_steps\_constr\]) is solvable in polynomial time and the required computational burden is irrespective of the number of steps $T_{encry}.$ Further details on the computation of the $\tau$-steps controllable sets can be found in the comprehensive tutorials [@Bl08],[@KuVa97] and in the available toolboxes [@MPT3],[@Ell_toolbox]. $\Box$
In the sequel, the control strategy arising from the solution of (\[new\_opt\_k\_steps\_fun\])-(\[new\_opt\_k\_steps\_constr\]) will be named the **$\tau$-ST-RHC** controller. Note that it cannot address all the attack scenarios: if the most recent action $u^c(t)$ has been corrupted, the statement of Proposition \[property:control\_action\_feasible\_k\_steps\] no longer holds. In such a case, the defender can only rely on a *smart* actuator module that, by means of simple local security checks, is able to understand whether the most recent command input is malicious.
Pre-Check and Post-Check firewalls modules {#pre-post-check}
------------------------------------------
In what follows, two complementary modules, hereafter named **Pre-Check** and **Post-Check**, are introduced, see Fig. \[fig:distribuited\_control\_architecture\]. The reasoning behind them is to passively detect attacks before they could harm the plant. In particular, such modules are in charge of checking the following state and input set-membership requirements: $$\label{pre_check}
\mbox{\textbf{Pre-Check}$(i(t))$:=}
\left\{
\begin{array}{l}
\!\!\!\!true\quad \mbox{if}\,\,\,\tilde{u}(t)\in \{\mathcal{U}^{i}(T_{encry})\}_{i=1}^{i(t)}\\
\!\!\!\!false\quad \mbox{otherwise}
\end{array}
\right.$$ $$\label{post_check}
\mbox{\textbf{Post-Check}$(i(t))$:=}\!
\left\{
\begin{array}{l}
\!\!\!\!\!\!true\,\,\, \mbox{if}\,\,\,y(t)\!\in \!\!\{\mathcal{T}^{i}(T_{encry})\}_{i=1}^{i(t)}\!\!\!\!\\
\!\!\!\!\!\!false\,\,\, \mbox{otherwise}
\end{array}
\right.$$ Conditions (\[pre\_check\])-(\[post\_check\]) check if the received $\tilde{u}(t)$ and the measurement $y(t)$ are “coherent" with the expected set level $i(t).$ If one of these tests fails, then a warning flag is sent to the actuator and an attack is locally claimed.\
In response to the received flag, different actions are performed by the actuator: if the **Pre-Check** fails, i.e. $\tilde{u}(t)\notin \{\mathcal{U}^{i}(T_{encry})\}_{i=1}^{i(t)},$ then $\tilde{u}(t)$ is discarded and the admissible stored input, hereafter named $u_{-1}:=\tilde{u}(t-1),$ is applied; if the **Post-Check** fails, i.e. $y(t)\notin \{\mathcal{T}^{i}(T_{encry})\}_{i=1}^{i(t)},$ then a harmful command $\tilde{u}(t-1)$ has been applied bypassing the **Pre-Check** control. As a consequence, $u_{-1}$ cannot be used at the next time instant. In this circumstance, a possible solution consists in applying the zero input $u(t)\equiv 0$ until safe communications are reestablished. The latter gives rise to the following problem: *How can one ensure that the open-loop system subject to $u(t)\equiv 0$ fulfills the prescribed constraints (\[eq:constraints\]) and is UUB?*
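This actuator-side decision logic can be summarized in a few lines; set membership is abstracted into the boolean outcomes of the two checks, and the function and variable names are ours, introduced only for the sketch.

```python
# Actuator-side firewall sketch: the Pre-Check / Post-Check outcomes
# decide whether the received command, the stored one, or the zero
# input is applied. Booleans abstract the set-membership tests.
def actuator_step(u_tilde, pre_ok, post_ok, u_stored):
    """Return (input applied now, input stored for the next step)."""
    if not post_ok:
        # a harmful command already slipped through Pre-Check:
        # the stored input can no longer be trusted, apply u = 0
        return 0.0, 0.0
    if not pre_ok:
        # corrupted command detected: discard it, reuse the stored one
        return u_stored, u_stored
    return u_tilde, u_tilde
```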
The following developments provide a formal solution.
Let us denote with $\mathcal{T}^{i_{max}},\,\,\,i_{max}\leq N,$ the maximum admissible set computed as follows $$\label{safe_free_evolution}
\begin{array}{c}
i_{max}=\displaystyle \max_{i\leq N} \,\,i \quad s.t.\\
\underbrace{A^k\mathcal{T}^i(T_{encry}) \oplus \displaystyle \sum_{j=0}^{k-1}A^{j}B_d{{\cal D}}_x}_{\mbox{first term}}\oplus\!\!\!\!\!\! \underbrace{A^{k-1}B\mathcal{U}}_{\mbox{second term}}\!\!\!\!\!\!\!\! \subseteq\!\!\!\!\!\!\! \displaystyle \bigcup_{j=1}^{\min(N, i+T_{viol})}\!\!\!\!\!\!\!\!\!\!\!\mathcal{T}^j(T_{encry})\\
i=1,\ldots, i_{max},\,\,k=1,\ldots, {T_{encry}}.
\end{array}$$ Note that the first term represents the autonomous state evolution of (\[eq:sys\]), whereas the second one takes care of an unknown input $u\in \mathcal{U}.$ Moreover, the upper bound $\min(N, i_{max}+T_{viol})$ complies with Assumption \[ass2\], where it is supposed that, after the recovery phase, a new attack could only occur after $T_{viol}$ time instants.
The reasoning behind the introduction of $\mathcal{T}^{i_{max}}$ concerns the following feasibility retention arguments. When the data flows (state measurements and control actions) are interrupted, the [**[NC-CPS]{}**]{} model (\[eq:sys\]) evolves in an open-loop fashion under a zero-input action. Therefore, the computation (\[safe\_free\_evolution\]) guarantees that, starting from any initial condition belonging to $\displaystyle\bigcup_{i=1}^{i_{max}}\, \mathcal{T}^i(T_{encry}),$ the resulting $T_{encry}$-step ahead state predictions of (\[eq:sys\]) are, in the worst case, embedded within the DoA $\displaystyle\bigcup_{i=1}^N\, \mathcal{T}^i(T_{encry}).$
\[free\_evolution\_bounded\_stability\] [*Let $\{\mathcal{T}^i(T_{encry})\}_{i=1}^{N}$ and $\mathcal{T}^{i_{max}}(T_{encry})$ be the $\tau-$step ahead controllable set sequence and the maximum admissible set, respectively. If the plant model (\[eq:sys\]) is operating under an attack-free scenario and the state evolution $x(\cdot)$ is confined to $\displaystyle\bigcup_{i=1}^{i_{max}}\, \mathcal{T}^i(T_{encry}),$ then the zero-input state evolution of (\[eq:sys\]) will be confined to $\displaystyle\bigcup_{i=1}^N\, \mathcal{T}^i(T_{encry})$ irrespective of any cyber attack occurrence and disturbance/noise realizations.* ]{}
Constraints fulfillment and UUB trivially follow because $0_m\in \mathcal{U}$ and $\bigcup_{i=1}^{\min(N, i_{max}+T_{viol})}\{\mathcal{T}^i(T_{encry})\}\subseteq \mathcal{X}$.
The RHC algorithm
-----------------
The above developments allow us to write down the following computable scheme.
**Actuators** Algorithm

**Input:** $\tilde{u}(t),$ **Pre-Check**, **Post-Check**\
**Output:** The applied control input $u$, the expected set-level $\hat{i}$\
**Initialization:** $\hat{i}=i(0),$ $u_{-1}=u^c(0)$

1. \[previous\_command\] If no new packet is received, apply $u(t)=u_{-1};$
2. If **Pre-Check** and **Post-Check** hold, apply $u(t)=\tilde{u}(t)$ and set $\hat{i}=\hat{i}-1;$
3. If **Pre-Check** fails, apply $u(t)=u_{-1};$
4. \[free-evol\] If **Post-Check** fails, apply $u(t)=0;$
5. $u_{-1}\leftarrow u(t);$
6. $t\leftarrow t+1,$ goto Step 1.
**$\tau$-ST-RHC** Controller Algorithm (Off-line)

**Input:** $T_{encry}$\
**Output:** $\left\{\Xi^{i}\right\}_{i=0}^{N}(T_{encry}),$ $\left\{\mathcal{T}^{i}\right\}_{i=0}^{N}(T_{encry}),$ $\left\{\mathcal{U}^{i}\right\}_{i=0}^{N}(T_{encry}),$ $i_{max}$

1. Compute a RPI region $\mathcal{T}^0;$
2. Compute the families of $T_{encry}$-steps controllable sets $\left\{\Xi^{i}\right\}_{i=0}^{N}(T_{encry}),$ $\left\{\mathcal{U}^{i}\right\}_{i=0}^{N}(T_{encry}),$ $\left\{\mathcal{T}^{i}\right\}_{i=0}^{N}(T_{encry})$ by resorting to the recursion (\[k-steps-feasible-aug-set\]) and to the projection (\[k-steps-feasible-x-u-set\]);
3. Determine the maximum index $i_{max}$ satisfying (\[safe\_free\_evolution\]);
4. Collect $N_j>1$ cost functions (\[cost\_functions\]) and terminal control laws $f^0_{j}(\tilde{y}(t)).$
**$\tau$-ST-RHC** Controller Algorithm (On-line)

**Input:** $\tilde{y}(t),$ $\left\{\Xi^{i}\right\}_{i=0}^{N}(T_{encry}),$ $\left\{\mathcal{T}^{i}\right\}_{i=0}^{N}(T_{encry}),$ $\left\{\mathcal{U}^{i}\right\}_{i=0}^{N}(T_{encry}),$ **Detector**$(\tilde{y}(t)),$ $i_{max},$ $\mathbf{J}$\
**Output:** Computed command $u^c(t)$\
**Initialization:** status=no attack, timer=0, encrypted communication channels, initialize **Detector**, **Pre-Check**, **Post-Check**, **Actuator** modules\
**Feasibility start condition:** $\exists i\leq \min(N,i_{max}+T_{viol}):\, x(0)\in \bigcup_{i=0}^{\min(N,i_{max}+T_{viol})}\{\mathcal{T}^{i}(T_{encry})\}$

1. \[start\_encry\] If **Detector**$(\tilde{y}(t))$ reveals an attack: set status=attack, interrupt all the communications and start reestablishing new encrypted channels;
2. While status==attack: timer=timer+1; when timer$=T_{encry},$ re-initialize all the modules, set status=no attack and timer=0; \[end\_encry\]
3. Find $$i(t)=\arg\min_i:\, \tilde{y}(t)\in \mathcal{T}^{i}(T_{encry})$$
4. Randomly choose the selection index $\bar{j}=j(t);$
5. If $i(t)=0$ then $u^c(t)=f^0_{\bar{j}}(\tilde{y}(t)),$ else compute $u^c(t)$ by solving (\[new\_opt\_k\_steps\_fun\])-(\[new\_opt\_k\_steps\_constr\]) with cost function $J_{\bar{j}}(\tilde{y}(t),u);$
6. Send $u^c(t)$ to the actuators;
7. $t\leftarrow t+1$, goto Step 1.
\[remark:stima\_set\_level\_attuatore\] It is important to underline that **Pre-Check** and **Post-Check** modules need the current set-level $i(t). $ Unfortunately, this information cannot be transmitted because it could be modified by some attackers. To overcome such a difficulty, the estimate $\hat{i}(t)$ provided by the **Actuator** unit is used. Note that $i(t)$ and $\hat{i}(t)$ are synchronized at the initial ($t=0$) and at each recovery phase time instants, while in all the other situations it is ensured that such signals are compatible, i.e. $\hat{i}(t)\geq i(t), \, \forall t \geq 0.$ $\Box$
\[teorema:proof\_P1\_P2\_problem\] *[ Let $\{\Xi^i(T_{encry})\}_{i=1}^{N},$ $\{\mathcal{U}^i(T_{encry})\}_{i=1}^{N},$ $\{\mathcal{T}^i(T_{encry})\}_{i=1}^{N},$ be non-empty controllable set sequences and $$x(0)\in \bigcup_{i=0}^{\min(N,i_{max}+T_{viol})}\{\mathcal{T}^{i}(T_{encry})\}$$ Then, the proposed set-theoretic control architecture (**$\tau$-ST-RHC** Controller, **Detector**, **Pre-Check** and **Post-Check**) always guarantees constraints satisfaction and Uniform Ultimate Boundedness for all admissible attack scenarios and disturbance/noise realizations. ]{}*
The proof straightforwardly follows by ensuring that under any admissible attack scenario the following requirements hold true:
- the on-line optimization problem (\[new\_opt\_k\_steps\_fun\])-(\[new\_opt\_k\_steps\_constr\]) is feasible and the state trajectory $x(t)$ is confined to $\bigcup_{i=0}^{N}\{\mathcal{T}^{i}(T_{encry})\};$
- any attack free scenario can be recovered in at most $T_{encry}$ time instants.
As shown in Section \[pre-post-check\], the worst-case scenario arises when the attacker can successfully inject a malicious input that simulates a stealthy attack. First, in virtue of the actions of the **Pre-Check** and **Actuator** modules, the input constraints $u(t)\in \mathcal{U}$ are always fulfilled. Then, the **Post-Check** module ensures that, whenever the state trajectory leaves the expected $T_{encry}$-steps ahead controllable set sequence, a recovery procedure starts and an admissible zero-input state evolution takes place, see Proposition \[free\_evolution\_bounded\_stability\].
NUMERICAL EXAMPLE
=================
We consider the continuous-time model [@BlMI00] $$\left[
\begin{array}{c}
\dot{x}_1(t)\\\dot{x}_2(t)
\end{array}
\right]\! =\!
\left[
\begin{array}{c c}
1 & 4\\ 0.8 & 0.5
\end{array}
\right]
\left[
\begin{array}{c}
x_1(t)\\x_2(t)
\end{array}
\right]
\!\!+\!\!
\left[
\begin{array}{c}
0\\1
\end{array}
\right]
u(t)
\!\!+\!\!
\left[
\begin{array}{c}
1\\1
\end{array}
\right]
d_x(t)$$ subject to $$|u(t)| \leq 5, |x_1(t)| \leq 2.5, |x_2(t)| \leq 10, |d_x (t)|\leq 0.05$$ The continuous-time system has been discretized by means of the forward Euler method with sampling time $T_s = 0.02\,sec.$ According to *Assumptions* \[ass1\]-\[ass3\], we consider a reliable encrypted communication medium where $T_{encry}=4$ time steps ($0.08\,sec$) and $T_{viol}=5$ time steps ($0.1\,sec$).
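The discretization step is immediate: with forward Euler, $A = I + T_s A_c,$ $B = T_s B_c$ and $B_d = T_s B_{d,c}.$ A quick numerical check, using the matrices of the example:

```python
import numpy as np

# Forward-Euler discretization of the example plant, Ts = 0.02 s:
# x(t+1) = (I + Ts*A_c) x(t) + (Ts*B_c) u(t) + (Ts*Bd_c) d_x(t)
Ts   = 0.02
A_c  = np.array([[1.0, 4.0], [0.8, 0.5]])
B_c  = np.array([[0.0], [1.0]])
Bd_c = np.array([[1.0], [1.0]])

A  = np.eye(2) + Ts * A_c
B  = Ts * B_c
Bd = Ts * Bd_c
```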
First, the following polyhedral families of $T_{encry}-$steps controllable sets are computed (see Fig. \[fig:sets\_and\_trajectory\]): $$\left\{\Xi^{i}\right\}_{i=0}^{60}(T_{encry}),\,\,\,\left\{\mathcal{U}^{i}\right\}_{i=0}^{60}(T_{encry}),\,\,\,\,\left\{\mathcal{T}^{i}\right\}_{i=0}^{60}(T_{encry})$$ and the maximum safe index set $i_{max}=45$ has been determined.
![$\left\{\mathcal{T}^{i}\right\}_{i=0}^{60}(T_{encry})$ family (black polyhedra) and state trajectory (red solid line). Blue arrows point to the current system state vector at the beginning of each attack scenario. []{data-label="fig:sets_and_trajectory"}](sets_and_trajectory){width="0.6\linewidth"}
The following simulation scenario is considered:\
[*[Starting from the initial condition $x(0)=[-1.09,\,5.11]^T\in \mathcal{T}^{45}(T_{encry}),$ regulate the state trajectory to zero regardless of any admissible attack and disturbance realization and satisfy the prescribed constraints.]{}*]{}
In the sequel, the following sequence of attacks is considered:
- Partial model knowledge attacks - (Attack 1) DoS attack on the *controller-to-actuator* channel; (Attack 2) DoS attack on the *sensor-to-controller* channel; (Attack 3) FDI attack on the *controller-to-actuator* channel.
- FDI Full model knowledge attacks - (Attack 4). By following the **Stealthy Attack Algorithm** of Section \[section:attack\_full\_model\_knoledge\], the attacker tries to impose the malicious control action $$\begin{array}{c}
\tilde{u}(t)=\displaystyle \arg \max_u |Ax+Bu|,\,\,\,\,s.t. \\
Ax+Bu\in \tilde{\mathcal{T}}^0,\,\,\,u\in \mathcal{U}^0\\
\end{array}$$ with the aim of keeping the state trajectory as far as possible from the equilibrium and of avoiding **Post-Check** detection by embedding the plant behavior within the terminal region.
![Command inputs: actuator output $u(t),$ corrupted control action $\tilde{u}(t),$ controller command $u^c(t).$ []{data-label="fig:inputs_and_attacks"}](inputs_and_attacks){width="1\linewidth"}
![State set-membership levels: real plant level $i(t)$ (top figure), Pre-Check and Post-Check estimate level $\hat{i}(t)$ (bottom figure). []{data-label="fig:set_levels"}](set_levels){width="1\linewidth"}
![Detector, Pre-Check and Post-Check flag signals.[]{data-label="fig:status_new"}](status_new){width="1\linewidth"}
First, it is interesting to underline that the state trajectory is confined within $\left\{\mathcal{T}^{i}\right\}_{i=0}^{60}(T_{encry})$ and asymptotically converges to the origin once an attack-free scenario is recovered ($t= 4.32 sec$). -([**[Attack 1]{}**]{}) Starting from $t=0.14 sec,$ the actuator does not receive new packets. According to the **Actuators** algorithm (Step \[previous\_command\]) the most recent available command ($u(t)=u^c(0.12)=4.95$) can be applied since both the Pre-Check and Post-Check conditions are satisfied. At $t=0.16 sec,$ the Detector identifies the attack (see Fig. \[fig:status\_new\]) because $$\tilde{y}(0.16)\!\!\notin\!\! \mathcal{Y}^+\!\!\!\!=\!\{\!z\in {\rm R}^n\!:\! \exists d\in \mathcal{D}, z\!=\!A\tilde{x}(0.14)\!+\!Bu^c(0.14)\!+\!B_d d\}$$ As prescribed in Steps \[start\_encry\]-\[end\_encry\] of the **$\tau$-ST-RHC** algorithm, the existing communications are interrupted and the procedure to re-establish new encrypted channels is started. At $t=0.24 sec,$ the encryption procedure ends and all the modules are re-initialized. It is worth noticing that neither the Pre-Check nor the Post-Check module triggers a False Input or False Output event. This is due to the fact that the most recent control action was not corrupted and, by construction, it ensures that the state trajectory remains confined within the current controllable set for the successive $T_{encry}$ time instants, see Fig. \[fig:set\_levels\].
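The Detector test used above, $\tilde{y}(t)\in\mathcal{Y}^+$?, admits a compact numerical sketch when the disturbance set $\mathcal{D}$ is a box. The matrices and signals below are illustrative assumptions, not the simulated benchmark.

```python
import numpy as np

# Minimal sketch of the set-membership Detector: flag an attack iff no
# admissible disturbance d in D = {d : |d|_inf <= d_max} explains y~(t).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Bd = np.eye(2)
d_max = 0.05

def detector_flags(x_prev, u_prev, y_now):
    """True (attack flagged) iff y_now lies outside the prediction set Y+."""
    residual = y_now - (A @ x_prev + B.flatten() * u_prev)
    d = np.linalg.solve(Bd, residual)   # y in Y+  <=>  residual = Bd d, |d| <= d_max
    return bool(np.max(np.abs(d)) > d_max)

x_prev = np.array([0.5, -0.2])
u_prev = 1.0
y_nominal = A @ x_prev + B.flatten() * u_prev + 0.02   # consistent with D
y_spoofed = y_nominal + np.array([0.0, 0.3])           # outside Y+

print(detector_flags(x_prev, u_prev, y_nominal),
      detector_flags(x_prev, u_prev, y_spoofed))
```

Measurements explainable by an admissible disturbance pass silently; anything outside the one-step prediction set raises the flag, exactly the mechanism that catches Attack 1 one step after onset.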
-([**[Attack 2]{}**]{}) Starting from $t=0.34 sec$ the Controller does not receive updated state measurements and the [**[Detector]{}**]{} triggers an attack event. As a consequence, the network is disconnected, the actuator does not receive new control actions and the available command $u(t)=u^c(0.32)=-4.19$ is applied, see Fig. \[fig:inputs\_and\_attacks\]. At $t=0.22 \,sec,$ the attack-free scenario is recovered.
-([**[Attack 3]{}**]{}) At $t=0.52 sec$ the attacker injects a signal $u^a(0.52)=2$ that corrupts the current input $u^c(0.52)=-4.91$ as indicated in (\[acutator\_attack\_model\]). Therefore, the actuator receives $\tilde{u}(0.52)=-2.91,$ which is still admissible as verified by the [**[Pre-Check]{}**]{} unit. The main consequence of the latter is that $x(0.54) \in \mathcal{T}^{20}$ whereas the expected set-membership condition should have been $\mathcal{T}^{18}:$ the Post-Check module and the Detector trigger an attack event and the Controller blocks all the communications. From now on, the [**[Actuator]{}**]{} logic imposes a zero-input state evolution, see Step \[free-evol\]. Although during the channel encryption phase (the encryption procedure ends at $t=0.60 sec$) the set-membership index increases (see Fig.\[fig:set\_levels\]), this does not compromise feasibility retention because $x(0.52)\in \{\mathcal{T}^i(T_{encry})\}_{i=0}^{i_{max}}$ and the zero-input state evolution remains confined within the domain of attraction, i.e. $x(0.60)=[-0.58,\, 3.61]^T\in \mathcal{T}^{42}(T_{encry}),$ see Fig. \[fig:set\_levels\]. -([**[Attack 4]{}**]{}) At $t=3.84 sec,$ with $x(3.83)\in \mathcal{T}^0(T_{encry}),$ an FDI attack corrupts both the communication channels, see Figs. \[fig:sets\_and\_trajectory\] and \[fig:set\_levels\].\
The attacker is capable of remaining stealthy until $t=4.24\,sec$ and of manipulating the plant inputs and outputs. This unfavorable phenomenon is due to the fact that it is not possible to discriminate between the attack and the disturbance/noise realizations $d_x(t)$ and $d_y(t),$ i.e. $\tilde{y}(t)\in \mathcal{Y}^+, \forall t\in [3.84,\, 4.22]sec.$\
At $t=4.24\,sec,$ a different behavior occurs: the Pre-Check module triggers an anomalous event because the received input $\tilde{u}(4.24)=-5.025\notin \mathcal{U}^0,$ although the attacker tried to impose $\tilde{u}(4.24)=-4.993$ as the current input. Specifically, the attacker determines the estimate $\hat{u}^c(4.24)$ and modifies the control action as follows: $\hat{u}^c(4.24)+u^a(4.24)=\tilde{u}(4.24).$ Since a time-varying index is exploited in (\[new\_opt\_k\_steps\_fun\])-(\[new\_opt\_k\_steps\_constr\]), the estimate $\hat{u}^c(4.24)=-0.032$ is numerically different from the effective control action $u^c(4.24)=-0.059.$ Therefore $\tilde{u}(4.24) \notin \mathcal{U}^0$ and, as a consequence, the attack is detected. For the interested reader, simulation demos are available at the following two web links: **J fixed**: <https://goo.gl/8CQ4b8>, **J random**: <https://goo.gl/DQhOxB>
Conclusions
===========
In this paper a control architecture devoted to detecting and mitigating cyber attacks affecting networked constrained CPSs has been presented. The resulting control scheme, which mainly takes advantage of set-theoretic concepts, provides a formal and robust cyber-physical approach against the DoS and FDI attack classes. Constraint satisfaction and Uniform Ultimate Boundedness are formally proved regardless of the admissible attack scenario occurrence. Finally, the simulation section shows the effectiveness and applicability of the proposed strategy under severe cyber attack scenarios.
[99]{}
D. Angeli, A. Casavola, G. Franzè, and E. Mosca, “An ellipsoidal off-line MPC scheme for uncertain polytopic discrete-time systems”, [*Automatica*]{}, Vol. 44, No. 12, pp. 3113–3119, 2008.
D. P. Bertsekas and I. B. Rhodes, “On the minimax reachability of target sets and target tubes”, [*Automatica*]{}, Vol. 7, pp. 233–247, 1971.
F. Blanchini and S. Miani, “Any domain of attraction for a linear constrained system is a tracking domain of attraction”, [*[SIAM Journal on Control and Optimization]{}*]{}, Vol. 38, No. 3, pp. 971–994, 2000.
F. Blanchini and S. Miani, “Set-Theoretic Methods in Control”, [*Birkäuser*]{}, Boston, 2008.
T.M. Chen, “Stuxnet, the real start of cyber warfare? \[editor’s note\]”. [*IEEE Network*]{}, Vol. 24, No. 6, pp. 2–3, 2010.
H. Fawzi, P. Tabuada and S. Diggavi, “Secure estimation and control for cyber-physical systems under adversarial attacks”. [*IEEE Transactions on Automatic Control*]{}, Vol. 59, No. 6, pp. 1454–1467, 2014.
G. Franzè, F. Tedesco and D. Famularo, “Model predictive control for constrained networked systems subject to data losses”, [*Automatica*]{}, Vol. 54, pp. 272-278, 2015.
S. Gorman, “Electricity grid in US penetrated by spies”, [*The Wall Street Journal*]{}, A1, 2009.
A. A. Kurzhanskiy and P. Varaiya, “Ellipsoidal toolbox (ET)”, [*45th IEEE CDC*]{}, pp. 1498–1503, 2006.
A. Kurzhanski[ĭ]{} and I. V[á]{}lyi, “Ellipsoidal calculus for estimation and control”, [*Berlin: Birkhauser*]{}, 1997.
F. Miao, M. Pajic and G.J. Pappas, “Stochastic game approach for replay attack detection”, [*52nd IEEE CDC*]{}, pp. 1854–1859, 2013.
Y. Mo, S. Weerakkody and B. Sinopoli, “Physical authentication of control systems: designing watermarked control inputs to detect counterfeit sensor outputs”, [*IEEE Control Systems*]{}, Vol. 35, No. 1, pp. 93–109, 2015.
M. Herceg, M. Kvasnica, C. Jones, M. Morari, “Multi-parametric toolbox 3.0”, [*European Control Conference (ECC)*]{}, 2013.
F. Pasqualetti, F. Dorfler and F. Bullo,“Attack detection and identification in cyber-physical systems”, [*IEEE Transaction on Automatic Control*]{} Vol. 58 No. 11, pp. 2715-2729, 2013.
T. Samad and A.M. Annaswamy, “The Impact of Control Technology”, [*IEEE Control Systems Society*]{}, 2011.
A. Teixeira, I. Shames, H. Sandberg and K.H. Johansson, “Revealing stealthy attacks in control systems”, [*50th IEEE Allerton Conference on Communication, Control, and Computing*]{}, pp. 1806–1813, 2012.
A. Teixeira, I. Shames, H. Sandberg and K.H. Johansson, “A secure control framework for resource-limited adversaries”, [*Automatica*]{}, Vol. 51, pp. 135–148, 2015.
S. Weerakkody and B. Sinopoli, “Detecting integrity attacks on control systems using a moving target approach”, [*54th IEEE CDC*]{}, 2015.
[^1]: \*Walter Lucia and Giuseppe Franzè are with the DIMES, Università degli Studi della Calabria, Via Pietro Bucci, Cubo 42-C, Rende (CS), 87036, ITALY [{walter.lucia, giuseppe.franze}@unical.it]{}
[^2]: \*\*Bruno Sinopoli is with the Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA, [brunos@ece.cmu.edu]{}
---
abstract: 'Recently, El-Maaref [*et al*]{} \[J. Phys. B 52(2019) 065202\] have reported results for energy levels, radiative rates and collision strengths ($\Omega$) for some transitions of Kr-like W XXXIX. For the calculations of these parameters they have adopted several codes, including GRASP, DARC and FAC. For energy levels they have shown discrepancies of up to 1.64 Ryd between the GRASP and FAC calculations, whereas for $\Omega$ the differences between the DARC and FAC results are over an order of magnitude for a few transitions. In addition, for several transitions there is an anomaly in the behaviour of $\Omega$ between the two sets of calculations. In this comment, we demonstrate that their results from both codes are incorrect, and hence cannot be relied upon.'
---
[**[Kanti M Aggarwal]{}**]{}\
Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast,\
Belfast BT7 1NN, Northern Ireland, UK\
e-mail: K.Aggarwal@qub.ac.uk\
Received: 9 September 2019
[**Keywords:**]{} Energy levels, oscillator strengths, collision strengths, Kr-like tungsten W XXXIX\
------------------------------------------------------------------------
Introduction
============
In a recent paper, El-Maaref [*et al*]{} (2019) have reported results for energy levels, radiative rates and collision strengths ($\Omega$) for a few transitions of Kr-like W XXXIX, which is an important ion particularly for the studies of fusion plasmas. For the calculations of atomic structure, i.e. for the determination of energy levels, radiative rates (A-values) and oscillator strengths (f-values), they have adopted several codes, namely the General-purpose Relativistic Atomic Structure Package (GRASP: [http://amdpp.phys.strath.ac.uk/UK\_APAP/codes.html]{}), the Flexible Atomic Code (FAC: [https://www-amdis.iaea.org/FAC/]{}) and AutoStructure (AS: Badnell 1997). This was done to make inter-comparisons among several sets of calculations and to assess their accuracy. However, the discrepancies shown in energies for a few levels are very large, i.e. up to 1.64 Ryd – see for example level 127 in their table 1. In our long experience of working with these codes and for a wide range of ions, including those of tungsten (Aggarwal and Keenan 2016), such large discrepancies have not been observed, and therefore their results appear suspect. Similarly, for the calculations of $\Omega$ they have adopted the $R$-matrix (the Dirac Atomic $R$-matrix Code, DARC: [http://amdpp.phys.strath.ac.uk/UK\_APAP/codes.html]{}) and the distorted-wave (DW) methods (FAC). Both of these codes are fully relativistic and the main difference between the two, for a highly ionised system such as W XXXIX, is the inclusion of closed-channel (Feshbach) resonances in the former but not in the latter.
Again, our past experience with many ions, including one of tungsten (W LXVI: Aggarwal 2016), shows that generally the background values of collision strengths ($\Omega_B$) are comparable between the two calculations for a majority of transitions, and over a large range of energies, provided a similar number of configurations and their configuration state functions (CSFs), or correspondingly the number of levels, is included in both. However, El-Maaref [*et al*]{} (2019) have noted large discrepancies in $\Omega$, in both magnitude and behaviour, between the two calculations – see for example, transitions 1–2 and 1–5 in their figure 1. Therefore, for the benefit of the readers, we demonstrate in this short comment that their calculations with all codes are incorrect, and that if the calculations are performed correctly and with care the discrepancies are not as striking as shown by them.
Energy levels and radiative rates
=================================
For calculating energy levels, El-Maaref [*et al*]{} (2019) have chosen two models consisting of: (i) 25 levels of the (4s$^2$)4p$^6$, 4p$^5$4d and 4p$^5$4f, and (ii) 357 levels, the additional ones arising from the inclusion of the 4p$^4$4d$^2$, 4p$^4$4f$^2$ and 4p$^5$5$\ell$ configurations. As already stated, they have performed three calculations with these models with the GRASP, FAC and AS codes, and the discrepancies shown for some of the levels are up to 1.64 Ryd – see in particular level 127 in their table 1. We have performed two calculations with GRASP and FAC for both models, but in table 1 compare the energies for only the larger model, and for the same levels as listed in their table 1. The orderings of levels are the same in both calculations and (nearly) match those of El-Maaref [*et al*]{}. However, there are two major differences between our and their calculations. Firstly, they have incorrectly identified the configurations for some of the levels, such as 34, 52, 61, 124 and 127, which belong to the 4p$^4$4d$^2$ configuration instead of 4p$^5$4f, as listed by them. In fact, this is the main reason that the discrepancies shown by them between the smaller and larger model calculations are up to 1.64 Ryd, as the correspondence of the levels between the two is incorrect. Secondly, the differences between our calculations with GRASP and FAC (GRASP2 and FAC2) are ‘expectedly’ insignificant, whereas in their work (GRASP1 and FAC1) these are up to 0.7 Ryd for most levels, and energies with GRASP are invariably higher, in spite of using the same [*configuration interaction*]{} (CI) in both codes. Finally, the differences between the GRASP1 and GRASP2 energies are generally within $\sim$0.15 Ryd, but between FAC1 and FAC2 are up to $\sim$0.5 Ryd, and their values are invariably [*lower*]{}.
[rllrrrrrr]{}\
Index & Configuration & Level & GRASP1 & FAC1 & GRASP2 & FAC2\
\
1 & 4p$^6$ & $^1$S$_0$ & 0.000 & 0.000 & 0.0000 & 0.0000\
2 & 4p$^5$4d & $^3$P$^o_0$ & 11.443 & 10.904 & 11.4184 & 11.4110\
3 & 4p$^5$4d & $^3$P$^o_1$ & 11.789 & 11.242 & 11.7577 & 11.7489\
4 & 4p$^5$4d & $^3$F$^o_3$ & 12.033 & 11.480 & 11.9984 & 11.9863\
5 & 4p$^5$4d & $^3$D$^o_2$ & 12.123 & 11.574 & 12.0935 & 12.0814\
6 & 4p$^5$4d & $^3$F$^o_4$ & 13.215 & 12.638 & 13.1466 & 13.1414\
7 & 4p$^5$4d & $^1$D$^o_2$ & 13.376 & 12.800 & 13.3137 & 13.3074\
8 & 4p$^5$4d & $^3$D$^o_3$ & 13.789 & 13.209 & 13.7273 & 13.7176\
9 & 4p$^5$4d & $^3$D$^o_1$ & 14.967 & 14.382 & 14.9056 & 14.8898\
10 & 4p$^5$4d & $^3$F$^o_2$ & 18.713 & 18.107 & 18.6037 & 18.6114\
11 & 4p$^5$4d & $^3$P$^o_2$ & 20.190 & 19.562 & 20.0555 & 20.0676\
12 & 4p$^5$4d & $^1$P$^o_1$ & 20.380 & 19.760 & 20.2773 & 20.2735\
13 & 4p$^5$4d & $^1$F$^o_3$ & 20.394 & 19.763 & 20.2544 & 20.2654\
14 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$D$_0$ & 22.703 & 22.096 & 22.6467 & 22.6245\
15 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$D$_1$ & 22.970 & 22.360 & 22.9081 & 22.8850\
16 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$F$_2$ & 23.093 & 22.477 & 23.0256 & 23.0021\
17 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$F$_3$ & 23.122 & 22.507 & 23.0568 & 23.0313\
18 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$G$_4$ & 23.333 & 22.710 & 23.2627 & 23.2352\
19 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^3$F$_2$ & 23.558 & 22.941 & 23.4934 & 23.4655\
20 & 4p$^4$($^1$S$_0$)4d$^2$($^3$F$_2$) & $^3$F$_2$ & 23.998 & 23.377 & 23.9335 & 23.9021\
21 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$F$_4$ & 24.302 & 23.662 & 24.2059 & 24.1874\
22 & 4p$^4$($^3$P$_2$)4d$^2$($^3$P$_2$) & $^5$S$_2$ & 24.437 & 23.790 & 24.3447 & 24.3259\
23 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$G$_5$ & 24.438 & 23.801 & 24.3337 & 24.3150\
24 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$D$_3$ & 24.480 & 23.840 & 24.3841 & 24.3651\
25 & 4p$^4$($^1$S$_0$)4d$^2$($^3$P$_2$) & $^3$P$_0$ & 24.520 & 23.892 & 24.4587 & 24.4235\
26 & 4p$^4$($^3$P$_2$)4d$^2$($^3$P$_2$) & $^5$P$_3$ & 24.645 & 24.005 & 24.5508 & 24.5304\
27 & 4p$^4$($^3$P$_2$)4d$^2$($^3$P$_2$) & $^5$D$_4$ & 24.652 & 24.005 & 24.5500 & 24.5299\
28 & 4p$^4$($^3$P$_2$)4d$^2$($^1$G$_2$) & $^3$G$_5$ & 24.759 & 24.106 & 24.6521 & 24.6305\
29 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^3$D$_1$ & 24.855 & 24.216 & 24.7607 & 24.7408\
30 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^5$G$_6$ & 24.910 & 24.251 & 24.7983 & 24.7759\
31 & 4p$^4$($^3$P$_2$)4d$^2$($^1$G$_2$) & $^3$F$_2$ & 25.060 & 24.416 & 24.9647 & 24.9406\
32 & 4p$^4$($^1$S$_0$)4d$^2$($^3$F$_2$) & $^3$F$_3$ & 25.371 & 24.725 & 25.2765 & 25.2498\
33 & 4p$^4$($^3$P$_2$)4d$^2$($^3$F$_2$) & $^3$D$_2$ & 25.597 & 24.951 & 25.4964 & 25.4750\
34 & 4p$^4$($^3$P$_2$)4d$^2$($^1$G$_2$) & $^3$G$_4$ & 25.658 & 25.020 & 25.5575 & 25.5415\
35 & 4p$^4$($^1$D$_2$)4d$^2$($^3$P$_2$) & $^3$D$_1$ & 25.689 & 25.043 & 25.5963 & 25.5680\
49 & 4p$^5$4f & $^3$D$_1$ & 27.067 & 26.455 & 26.9730 & 26.9689\
52 & 4p$^4$($^3$P$_2$)4d$^2$($^1$S$_0$) & $^3$P$_2$ & 27.437 & 26.789 & 27.3180 & 27.3039\
56 & 4p$^5$4f & $^3$D$_3$ & 27.770 & 27.133 & 27.6547 & 27.6483\
57 & 4p$^5$4f & $^3$G$_5$ & 27.892 & 27.241 & 27.7671 & 27.7570\
61 & 4p$^4$($^1$S$_0$)4d$^2$($^1$S$_0$) & $^1$S$_0$ & 28.044 & 27.387 & 27.9279 & 27.8935\
62 & 4p$^5$4f & $^1$F$_3$ & 28.166 & 27.515 & 28.0512 & 28.0331\
63 & 4p$^5$4f & $^3$F$_4$ & 28.377 & 27.727 & 28.2575 & 28.2437\
\
[rllrrrrrr]{}\
Index & Configuration & Level & GRASP1 & FAC1 & GRASP2 & FAC2\
\
112 & 4p$^5$4f & $^3$F$_2$ & 33.688 & 32.990 & 33.5191 & 33.5116\
122 & 4p$^5$4f & $^3$G$_3$ & 35.120 & 34.424 & 34.9433 & 34.9384\
124 & 4p$^4$($^3$P$_2$)4d$^2$($^3$P$_2$) & $^1$S$_0$ & 35.340 & 34.526 & 35.1746 & 35.1403\
127 & 4p$^4$($^1$D$_2$)4d$^2$($^1$G$_2$) & $^1$D$_2$ & 35.873 & 35.161 & 35.7008 & 35.6786\
\
[GRASP1: earlier calculations of El-Maaref [*et al*]{} (2019) with the [grasp]{} code for 357 levels\
GRASP2: present calculations with the [grasp]{} code for 357 levels\
FAC1: earlier calculations of El-Maaref [*et al*]{} (2019) with the [fac]{} code for 357 levels\
FAC2: present calculations with the [fac]{} code for 357 levels\
]{}
In conclusion, we would like to state that there is scope for improvement in the calculated energy levels of El-Maaref [*et al*]{} (2019), but their identification of the configuration is incorrect for several levels, and this is the main reason for the large discrepancies shown by them for different models and codes. Finally, El-Maaref [*et al*]{} have only provided the J values for the levels but we have also listed the LSJ$^{\pi}$ designations in table 1, which may be helpful for further comparisons. Without this there is no distinction among levels with the same J value belonging to the same configuration. For example, there are six J = 2 levels for the 4p$^4$4d$^2$ configuration shown in their table 1, i.e. levels 16, 19, 20, 22, 31, and 33. However, these level designations should be used with caution as these may not always be definitive (and are only for guidance), because some of them are highly mixed. As an example, level 127 is a mixture of 0.33 4p$^5$4f $^3$F$_2$ (22), 0.43 4p$^5$4f $^1$D$_2$ (24), 0.21 4p$^4$4d$^2$($^3$F$_2$) $^3$F$_2$ (82), 0.34 4p$^4$4d$^2$($^3$F$_2$) $^1$D$_2$ (84), 0.23 4p$^4$4d$^2$($^1$G$_2$) $^3$F$_2$ (91), and 0.48 4p$^4$4d$^2$($^1$G$_2$) $^1$D$_2$ (102). Hence, this level (and many more) is a mixture of several J levels from two different configurations, which makes it difficult to provide a unique identification for all levels, and this is a general atomic physics problem in all codes and methods. However, the J$^{\pi}$ values and their orderings are definitive and can be used with confidence.
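The ambiguity for strongly mixed levels can be made concrete with a short script: treating the numbers quoted for level 127 above as mixing coefficients, the conventional label (the basis state with the largest squared coefficient) carries well under half of the listed weight.

```python
# Why level identification is ambiguous for strongly mixed levels: the
# label is taken from the basis state with the largest squared mixing
# coefficient, which for level 127 carries less than half of the weight.
# (The coefficients are those quoted in the text; the truncated list does
# not sum to unity.)
mix = {
    "4p5 4f 3F2":       0.33,
    "4p5 4f 1D2":       0.43,
    "4p4 4d2(3F2) 3F2": 0.21,
    "4p4 4d2(3F2) 1D2": 0.34,
    "4p4 4d2(1G2) 3F2": 0.23,
    "4p4 4d2(1G2) 1D2": 0.48,
}
weights = {k: c * c for k, c in mix.items()}
total = sum(weights.values())
label, w = max(weights.items(), key=lambda kv: kv[1])
print(label, w / total)
```

No single component dominates, so any one-line designation is a matter of convention rather than physics.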
For the A-values, there are no (major) discrepancies between our calculations and those of El-Maaref [*et al*]{} (2019), although the differences for the 3–25 transition (f = 1.3$\times$10$^{-4}$) are significant (70%), because their value with GRASP is 2.94$\times$10$^8$ s$^{-1}$ whereas ours is 5.00$\times$10$^8$ s$^{-1}$, matching well with the corresponding FAC results, theirs being 5.24$\times$10$^8$ and ours 4.19$\times$10$^8$ s$^{-1}$. This difference also cannot be attributed to a typing mistake, because all three results (from GRASP, FAC and AS) listed in their table 2 differ by up to a factor of four. Since this is a weak transition, such differences may sometimes appear among different calculations.
Collision strengths
===================
El-Maaref [*et al*]{} (2019) have performed two sets of calculations with DARC for $\Omega$, adopting the smaller and larger models with 25 and 307 (out of 357) levels, listed in section 2. They have resolved resonances in the thresholds region with an energy mesh of 0.02 Ryd, and have calculated results up to an energy of 64 Ryd, which does not even cover the entire thresholds region, as it extends to $\sim$80 Ryd. They have also performed similar calculations, although without resonances, with the DW method as implemented in the AS and FAC codes, and have shown comparisons for several transitions in their figure 1. However, there are three obvious errors in those comparisons. Firstly, for [*inelastic*]{} transitions $\Omega$ results are not possible for energies [*below*]{} thresholds, as shown by them. This has happened because DARC lists results w.r.t. [*incident*]{} energies, whereas the DW codes list them w.r.t. the excited ones, and they have not realised this difference. Secondly, for two transitions, namely 1–2 (4p$^6$ $^1$S$_0$ – 4p$^5$4d $^3$P$^o_0$) and 1–5 (4p$^6$ $^1$S$_0$ – 4p$^5$4d $^3$D$^o_2$), they have shown a sudden [*jump*]{} in $\Omega$s at energies above $\sim$25 Ryd, and have therefore concluded that the DW results may be inaccurate. However, this kind of jump neither happens in the calculations nor can be explained. Thirdly, they are unable to distinguish between collision strengths ($\Omega$) and cross-sections ($\sigma$), in spite of giving the relationship between the two in their eq. (8). This is because in their figures 2, 3 and 4 (and in the related text) they are plotting $\Omega$ (a dimensionless quantity) but describing $\sigma$. Apart from these obvious errors there are others which are more important, as discussed below.
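To make the $\Omega$ versus $\sigma$ distinction concrete, the standard conversion $\sigma_{ij} = \pi a_0^2\,\Omega_{ij}/(\omega_i k_i^2)$, with $k_i^2$ the incident-electron energy in Ryd and $\omega_i$ the statistical weight of the lower level, can be sketched as follows; the numerical values of $\Omega$ and energy are illustrative only.

```python
import math

A0 = 0.529177e-8   # Bohr radius in cm

def cross_section_cm2(omega, stat_weight_i, energy_ryd):
    """sigma_ij = pi a0^2 Omega_ij / (w_i k_i^2), with k_i^2 in Ryd."""
    return math.pi * A0**2 * omega / (stat_weight_i * energy_ryd)

# Example: Omega = 0.01 for excitation out of 4p^6 1S0 (w_i = 1) at an
# incident energy of 20 Ryd (illustrative numbers, not calculated data).
sigma = cross_section_cm2(0.01, 1, 20.0)
print(sigma)
```

The cross-section carries units (cm$^2$) and falls off with energy even when $\Omega$ is constant, which is precisely why the two quantities cannot be used interchangeably in figures or text.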
With FAC we have performed two sets of calculations by including the (i) 25 (FAC1) and (ii) 357 (FAC2) levels, described in section 2. In figure 1 we compare our results of $\Omega$ with those of El-Maaref [*et al*]{} (2019), obtained with DARC. It may be noted that energies shown in this and other figures are all [*incident*]{} although FAC lists the excited ones. For brevity, only three transitions are shown here, which are (a) 1–2 (4p$^6$ $^1$S$_0$ – 4p$^5$4d $^3$P$^o_0$), (b) 1–3 (4p$^6$ $^1$S$_0$ – 4p$^5$4d $^3$P$^o_1$) and (c) 1–5 (4p$^6$ $^1$S$_0$ – 4p$^5$4d $^3$D$^o_2$). This is because for 1–2 and 1–5 they have shown sudden jumps with the FAC results, both of which are forbidden transitions, and 1–3 is the only allowed one. Additionally, for clarity comparisons are shown only at energies above thresholds because our calculations do not include resonances. Our interest is in comparing $\Omega_B$, which are more important and should be comparable between the FAC and DARC calculations.
For the 1–2 and 1–5 transitions both the FAC1 and FAC2 $\Omega$ are comparable over the entire energy range and hence do not support the findings of El-Maaref [*et al*]{} (2019), who show different $\Omega_B$ for the two models; neither are there any jumps, as demonstrated by them. For the 1–3 transition, the two results are indeed different, but this is an [*allowed*]{} one, for which $\Omega$ directly depends on the f-value and the energy difference, $\Delta$E$_{i,j}$. The $\Delta$E$_{i,j}$ in FAC1 and FAC2 are comparable (11.25 and 11.75 Ryd, respectively), but the f-values are different, i.e. 0.0076 and 0.0065, respectively, and hence the differences in $\Omega$s. The corresponding f-values with GRASP are also similar (i.e. 0.0073 and 0.0064, respectively) and show no discrepancy with the results of El-Maaref [*et al*]{}. Therefore, their results for $\Omega$ with both codes should have been comparable but are strikingly different, for all transitions, as seen in the present or their figure 1. Their results are clearly [*underestimated*]{}, by up to over an order of magnitude, because they have calculated $\Omega$ with [*limited*]{} partial waves with angular momentum $J \le$ 9.5, which is [*not*]{} sufficient for convergence. This fact has been emphasised and demonstrated in several of our papers – see for example, Aggarwal and Keenan (2008) for Ni XI and Aggarwal (2016) for W LXVI. Similarly, $\Omega$ values for allowed transitions (generally) increase with energy whereas those of El-Maaref [*et al*]{} decrease, as seen in figure 1b. This is because an increasingly large number of partial waves is required with increasing energy, which they have not included.
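The effect of truncating the partial-wave sum can be illustrated with a toy calculation (this is not a scattering computation; the geometric decay rate of the partial contributions is an assumption chosen only to mimic the slow fall-off typical of allowed transitions):

```python
# Toy illustration of why stopping the partial-wave sum near J <= 9.5
# underestimates Omega when the partial contributions decay slowly with J.
# Omega_J is modelled as a geometric tail with an assumed ratio of 0.95.
ratio = 0.95
partial = [ratio**j for j in range(200)]   # Omega_J for j = 0, 1, 2, ...

omega_truncated = sum(partial[:10])        # roughly J <= 9.5
omega_converged = sum(partial)             # effectively converged sum

print(omega_truncated / omega_converged)
```

In this toy model the truncated sum recovers well under half of the converged value, and the shortfall grows with energy as the tail decays ever more slowly, qualitatively reproducing the underestimation described above.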
In figure 2 we make similar comparisons for one more transition, i.e. 3–5 (4p$^5$4d $^3$P$^o_1$–4p$^5$4d $^3$D$^o_2$), which is semi-forbidden and for which the magnitude of $\Omega$ is much larger than for those shown in figure 1. However, the discrepancies and conclusions are the same for this transition as for 1–2 and 1–5, and the $\Omega$ values of El-Maaref [*et al*]{} (2019) with DARC are clearly underestimated. Finally, in figure 3 we consider the 1–9 (4p$^6$ $^1$S$_0$ – 4p$^5$4d $^3$D$^o_1$) transition, which is allowed and strong with f = 1.321 in FAC1 and 1.107 in FAC2, comparable with the GRASP calculations with f = 1.3006 and 1.1062, respectively. Since $\Delta$E$_{i,j}$ is comparable in both models, the differences in $\Omega$ are proportional to those in the f-values alone. Therefore, based on the comparisons shown in figures 1–3 we conclude that the $\Omega$ results reported by El-Maaref [*et al*]{} are highly underestimated (by up to over an order of magnitude), for both forbidden and allowed transitions, because of the inclusion of a limited range of partial waves, and hence non-convergence. Their results with FAC are also incorrect because of the sudden jumps in the $\Omega$ behaviour, which we neither observe nor expect. Apart from this, there are other deficiencies in their work, which we discuss below.
As is well known and has also been emphasised by El-Maaref [*et al*]{} (2019), tungsten is one of the most useful materials for fusion reactors, and atomic data for its ions are required for a variety of studies. Since the temperatures in fusion plasmas are very high (in the range of $\sim$ 10$^6$ to 10$^8$ K, or equivalently 6.3 to 633 Ryd), calculations for $\Omega$ need to be performed up to very high energies, and the one performed by them up to only 64 Ryd is of no use. Since the highest threshold considered in their work (level 357) is $\sim$80 Ryd, the calculations for $\Omega$ should be performed up to, at least, 700 Ryd. Unfortunately, their $\Omega$ values cannot be reliably extrapolated to higher energies either, and if this is attempted it may lead to faulty results, as demonstrated and concluded in some of our work on Be-like ions, see for example, Aggarwal and Keenan (2015a,b) and Aggarwal [*et al*]{} (2016). Similarly, the resonances demonstrated by them for a few transitions are only useful for calculating the [*effective*]{} collision strengths ($\Upsilon$), obtained after integration over an electron velocity distribution function, mostly [*Maxwellian*]{}. Since their energy resolution is rather coarse (0.02 Ryd, or equivalently 3160 K), this will not yield accurate and reliable results, as has recently been discussed and demonstrated by us (Aggarwal 2018a,b) for two F-like ions. Finally, they have reported their results for only 57 transitions, out of the possible 46 971 among the 307 levels considered, i.e. $\sim$0.1%. For plasma modelling, results for a complete model are desired and therefore their reported data are not of much practical use.
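The resolution point can be illustrated with a toy Maxwellian average, $\Upsilon(T)=\int_0^\infty \Omega(E_j)\exp(-E_j/kT)\,d(E_j/kT)$, for an assumed $\Omega(E)$ consisting of a flat background plus one narrow Lorentzian resonance (background, resonance position and width, and temperature are all invented, not taken from any W XXXIX calculation):

```python
import numpy as np

# Toy Maxwellian average of Omega(E): coarse vs fine energy meshes.
kT = 0.5                              # temperature in Ryd (~7.9e4 K), assumed

def omega(e):
    background = 0.01
    resonance = 0.5 * 0.0005**2 / ((e - 0.10)**2 + 0.0005**2)
    return background + resonance

def upsilon(de):
    e = np.arange(0.0, 10.0, de)      # scattered-electron energy grid (Ryd)
    return float(np.sum(omega(e) * np.exp(-e / kT)) * de / kT)

coarse = upsilon(0.02)                # the 0.02 Ryd mesh criticised above
fine = upsilon(0.0001)                # a mesh that resolves the resonance
print(coarse, fine)
```

With the resonance happening to fall on a coarse-grid point, the 0.02 Ryd mesh grossly over-weights it; shifted slightly, the same mesh would miss the resonance entirely. Either way the resulting $\Upsilon$ is unreliable, which is the essence of the resolution criticism.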
Conclusions
===========
In this short communication, we have demonstrated that the collisional data recently reported by El-Maaref [*et al*]{} (2019) for transitions in Kr-like W XXXIX are highly underestimated for all transitions, irrespective of their types, such as forbidden or allowed. This is because they have considered a very limited range of partial waves with $J \le$ 9.5, highly insufficient for the convergence of $\Omega$. Similarly, their reported results are for about 0.1% of the possible transitions among the considered 307 levels, and are therefore of limited practical use, even if correct. Therefore, a more reliable, accurate and complete set of collisional data for this ion is highly desirable. Similarly, there is scope for improvement in the accuracy of the results reported for energy levels and radiative rates.
It has been suggested by several authors in the past (see for example, Aggarwal and Keenan 2013, Chung [*et al*]{} 2016 and Aggarwal 2017) that to assess the reliability and accuracy of atomic data independent calculations should be performed, either with the same or a different method/code, which El-Maaref [*et al*]{} (2019) have done. However, such calculations are only useful if performed diligently and carefully, and differences, if any, are resolved rather than ignored. Similarly, merely comparing data for a few levels or transitions may result in faulty conclusions, as has been done by El-Maaref [*et al*]{}, and earlier by Tayal and Sossah (2015), as explained in our paper on Mg V (Aggarwal and Keenan 2017). Therefore, we would like to emphasise again that the reliability of any calculation does not (much) depend on the (in)accuracy of the method/code adopted, but on its implementation. An incorrect application of a code may lead to large discrepancies and faulty conclusions. Finally, for the benefit of the readers we would like to note that some other works reported by El-Maaref and co-workers on other ions are deficient and erroneous for similar reasons, as recently demonstrated and explained by us for W XLV (Aggarwal 2019a), Mn X (Aggarwal 2019b), and Sc VI (Aggarwal 2019c,d).
[999]{}
Aggarwal K M 2016 [*Atoms*]{} [**4**]{} 24\
Aggarwal K M 2017 [*Atoms*]{} [**5**]{} 37\
Aggarwal K M 2018a [*Can. J. Phys.*]{} [**96**]{} 1155\
Aggarwal K M 2018b [*Can. J. Phys.*]{} [**96**]{} 1158\
Aggarwal K M 2019a [*J. Quant. Spect. Rad. Transf.*]{} [**231**]{} 136\
Aggarwal K M 2019b [*J. Elect. Spect. Related Phen.*]{} [****]{} in press\
Aggarwal K M 2019c [*Indian J. Phys.*]{} [****]{} submitted\
Aggarwal K M 2019d [*At. Data Nucl. Data Tables*]{} [****]{} in press\
Aggarwal K M and Keenan F P 2008 [*Eur. Phys. J.*]{} D [**46**]{} 205\
Aggarwal K M and Keenan F P 2013 [*Fus. Sci. Tech.*]{} [**63**]{} 363\
Aggarwal K M and Keenan F P 2015a [*Month. Not. R. Astron. Soc.*]{} [**447**]{} 3849\
Aggarwal K M and Keenan F P 2015b [*Month. Not. R. Astron. Soc.*]{} [**450**]{} 1151\
Aggarwal K M and Keenan F P 2016 [*At. Data Nucl. Data Tables*]{} [**111-112**]{} 187\
Aggarwal K M and Keenan F P 2017 [*Can. J. Phys.*]{} [**95**]{} 9\
Aggarwal K M, Keenan F P and Lawson K D 2016 [*Month. Not. R. Astron. Soc.*]{} [**461**]{} 3997\
Badnell N R 1997 [*J. Phys. B*]{} [**30**]{} 1\
Chung H-K, Braams B J, Bartschat K, Cs[á]{}sz[á]{}r A G, Drake G W F, Kirchner T, Kokoouline V and Tennyson J 2016 [*J. Phys.*]{} D [**49**]{} 363002\
El-Maaref A A, Abou Halaka M M, Tammam M, Shaaban E R and Yousef E S 2019 [*J. Phys. B*]{} [**52**]{} 065202\
Tayal S S and Sossah A M 2015 [*Astron. Astrophys.*]{} [**574**]{} A87\
---
abstract: 'Motivated by recent evidence pointing out the fragility of high-performing span prediction models, we direct our attention to multiple choice reading comprehension. In particular, this work introduces a novel method for improving answer selection on long documents through weighted global normalization of predictions over portions of the documents. We show that applying our method to a span prediction model adapted for answer selection helps model performance on long summaries from NarrativeQA, a challenging reading comprehension dataset with an answer selection task, and we strongly improve on the task baseline performance by +36.2 Mean Reciprocal Rank.'
author:
- |
Aditi Chaudhary$^{1*}$, Bhargavi Paranjape$^{1*}$, Michiel de Jong$^{2}$ [^1]\
Carnegie Mellon University$^1$ University of Southern California$^2$\
` {aschaudh, bvp}@cs.cmu.edu, msdejong@usc.edu`
bibliography:
- 'emnlp2018.bib'
title: Weighted Global Normalization for Multiple Choice Reading Comprehension over Long Documents
---
Introduction
============
Recent years have seen increased interest from the research community in the development of deep reading comprehension models, spurred by the release of datasets such as SQuAD [@squad]. For a majority of these datasets, the top performing models employ span prediction, selecting a span of tokens from the reference document that answers the question. Such models have been very successful; the best model on the SQuAD leaderboard approaches human performance [@qanet]. However, this strong performance may be deceptive: [@adversarial] show that inserting lexically similar adversarial sentences into the passages sharply reduces performance.
One possible reason for this disparity is that standard span prediction is an easy task. The information required to evaluate whether a span is the correct answer is often located right next to the span. @kernelselection transform the SQuAD dataset into a sentence selection task where the goal is to predict the sentence that contains the correct span. They achieve high accuracy on this task using simple heuristics that compare lexical similarity between the question and each sentence individually, without additional context. Selecting an answer from a list of candidate answers that are lexically dissimilar to the context makes it more challenging for models to retrieve the relevant information. For that reason, we focus on reading comprehension for answer selection.
Another common weakness of reading comprehension datasets is that they consist of short paragraphs. This property also makes it easier to locate relevant information from the context. Realistic tasks require answering questions over longer documents.
Building on [@clarkgardner], we propose a weighted global normalization method to improve the performance of reading comprehension models for answer selection on long documents. First, we adapt global normalization to the multiple-choice setting by applying a reading comprehension model in parallel over fixed length portions (chunks) of the document and normalizing the scores over all chunks. Global normalization encourages the model to produce low scores when it is not confident that the chunk it is considering contains the information to answer the question. Then we incorporate a weighting function to rescale the contribution of different chunks. In our work we use a multilayer perceptron over the scores and a TF-IDF heuristic as our weighting function, but more complex models are possible.
We experiment on the answer selection task over story summaries from the recently released NarrativeQA [@narrativeqa] dataset. It provides an interesting and challenging test bed for reading comprehension, as the summaries are long and the answers to questions often do not occur in the summaries. We adopt the three-way attention model [@triattention], an adapted version of the BiDAF [@bidaf] span prediction model, in order to evaluate our method. We show that straightforward application of the answer selection model to entire summaries fails to outperform the model where the context is removed, demonstrating the weakness of current reading comprehension (RC) models. Inspired by @chen2017reading, we show that using TF-IDF to reduce the context, and applying global normalization on top of the reduced context, both significantly improve performance. We observe that incorporating TF-IDF scores into the model with weighted global normalization helps improve performance more than either technique individually.
We view our contribution as twofold:
- We introduce a novel weighted global normalization method for multiple choice question answering over long context documents.
- We improve over the baseline of NarrativeQA answer selection task by a large margin, setting a competitive baseline for this interesting and challenging reading comprehension task.
Model Architecture {#model}
==================
While span prediction models are not directly applicable to the answer selection task, the methods used to represent the context and question carry over. We base our model on the multiple choice architecture in [@triattention]. Taking inspiration from the popular BiDAF architecture [@bidaf], the authors employ three-way attention over the context, question, and answer candidates to create and score answer representations. This section outlines our version of that architecture, denoted by *T-Attn*.
#### Word Embedding Layer.
We use pre-trained word embedding vectors to represent the tokens of the query, the context and all candidate answers.
#### Attention Flow Layer.
The interaction between query and context is modeled by computing a similarity matrix. This matrix is used to weigh query tokens to generate a query-aware representation for each context token. We compute query-aware-answer and context-aware-answer representations in a similar fashion. The representation of a token $u$ in terms of a sequence $\mathbf{v}$ is computed as: $$\begin{gathered}
\mathit{Attn_{seq}}(u, \{v_i\}_{i=1}^n) = \sum_{i=1}^{n}\alpha_iv_i\\
\alpha_i = \mathit{softmax}(\mathit{f}(\mathbf{W}u)^T\mathit{f}(\mathbf{W}v_i))\end{gathered}$$ where $\mathit{f}$ is ReLU activation.
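As a concrete illustration, the attention step above can be sketched in a few lines of NumPy. The weight matrix `W` and the vectors here are stand-ins for the learned parameters and hidden representations; this is a sketch of the equations, not the authors' implementation:

```python
import numpy as np

def attn_seq(u, V, W):
    """Attend over a sequence V (n x d) from a token vector u (d,),
    following Attn_seq: alpha_i = softmax(f(Wu)^T f(Wv_i)), f = ReLU,
    and the output is sum_i alpha_i v_i."""
    relu = lambda x: np.maximum(x, 0.0)
    fu = relu(W @ u)                  # f(Wu)
    fV = relu(V @ W.T)                # rows are f(Wv_i)
    scores = fV @ fu                  # one similarity score per token
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()       # softmax over the sequence
    return alpha @ V                  # attention-weighted sum of the v_i
```

The same routine, applied with different arguments, yields the query-aware-context, query-aware-answer and context-aware-answer representations described above.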
#### Modeling Layer.
We encode the query, context and the candidate answers by applying a Bi-GRU to a combination of the original and query/context-aware representations. Consequently, we obtain: $$\begin{gathered}
\mathbf{h_q} = \text{Bi-GRU}({w_i}_{i=1}^{|Q|})\\
\mathbf{h_c} = \text{Bi-GRU}([{w_i;w_i^q}]_{i=1}^{|C|})\\ \mathbf{h_a} = \text{Bi-GRU}([{w_i;w_i^q;w_i^c}]_{i=1}^{|a|})\end{gathered}$$ where $w_i$ are token embeddings, and $w_i^q$ and $w_i^c$ are query-aware and context-aware token representations respectively.
#### Output Layer.
The query and each candidate answer are re-weighted by weights learned through a linear projection of the respective vectors. Context tokens are re-weighted by taking a bilinear attention with **q**. $$\begin{gathered}
\mathbf{q}=Attn_{self}\mathbf{h_q}^{|Q|}\\ \mathbf{a} = Attn_{self}\mathbf{h_a}^{|a|} \\
\mathbf{c} = Attn_{seq}(\mathbf{q},{h_c}^{|C|})\\
\mathit{Attn_{self}}(u_i) = \sum_{i=1}^{n}\mathit{softmax}(\mathbf{W}^\top u_i)u_i\end{gathered}$$ where $\mathbf{h_x}$ is the hidden representation from the respective modeling layers. We adapt the output layer used in [@triattention] for multiple-choice answers by employing a feed-forward network ($ffn$) to compute scores $S_a$ for each candidate answer $a$. The formulation is: $$\begin{gathered}
\mathbf{l_{a^q}} = \mathbf{q}^\top \mathbf{a} \quad ; \quad
\mathbf{l_{a^c}} = \mathbf{c}^\top \mathbf{a}\\
S_a = ffn([\mathbf{l_{a^q}};\mathbf{l_{a^c}}])\end{gathered}$$ Standard cross-entropy loss over all answer candidates is used for training.
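To make the scoring step concrete, the sketch below computes the two scalar features $l_{a^q}$ and $l_{a^c}$ for every candidate and pushes them through a one-hidden-layer feed-forward net. The weight shapes here are illustrative assumptions (the paper's network is 3 layers deep), not the actual trained parameters:

```python
import numpy as np

def answer_probs(q, c, A, W1, b1, w2, b2):
    """q, c: (d,) query / context summary vectors; A: (n, d) candidate
    answer vectors. Features per candidate: l_aq = q.a and l_ac = c.a,
    scored by a small feed-forward net and normalized with a softmax,
    as used in the cross-entropy training objective."""
    feats = np.stack([A @ q, A @ c], axis=1)      # (n, 2) feature matrix
    hidden = np.maximum(feats @ W1.T + b1, 0.0)   # ReLU hidden layer
    scores = hidden @ w2 + b2                     # (n,) candidate scores S_a
    p = np.exp(scores - scores.max())
    return p / p.sum()                            # distribution over candidates
```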
Evaluation
==========
In this section, we first discuss the different methods, including our proposed method, for handling the challenge of long context documents. Since existing methods do not apply to our answer selection task as is, we also discuss the adjustments we made to them for comparison with our proposed approach. Throughout, we use the *T-Attn* model, described in Section \[model\], as our standard reading comprehension model.
Existing methods
----------------
#### Baseline.
The baseline applies the reading comprehension model (*T-Attn* in our case) to the entire long context document.
#### Heuristic context reduction.
A simple method to make reading comprehension on longer contexts manageable is to use heuristic information retrieval methods, like TF-IDF as used by [@chen2017reading], to reduce the context first and then apply any of the standard reading comprehension models to this reduced context.
We divide the summaries (the long context) into chunks of approximately 40 tokens. These chunks are then ranked by their TF-IDF score with either the question (during validation and testing) or with the question and the gold answer (during training). We then apply the reading comprehension model to the $k$ top-ranked chunks, experimenting with $k=1$ and $k=5$. Reducing the context in this way makes it easier for the reading comprehension model to locate relevant information, but runs the risk of eliminating important context.
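A minimal version of this retrieval step, with a hand-rolled TF-IDF score in place of a library implementation, might look as follows; the exact scoring details are an illustrative assumption, not necessarily the heuristic used in the experiments:

```python
import math
from collections import Counter

def top_k_chunks(chunks, query_tokens, k=5):
    """Rank token-chunks against the query by a simple TF-IDF overlap
    and return the indices of the k best chunks."""
    n = len(chunks)
    df = Counter()                      # document frequency of each term
    for chunk in chunks:
        df.update(set(chunk))
    def score(chunk):
        tf = Counter(chunk)             # term frequency within the chunk
        return sum(tf[t] * math.log(n / (1 + df[t])) for t in set(query_tokens))
    order = sorted(range(n), key=lambda i: score(chunks[i]), reverse=True)
    return order[:k]
```

During training the gold answer's tokens would simply be appended to `query_tokens`.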
#### Global Normalization.
@clarkgardner improve span prediction over multiple paragraphs by applying a reading comprehension model to each paragraph separately and globally normalizing span scores over all paragraphs. This technique does not directly apply to the case of answer selection, as the correct answer is not a span and hence not tied to a specific paragraph. We implement an adjusted version of global normalization, in which we apply a reading comprehension model to each paragraph (in our case, each chunk $j$) separately, and sum all chunk scores over the answer candidates. The probability of answer candidate $i$ being correct, given $m$ chunks is
$$p_i = \frac{\sum_{j=1}^m e^{s_{ij}}}{\sum_{j=1}^m \sum_{i=1}^n e^{s_{ij}}}
\label{gn}$$
where $s_{ij}$ is score given by $j^{th}$ chunk to $i^{th}$ answer candidate.
Normalizing in this manner encourages the model to produce lower scores for paragraphs that do not contain sufficient information to confidently answer the question.
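Equation \[gn\] amounts to a single softmax over all (answer, chunk) score pairs followed by a marginalization over chunks. A numerically stable NumPy sketch of the formula (an illustration, not the training code):

```python
import numpy as np

def global_norm(S):
    """S[i, j] = s_ij, the score chunk j assigns to answer candidate i.
    Returns p_i = sum_j exp(s_ij) / sum_ij exp(s_ij). Subtracting the
    max score first is safe because the shift cancels in the ratio."""
    E = np.exp(S - S.max())
    return E.sum(axis=1) / E.sum()
```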
Proposed method
---------------
#### Weighted global normalization.
The global normalization method relies on the reading comprehension model to learn what paragraphs contain useful information, rather than using the TF-IDF heuristic alone to eliminate paragraphs, which runs the risk of mistakenly culling useful context.
We incorporate the TF-IDF scores $h_j$ into the reading comprehension model to re-weight each chunk as follows: $$p_i = \frac{ \sum_{j=1}^m z_j e^{s_{ij}}}{\sum_{j=1}^m z_j \sum_{i=1}^n e^{s_{ij}}}
\label{wgn}$$ where $z_j=h_j$. These static scores for each chunk can be substituted with a learned function of the scores. For the purpose of demonstration, we use a simple multilayer perceptron as the learned function, but it can easily be replaced with any function of choice. Hence, in Equation \[wgn\], $z_j = \mathbf{W_2}(ReLU(\mathbf{W_1}[h, s_{:,j}]))$, and we refer to this model as *Wt-MLP*.
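Equation \[wgn\] can be sketched in the same spirit: the only change relative to plain global normalization is that each chunk's column of exponentiated scores is rescaled by $z_j$ before normalizing (with $z_j$ a TF-IDF score, or the output of the *Wt-MLP*). A sketch under the paper's notation, not the released code:

```python
import numpy as np

def weighted_global_norm(S, z):
    """S[i, j] = s_ij; z[j] >= 0 is the weight for chunk j (e.g. its
    TF-IDF score, or an MLP output). Implements
    p_i = sum_j z_j exp(s_ij) / sum_ij z_j exp(s_ij)."""
    E = np.exp(S - S.max())    # stable: the shift cancels in the ratio
    num = E @ z                # sum_j z_j e^{s_ij}, one entry per candidate i
    return num / num.sum()
```

Setting `z = np.ones(m)` recovers unweighted global normalization.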
The adapted Tri-Attention architecture with weighted global normalization for multiple choice answers, is shown in Figure \[architecture\].
![Weighted Global Normalization for Answer Selection[]{data-label="architecture"}](EMNLP2018Diagram.jpg){width="45.00000%"}
--------------------------------- ---------------------- ---------------- -----------
**Model** **Training Context** **Validation** **Test**
MRR MRR
Weighted-Global-Norm (WGN-MLP) Top 5 chunks **0.631** **0.621**
Weighted-Global-Norm (WGN) Top 5 chunks 0.625 0.613
Global-Norm (GN) Top 5 chunks 0.573 0.568
5 uniform chunks 0.545 0.531
Vanilla Tri-Attention (T-Attn) Top 1 chunk 0.601 0.591
Full Summary 0.523 0.516
No Context 0.525 0.522
NarrativeQA Baseline (ASReader) Full Summary 0.269 0.259
--------------------------------- ---------------------- ---------------- -----------
-------- ------- ----------- ----------- ----------
**Top-1** **Top-5** **Full**
WGN Top-5 0.595 0.625 0.611
GN Top-5 0.605 0.573 0.518
T-Attn Top-1 0.601 0.551 0.471
-------- ------- ----------- ----------- ----------
: Ablation: Performance of different models on different validation context sizes[]{data-label="ablation"}
Experiments
===========
Data
----
The NarrativeQA dataset consists of 1572 movie scripts and books ([Project Gutenberg](https://www.gutenberg.org/)) in the public domain, with 46765 corresponding questions. Each document is also accompanied by a summary extracted from Wikipedia. In this paper, we focus on the summaries, which have an average length of 659 tokens. For comparison, the average context size in the SQuAD dataset is less than 150 tokens. As described in [@narrativeqa], the candidates for a question comprise the answers [^2] to all the questions for a document. There are approximately 30 answer candidates per document.
Implementation Details
----------------------
We split each summary into chunks of approximately 40 tokens, respecting sentence boundaries. We use 300 dimensional pre-trained *GloVe* word embeddings [@pennington2014glove], which are held fixed during training. We use a hidden dimension size of 128 in recurrent layers and 256 in linear layers. We use a 2-layer Bi-GRU in the modeling layer and a 3-layer feed-forward network for the output layer. The *Wt-MLP* in weighted global normalization is 1 layer deep and takes the following features as input: TF-IDF scores, and the max, min, average and standard deviation of the answer scores over all chunks. A dropout of 0.2 is used, with the Adam optimizer and a learning rate of 0.002. The models converged within 10 epochs.
Results and Discussion
======================
Table \[results\] reports results for our main experiments. We find that *T-Attn* without any context beats the NarrativeQA baseline by a large margin, suggesting the need for a stronger baseline. Surprisingly, *T-Attn* on full summaries performs no better than the No Context setting, implying that the model is unable to extract relevant information from a long context.
Providing the model with a reduced context of the top TF-IDF scored chunk (*Top 1 chunk*) leads to a significant gain (by +7). Global normalization also helps; uniformly sampling 5 chunks with global normalization (GN) yields a modest improvement over no-context, and taking the top 5 TF-IDF chunks rather than randomly sampling further improves performance, though not up to the level of the reduced context.
Both global normalization and TF-IDF scoring appear to provide a useful signal on the relevance of chunks. We found that combining the two in the form of weighted global normalization (WGN-MLP) outperforms both the globally normalized (by +6) and reduced context (by +3) experiments. The global normalization helps the model better tolerate the inclusion of likely, but not certainly, irrelevant chunks, while the weighting allows it to retain the strong signal from the TF-IDF scores.
Table \[ablation\] provides insight into the effect of global normalization. Global normalization models perform similarly to vanilla Tri-Attention trained on the top chunk when evaluated on the top chunk at test time, but degrade less in performance when evaluated on more chunks. Weighted global normalization in particular suffers only a minor penalty to performance from being evaluated on entire summaries. This effect may be even more pronounced on datasets where the top TF-IDF chunk is less reliable.
Conclusion and Future Work
==========================
This work introduces a method for improving answer selection on long documents through weighted global normalization of predictions over chunks of the documents. We show that applying our method to a span prediction model adapted for answer selection aids performance on long summaries in NarrativeQA, and we strongly improve over the task baseline. In this work, we used a learned function of candidate and TF-IDF scores as the weights, but in principle the weighting function could take any form. For future work, we intend to explore the use of neural networks that take the context and query into account when learning the weights.
[^1]: All authors contributed equally
[^2]: after removing duplicate answers
|
---
abstract: |
This paper focuses on reducing memory usage in enumerative model checking, while maintaining the multi-core scalability obtained in earlier work. We present a multi-core tree-based compression method, which works by leveraging sharing among sub-vectors of state vectors.
An algorithmic analysis of both worst-case and optimal compression ratios shows the potential to compress even large states to a small constant on average (8 bytes). Our experiments demonstrate that this holds up in practice: the median compression ratio of 279 measured experiments is within 17% of the optimum for tree compression, and five times better than the median compression ratio of ’s compression.
Our algorithms are implemented in the LTSmin tool, and our experiments show that for model checking, multi-core tree compression pays its own way: it comes virtually without overhead compared to the fastest hash table-based methods.
author:
- 'Alfons Laarman, Jaco van de Pol, Michael Weber'
bibliography:
- 'main.bib'
title: Parallel Recursive State Compression for Free
---
Acknowledgements {#acknowledgements .unnumbered}
----------------
We thank Elwin Pater for taking the time to proofread this work and provide feedback. We thank Stefan Blom for the many useful ideas that he provided.
|
---
abstract: 'This paper is devoted to the multigrid convergence analysis for the linear systems arising from the conforming linear finite element discretization of the second order elliptic equations with anisotropic diffusion. The multigrid convergence behavior is known to strongly depend on whether the discretization grid is aligned or non-aligned with the anisotropic direction and analyses in the paper will be mainly focused on two-level algorithms. For an aligned grid case, a lower bound is given for point-wise smoother which shows deterioration of convergence rate. In both aligned and non-aligned cases we show that for a specially designed block smoother the convergence is uniform with respect to both anisotropy ratio and mesh size in the energy norm. The analysis is complemented with numerical experiments which confirm the theoretical results.'
address:
- 'School of Mathematics, Sichuan University, Chengdu, Sichuan, China, 610064'
- 'Department of Mathematics, Pennsylvania State University, University Park, State College PA 16802'
- 'Department of Mathematics, Pennsylvania State University, University Park, State College PA 16802'
author:
- Guozhu Yu
- Jinchao Xu
- 'Ludmil T. Zikatanov'
bibliography:
- 'GYu\_refs.bib'
title: 'Analysis of two-level method for anisotropic diffusion equations on aligned and non-aligned grids'
---
Introduction
============
In this paper we study multilevel methods for anisotropic partial differential equations (PDEs) discretized by finite element (FE) methods; in particular, we analyze the convergence behavior of such methods for anisotropic diffusion equations on grids that are either aligned or non-aligned with the anisotropy.
There are already many convergence results in the literature for multilevel methods on anisotropic problems when the underlying FE grid is aligned with the anisotropy direction. The case of constant anisotropy was considered by Stevenson [@Stevenson1993; @Stevenson1994], who established uniform convergence of the V-cycle multigrid methods. The main tools in his analysis are the classical *smoothing* and *approximation* properties (see Hackbusch [@Hackbusch1985]). The case of “mildly” varying anisotropy was analyzed in a work by Bramble and Zhang [@Bramble-Zhang2001]. Using a different theoretical framework, developed in Bramble, Pasciak, Wang and Xu [@Bramble-Pasciak-Wang-Xu1991] and Xu [@Xu1992], Neuss [@Neuss1998] also showed uniform convergence of the V-cycle algorithm for the anisotropic diffusion problem. More recently, Wu, Chen, Xie and Xu [@Wu-Chen-Xie-Xu] analyzed the V-cycle multigrid with line smoother and standard coarsening, as well as the V-cycle multigrid with point Gauss-Seidel smoother and semi-coarsening, and they were able to prove convergence under weaker assumptions on the regularity of the solution of the underlying PDE. Another technique, based on tensor product type subspace splittings and semi-coarsening, was proposed and analyzed by Griebel and Oswald [@Griebel-Oswald1995]. They have shown uniform and optimal condition number bounds for multilevel additive preconditioners.
The aforementioned theoretical convergence results on multigrid methods for anisotropic diffusion equations are, however, all carried out under one main assumption: that the anisotropy direction is aligned with the mesh. Such an assumption is not always satisfied in practice. In this paper, we make an attempt to develop a uniform convergence theory in certain cases when this aligned grid assumption is not satisfied. More specifically, we study the problem in the case that $\Omega$ is a square domain, triangulated by a uniform grid, that is rotated by an angle $\omega\in[0, \pi]$. The resulting grids are not aligned with the anisotropy except in the special cases $\omega=0$, $\omega=\frac{\pi}{2}$ and $\omega=\frac{3\pi}{4}$.
For this special class of domains and grids, we design a two-level method and prove its uniform convergence (with respect to both anisotropy and mesh size). We are not yet able to extend our theoretical analysis either to the multilevel (more than two levels) case, or to more general anisotropic problems. We hope, however, that the analysis presented here, even though in a special case, can be extended to handle more general anisotropic problems.
We would like to point out that our work was partially motivated by some recent theoretical results for nearly singular problems (see [@Lee-Wu-Xu-Zikatanov2007]). Indeed, anisotropic diffusion equation gets more nearly singular when anisotropy gets smaller. Techniques such as line smoother or semi-coarsening correspond in some way to space splittings of the so called near kernel components of the anisotropic diffusion problem. We refer to [@Lee-Wu-Xu-Zikatanov2007] for description of such splittings.
The rest of this paper is organized as follows. In Section \[section:Preliminaries\] we introduce the notation and preliminaries. In Section \[section:Theorem\] we state the main result. We then prove several stability and interpolation estimates for coarse grid interpolant in Section \[section:StabilityC\] and for the fine grid interpolant in Section \[section:StabilityF\]. In section \[section:Proof\] we prove the main theorem, already stated in section \[section:Theorem\]. Numerical experiments, which verify the theory are given in section \[section:Numerical\].
Preliminaries and notation\[section:Preliminaries\]
===================================================
Consider the anisotropic diffusion equation on a square domain $\Omega\subset \mathbb{R}^2$: $$\label{equation:model1}
\left\{
\begin{array}{rl}
-u_{xx}-\epsilon u_{yy}=f, &\mbox{in}\quad\Omega,\\
u=0, &\mbox{on}\quad \partial\Omega,\\
\end{array}
\right.$$ where $\epsilon>0$ is a constant. We are interested in the case when $\epsilon\rightarrow 0$. The weak formulation of (\[equation:model1\]) is: Find $u\in H_{0}^{1}(\Omega)$ such that $$\label{equation:model2}
a(u,v)=(f,v), \quad \forall v\in H_{0}^{1}(\Omega),$$ where $$a(u,v)=\int_{\Omega}(\partial_{x}u\partial_{x}v+\epsilon\partial_{y}u\partial_{y}v)dxdy,
\quad and\quad (f,v)=\int_{\Omega}(fv)dxdy.$$
We consider a family of computational domains $\Omega$ obtained by rotations of a fixed domain $\Omega_0=(-1,1)^2$ around the origin. The angles of rotation are denoted by $\omega$ and we consider $\omega\in[0,\pi]$, since this covers all the possible cases of alignment (or non-alignment) of the anisotropy and the FE grid.
We assume that we have an initial triangulation ${\mathcal{T}_{0}}$ of the domain $\Omega_{0}$, obtained by dividing $\Omega_{0}$ into $N\times N$ equal squares and then dividing every square into two triangles. We then rotate $\Omega_0$ around the origin to obtain the computational domain $\Omega$ and its triangulation ${\mathcal{T}_{h}}$. The finite element function space associated with $\Omega$ and ${\mathcal{T}_{h}}$ is the space of continuous piece-wise linear functions with respect to ${\mathcal{T}_{h}}$, which we denote by $V_h$. Three such domains are shown in Figure \[fig:three-domains\]. In this setting, one case of grid aligned anisotropy corresponds to $\Omega=\Omega_0$, or equivalently, $\omega=0$.
\[0.33\][![The domains from left to right are corresponding to $\omega=0$, $\omega=\frac{\pi}{6}$ and $\omega=\frac{\pi}{4}$ respectively. \[fig:three-domains\]](0 "fig:")]{} \[0.33\][![The domains from left to right are corresponding to $\omega=0$, $\omega=\frac{\pi}{6}$ and $\omega=\frac{\pi}{4}$ respectively. \[fig:three-domains\]](30 "fig:")]{} \[0.33\][![The domains from left to right are corresponding to $\omega=0$, $\omega=\frac{\pi}{6}$ and $\omega=\frac{\pi}{4}$ respectively. \[fig:three-domains\]](45 "fig:")]{}
Given a coarse mesh ${\mathcal{T}_{H}}$, assume the fine mesh ${\mathcal{T}_{h}}$ is obtained from ${\mathcal{T}_{H}}$ by splitting each of the triangles in the triangulation ${\mathcal{T}_{H}}$ into four congruent triangles. One clearly then has $h=H/2$. The spaces of continuous piece-wise linear functions corresponding to the partitions ${\mathcal{T}_{h}}$ and ${\mathcal{T}_{H}}$ are denoted by $V_{h}$ and $V_{H}$. As it is customary $I_{h}$, $I_{H}$ will denote the nodal interpolation operators mapping to $V_{h}$ and $V_{H}$ respectively.
For the analysis of the two-level method, we introduce partition of unity $\{\theta_{i}(y)\}_{i=1}^{L}$, where $L=\left[\frac{y_{max}-y_{min}}h\right]$, with $y_{max}=\max\limits_{(x,y)\in \Omega}\{y\}$, $y_{min}=\min\limits_{(x,y)\in \Omega}\{y\}$. We define $\theta_i$ as follows: $$\label{eq:PU}
\theta_{i}(y)=\left\{
\begin{array}{ll}
\frac{(y-y_{min})-(i-1)h}{h}, &(i-1)h\leq y-y_{min}\leq ih,\\
\frac{(i+1)h-(y-y_{min})}{h}, &ih\leq y-y_{min}\leq (i+1)h,\\
0, &\mbox{otherwise}.\\
\end{array}
\right.$$ Note that each $\theta_i$ is piece-wise linear in the $y$ variable and is constant in $x$. Moreover, each $\theta_i$ is supported in the $i$-th strip $(i-1)h\leq y-y_{min}\leq (i+1)h$.
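The hat functions $\theta_i$ defined above can be checked numerically: away from the two boundary strips, at any height exactly two hats overlap and their values sum to one. A small sketch, with an arbitrary illustrative mesh size $h$ (not a value from the paper):

```python
def theta(i, t, h):
    """Hat function theta_i evaluated at t = y - y_min; it is supported
    on [(i-1)h, (i+1)h], rises linearly on the first half of its support
    and falls linearly on the second half."""
    if (i - 1) * h <= t <= i * h:
        return (t - (i - 1) * h) / h
    if i * h <= t <= (i + 1) * h:
        return ((i + 1) * h - t) / h
    return 0.0
```

Summing `theta(i, t, h)` over `i = 1, ..., L` returns 1 for every `t` in `[h, L*h]`, confirming the partition-of-unity property on the interior strips.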
Denote the set of triangles in ${\mathcal{T}_{h}}$ containing nodes in the $i$-th strip by ${\mathcal{T}_{i}}$, and let $$\Omega_{i}=\bigcup_{\tau\in {\mathcal{T}_{i}}}\tau,$$ $$V_{i}=\{v\in V_h, supp\ v\subseteq \bar{\Omega}_{i}\};$$ then the two-level method with line smoother $V_{i} (1\leq i\leq L)$ and coarse grid $V_{H}$ can be written as $$V_{h}=\sum_{i=1}^{L}V_{i}+V_{H}.$$
Let $K$ be a triangle with vertices $\{(x_i,y_i)\}_{i=1}^3$, which we assume ordered counter-clockwise. For a given edge $E\in \partial K$, with $$E=((x_{i},y_{i}),(x_{j},y_{j})), \quad j=1+\operatorname{mod}(i,3),$$ we denote $$\label{eq:deltaE}
\delta^K_{E}y=\frac{1}{2|K|}(y_{j}-y_{i}),$$ we also denote by $(x_{E},y_{E})$ the coordinates of the vertex of $K$ which is opposite to $E$. In other words, if $E=((x_{i},y_{i}),(x_{j},y_{j})) $ then $(x_{E},y_{E})=(x_k,y_k)$, where $k\neq i$ and $k\neq j$. Let $v$ be a linear function on $K$. If we set $v_{E}^{K}=v(x_{E},y_{E})$, then it is easy to check that $$\label{eq:derivative identity}
\frac{\partial v}{\partial x}\bigg|_{K}=\sum_{E\in
\partial K}(\delta^K_{E}y)v_{E}^{K}.$$
Convergence of the two-level method\[section:Theorem\]
======================================================
We first prove that, even in the aligned case, the most common point-wise smoothers result in a two-level method whose convergence deteriorates when $\epsilon$ tends to zero in equation . The result is as follows.
\[lemma: counterexample\] In case of grid aligned anisotropy (i.e. $\omega=0,\frac{\pi}{2}, \frac{3\pi}{4}$) the energy norm of the error propagation operator corresponding to the two-level iteration with coarse space $V_H$ and point-wise Gauss-Seidel smoother can be bounded below as follows: $$\label{equation: lower bound on E}
\|E_{TL}\|_{a}^{2}\geq 1-C(\epsilon+h^2),$$ with constant $C$ independent of $\epsilon$ and $h$.
This result follows from the following two-level convergence identity (proof can be found in [@Zikatanov2008 Lemma 2.3]):
\[lemma: two-level by ludmil\] The following relation holds for the two-level error propagation operator $E_{TL}=(I-T)(I-P_{H})$: $$\|E_{TL}\|_{a}^{2}=1-\frac{1}{K}\quad \mbox{where}\quad K=\sup_{v\in V}\frac{\|(I-\Pi_{*})v\|_{*}^{2}}{\|v\|_{a}^{2}},$$ where $\|v\|_{*}^{2}=\inf\limits_{\sum_{i}v_{i}=v}\sum\limits_{k}\|v_{k}\|_{a}^{2}$ and $\Pi_{*}$ is an $(\cdot,\cdot)_{*}$-orthogonal projection on $V_{H}$.
From Lemma \[lemma: two-level by ludmil\] we can immediately see that to prove the estimate we need to show that $$K=\sup_{v\in V_h}\frac{\|(I-\Pi_{*})v\|_{*}^{2}}{\|v\|_{a}^{2}}\gtrsim \frac{1}{\epsilon+h^2},$$ where the quantity $K$ is the same as in Lemma \[lemma: two-level by ludmil\]. From the proof of [@Zikatanov2008 Theorem 4.5], we also know that $$K \gtrsim h^{-2} \sup_{v\in V_h}\frac{\|(I-Q_H)v\|^{2}}{\|v\|_{a}^{2}},$$ where $Q_H$ is the $(\cdot,\cdot)$ orthogonal projection on $V_H$. In the case of angle of rotation $\omega=0$, the computational domain is $\Omega=\Omega_0=(-1,1)^2$. We assume $h=1/n$ ($n$ is even) and consider the $2n\times 2n$ partition with vertices $(x_j,y_k)$, $x_j=jh$ and $y_k=kh$, $j=-n,\cdots,n, k=-n,\cdots,n$; the corresponding coarse grid then has vertices $(x_{2j},y_{2k})$, $j=-n/2,\cdots,n/2$, $k=-n/2,\cdots,n/2$.
For any given $v\in V_h$, since $$\|v\|^2\simeq h^2
\sum\limits_{j=-n}^n\sum\limits_{k=-n}^nv^2(x_j,y_k),$$ and $$\|I_Hv\|^2\simeq H^2
\sum\limits_{j=-n/2}^{n/2}\sum\limits_{k=-n/2}^{n/2}v^2(x_{2j},y_{2k}),$$ then the interpolation $I_H$ is stable in the $L_2$ norm, i.e. $\|I_Hv\|\lesssim\|v\|$, for all $v\in V_h$, provided that $\frac{H}{h}\lesssim
1$. Now consider a function $v_0\in V_h$, supported in the closure of $(-1,1)\times(0,2h)$ and defined as $$v_0(x_j,y_{1})=v_0(x_j,h)=1-|j|h, j=-n,\cdots,n,$$ and $v_0$ is 0 at any other vertex. Note that $$I_H v_0 = 0.$$
From the stability of $I_H$ in the $L_2$ norm, which we have just shown we get: $$\begin{aligned}
\|I_H(I-Q_H)v_0\|^2 & \lesssim & \|(I-Q_H)v_0\|^2,\\
\|(I-I_H)(I-Q_H)v_0\|^2 &\lesssim& \|(I-Q_H)v_0\|^2.\end{aligned}$$ Using these estimates and the fact that $I_HQ_H=Q_H$ then gives $$\begin{aligned}
\|(I-Q_H)v_0\|^{2} &\gtrsim &
\|I_H(I-Q_H)v_0\|^2+\|(I-I_H)(I-Q_H)v_0\|^2\\
&= &\|(I_H-Q_H)v_0\|^2+\|(I-I_H)v_0\|^2\\
&= &\|Q_Hv_0\|^2+\|v_0\|^2\\
&\geq &\|v_0\|^2.\end{aligned}$$ So $$K \gtrsim h^{-2} \sup_{v\in V_h}\frac{\|(I-Q_H)v\|^{2}}{\|v\|_{a}^{2}}\gtrsim h^{-2} \frac{\|(I-Q_H)v_0\|^{2}}{\|v_0\|_{a}^{2}}\gtrsim
h^{-2}\frac{\|v_0\|^{2}}{\|v_0\|_{a}^{2}}.$$ Since $$\|v_0\|^{2}\simeq h^2\sum_{j=-n}^n v_0^2(x_j,h)\simeq h^2\sum_{j=0}^n (1-jh)^2=h^4\sum_{j=0}^n j^2\simeq h^4n^3\simeq h,$$ and $$\|v_0\|_a^{2}=\|\partial_x v_0\|^2+\epsilon \|\partial_y v_0\|^2\simeq h+\epsilon/h,$$ then $$K \gtrsim h^{-2}\frac{\|v_0\|^{2}}{\|v_0\|_{a}^{2}} \gtrsim h^{-2}\cdot h\cdot \frac{h}{\epsilon+h^2}=\frac{1}{\epsilon+h^2}.$$
The above results show that in case of a grid that is *aligned* with the anisotropy direction the convergence of a standard two-level method (point-wise smoother and standard coarsening) will deteriorate. One easily sees that for $\epsilon \le h^2$ we get a poor convergence rate (no better than $1-\mathcal{O}(h^2)$).
However, the next result shows that when the grid is not aligned with the anisotropy direction (e.g., angle of rotation $\omega=\pi/4)$ the lower bound given in Lemma \[lemma: counterexample\] does not apply and the standard two-level method is uniformly convergent in this case.
Assume that $\Omega$ is obtained from $\Omega_0$ by a rotation with angle of rotation $\omega=\frac{\pi}{4}$. Then the error propagation operator corresponding to the two-level iteration with coarse space $V_H$ and point-wise Gauss-Seidel smoother is a uniform contraction in the energy norm. In fact, we have the estimate $$K=\sup_{v\in V_h}\frac{\|(I-\Pi_{*})v\|_{*}^{2}}{\|v\|_{a}^{2}}\leq C,$$ and so $$\|E_{TL}\|_{a}^{2}\leq 1-\frac{1}{C},$$ with constant $C$ independent of $\epsilon$ and $h$.
The proof follows the same lines as the proof of Theorem \[theorem: the main theorem\], which will be given in the Section \[section:Proof\].
Let us remark here that as the angle of rotation $\omega$ decreases from $\pi/4$ to $0$, the convergence rate $\|E_{TL}\|_a$ of a two-level method with point-wise smoother deteriorates.
From the above considerations, it is clear that in the case of aligned anisotropy one needs to use a special smoother or coarsening strategy in order to achieve uniform convergence. Our analysis shows that a line smoother, or more generally a block smoother with blocks consisting of the degrees of freedom along the anisotropy direction, results in a uniformly convergent method. The theorem below provides a uniform estimate on the convergence rate of the error propagation operator and is the main result of this paper.
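A single sweep of such a line smoother can be sketched as follows for the model operator $-\partial_{xx}-\epsilon\,\partial_{yy}$ discretized by the standard 5-point stencil on a uniform grid (a simplified finite-difference sketch, not the finite element discretization analyzed in this paper; the function name and setup are our own):

```python
import numpy as np

def line_gs_sweep(u, f, eps, h):
    """One x-line Gauss-Seidel sweep for -u_xx - eps*u_yy (5-point stencil).

    u, f : (ny, nx) arrays of interior values; zero Dirichlet boundary.
    The unknowns of each grid line in the strong (x) direction are
    relaxed simultaneously by solving one tridiagonal system per line,
    using the already-updated line below and the old line above.
    """
    ny, nx = u.shape
    # tridiagonal block coupling the unknowns within one x-line
    T = ((2.0 + 2.0 * eps) / h**2) * np.eye(nx) \
        - (1.0 / h**2) * (np.eye(nx, k=1) + np.eye(nx, k=-1))
    for j in range(ny):
        rhs = f[j].copy()
        if j > 0:
            rhs += (eps / h**2) * u[j - 1]   # already updated
        if j < ny - 1:
            rhs += (eps / h**2) * u[j + 1]   # still the old values
        u[j] = np.linalg.solve(T, rhs)
    return u
```

As $\epsilon\to 0$ the lines decouple and a single sweep becomes a nearly exact solver; this is precisely the mechanism behind the uniform convergence established below.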
\[theorem: the main theorem\] For any angle of rotation $\omega\in [0,\pi]$, the two-level iteration with coarse space $V_H$ and line (block) Gauss-Seidel smoother is a uniformly convergent method. In fact, we have $$\label{eq:convergence rate}
\|E_{TL}\|_{a}^{2}\leq 1-\frac{1}{C},$$ with constant $C$ independent of $\epsilon$ and $h$.
The proof of this theorem is postponed to Section \[section:Proof\]. The result follows from the stability and interpolation estimates given in Sections \[section:StabilityC\] and \[section:StabilityF\], and from Lemma \[lemma: two-level by ludmil\].
Stability of the coarse grid interpolant\[section:StabilityC\]
==============================================================
In this section we prove the stability of the coarse grid interpolant.
(5,5) (0,0)[(1,2)[2]{}]{} (0,0)[(2,1)[4]{}]{} (4,2)[(-1,1)[2]{}]{} (1,2)[(2,1)[2]{}]{} (1,2)[(1,-1)[1]{}]{} (2,1)[(1,2)[1]{}]{}
(0,0)(-0.5,0)[1]{} (4,2)(4.3,2)[2]{} (2,4)(2,4.3)[3]{} (3,3)(3.3,3.2)[4]{} (1,2)(0.7,2.1)[5]{} (2,1)(2,0.5)[6]{}
(0.8,1)[$K_{1}$]{} (2.9,2)[$K_{2}$]{} (1.9,3)[$K_{3}$]{} (1.8,1.8)[$K_{4}$]{}
In what follows, given a triangle $K\in {\mathcal{T}_{H}}$ ($K=\bigcup\limits_{l=1}^4K_l$ with $K_l\in{\mathcal{T}_{h}}$) as shown in Figure \[fig:refinement\], we shall frequently use the following equalities $$\label{eq:triangle equalities}
\begin{array}{l}
y_{2}-y_{6}=y_{6}-y_{1}=y_{4}-y_{5},\\
y_{3}-y_{4}=y_{4}-y_{2}=y_{5}-y_{6},\\
y_{1}-y_{5}=y_{5}-y_{3}=y_{6}-y_{4},\\
|K|=4|K_{l}|,\quad l=1\ldots 4.
\end{array}$$
\[proposition: relations\] For any $v\in V_h$, we have the following relation: $$\label{eq:identity}
\frac{\partial (I_{H}v)}{\partial x}\bigg|_{K}=\frac{1}{2}\left(\sum_{l=1}^{3}\frac{\partial v}{\partial x}\bigg|_{K_{l}}-\frac{\partial
v}{\partial x}\bigg|_{K_{4}}\right).$$
From (\[eq:derivative identity\]) with $I_{H}v$ instead of $v$, we have $$ \frac{\partial (I_{H}v)}{\partial x}\bigg|_{K}=\sum_{E\in \partial K} (\delta_E^K y)
v_{E}^{K},$$ and from (\[eq:derivative identity\]) with $K_l$ instead of $K$ for $l=1,\ldots,4$, we have $$ \frac{\partial v}{\partial x}\bigg|_{K_{l}}
=\sum_{E\in \partial K_l} (\delta_E^{K_l} y)v_{E}^{K_{l}}.$$ Combining the above two equations with (\[eq:triangle equalities\]), it is immediate to verify the result.
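The identity (\[eq:identity\]) can be verified numerically on the configuration of Figure \[fig:refinement\] (a small check with our own helper names; the integer coordinates are those of the figure, and the identity is scale-invariant):

```python
import numpy as np

def grad_x(tri, vals):
    """x-slope of the plane through the points (x_i, y_i, v_i), i = 1, 2, 3."""
    A = np.column_stack([np.array(tri, float), np.ones(3)])  # rows (x_i, y_i, 1)
    a, b, c = np.linalg.solve(A, np.array(vals, float))
    return a

# vertices 1, 2, 3 and edge midpoints 4, 5, 6 of K, numbered as in the figure
p = {1: (0, 0), 2: (4, 2), 3: (2, 4),
     4: (3, 3), 5: (1, 2), 6: (2, 1)}
rng = np.random.default_rng(0)
v = {j: rng.standard_normal() for j in p}            # arbitrary nodal values

fine = [(1, 6, 5), (2, 4, 6), (3, 5, 4), (4, 5, 6)]  # K_1, K_2, K_3, K_4
s = [grad_x([p[a] for a in t], [v[a] for a in t]) for t in fine]

# I_H v interpolates v at the coarse vertices 1, 2, 3 only
lhs = grad_x([p[j] for j in (1, 2, 3)], [v[j] for j in (1, 2, 3)])
rhs = 0.5 * (s[0] + s[1] + s[2] - s[3])
assert abs(lhs - rhs) < 1e-12
```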
We are now ready to prove our first stability estimate. Since we have an anisotropic diffusion problem at hand, we need to estimate $\left\|\frac{\partial (I_{H}v)}{\partial x}\right\|_{0}$ and $\left\|\frac{\partial (I_{H}v)}{\partial y}\right\|_{0}$ separately, which is done in the next lemma.
\[lemma: two-level\] For any $v\in V_{h}$, we have $\left\|\frac{\partial (I_{H}v)}{\partial x}\right\|_{0}^{2}\leq 4 \|\frac{\partial v}{\partial
x}\|_{0}^{2}$, and $\left \|\frac{\partial (I_{H}v)}{\partial y}\right\|_{0}^{2}\leq 4 \|\frac{\partial v}{\partial
y}\|_{0}^{2}.$
We only need to prove this estimate locally for any $K\in {\mathcal{T}_{H}}$. So we fix $K\in {\mathcal{T}_{H}}$ and we would like to show that $\|\frac{\partial (I_{H}v)}{\partial x}\|_{0,K}^{2}\leq 4 \|\frac{\partial v}{\partial
x}\|_{0,K}^{2}$.
From (\[eq:identity\]), for the $L^{2}$ norm $ \|\frac{\partial
(I_{H}v)}{\partial x}\|_{0,K}^{2} $ we have
$$\begin{aligned}
\left\|\frac{\partial (I_{H}v)}{\partial x}\right\|_{0,K}^{2}
&=&\frac{|K|}{4}\left(\sum\limits_{l=1}^{3}\frac{\partial v}{\partial x}\bigg|_{K_{l}}-\frac{\partial v}{\partial x}\bigg|_{K_{4}}\right)^{2}\\
&\leq &|K|\sum\limits_{l=1}^{4}\left(\frac{\partial v}{\partial x}\bigg|_{K_{l}}\right)^{2}
=4\sum\limits_{l=1}^{4}|K_{l}|\left(\frac{\partial v}{\partial x}\bigg|_{K_{l}}\right)^{2}\\
&=&4\sum\limits_{l=1}^{4}\left\|\frac{\partial v}{\partial x}\right\|_{0,K_{l}}^{2}=4\left\|\frac{\partial v}{\partial x}\right\|_{0,K}^{2}.
\end{aligned}$$
Summing over all the elements then gives $\|\frac{\partial (I_{H}v)}{\partial x}\|_{0}^{2}\leq 4\|\frac{\partial v}{\partial
x}\|_{0}^{2}.$ In a similar fashion we can prove that $\|\frac{\partial (I_{H}v)}{\partial y}\|_{0}^{2}\leq 4\|\frac{\partial v}{\partial
y}\|_{0}^{2},$ and the proof of the lemma is complete.
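The constant $4$ in Lemma \[lemma: two-level\] can be checked numerically on the configuration of Figure \[fig:refinement\] (a randomized sanity check; helper names are our own):

```python
import numpy as np

def grad_x(tri, vals):
    """x-slope of the plane through the points (x_i, y_i, v_i)."""
    A = np.column_stack([np.array(tri, float), np.ones(3)])
    return np.linalg.solve(A, np.array(vals, float))[0]

def area(tri):
    (x1, y1), (x2, y2), (x3, y3) = tri
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

p = {1: (0, 0), 2: (4, 2), 3: (2, 4), 4: (3, 3), 5: (1, 2), 6: (2, 1)}
fine = [(1, 6, 5), (2, 4, 6), (3, 5, 4), (4, 5, 6)]   # K_1, ..., K_4
rng = np.random.default_rng(1)
for _ in range(1000):
    v = {j: rng.standard_normal() for j in p}
    # |K| * (d_x I_H v)^2  vs  4 * sum_l |K_l| * (d_x v|_{K_l})^2
    lhs = area([p[j] for j in (1, 2, 3)]) * \
        grad_x([p[j] for j in (1, 2, 3)], [v[j] for j in (1, 2, 3)]) ** 2
    rhs = sum(area([p[a] for a in t]) *
              grad_x([p[a] for a in t], [v[a] for a in t]) ** 2 for t in fine)
    assert lhs <= 4 * rhs + 1e-12
```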
As a consequence, we have the following approximation result for the coarse grid interpolant.
\[lemma: two level convergence\] For any $v\in V_{h}$, we have $$\left\|\frac{\partial (v-I_{H}v)}{\partial x}\right\|_{0}^{2}\lesssim \left\|\frac{\partial v}{\partial
x}\right\|_{0}^{2},
\quad
\left\|\frac{\partial (v-I_{H}v)}{\partial y}\right\|_{0}^{2}\lesssim \left\|\frac{\partial v}{\partial
y}\right\|_{0}^{2},$$ and $$\|v-I_{H}v\|_{0}^{2} \lesssim h^{2} |v|_{1}^{2}.$$
The first two estimates follow from the inequalities given in Lemma \[lemma: two-level\]. The third estimate can be found in [@Bramble-Xu1991 Lemma 4.4].
In fact, the proof of Lemma \[lemma: two-level\] places no requirement on the partition: the result holds for any partition ${\mathcal{T}_{h}}$ obtained by regular refinement of a given partition ${\mathcal{T}_{H}}$. The same applies to Lemma \[lemma: two level convergence\].
Stability estimates on the fine grid\[section:StabilityF\]
==========================================================
In this section we give estimates on the stability of the partition of unity introduced in Section \[section:Preliminaries\]. In what follows, to avoid a proliferation of indices, we omit the subscript $i$ and write $\theta$ instead of $\theta_i$.
For any given $K\in{\mathcal{T}_{H}}$ ($K=\bigcup\limits_{l=1}^4K_l$ with $K_l\in{\mathcal{T}_{h}}$) shown in Figure \[fig:refinement\], we label with 1, 2, and 3 the vertices of $K$, and 4, 5, and 6 the midpoints of $K$ (these are also vertices of $K_4$). The corresponding coordinates are denoted by $(x_{j},y_{j}), 1\leq j\leq 6$. Further, for a continuous function $v$, when there is no confusion, we write $v_{j}:=v(x_{j},y_{j})$.
In what follows we also denote $$\label{eq: minmum of edge}
E_{min}=arg\min\limits_{E\in\partial
K_{4}}\{|\delta_{E}^{K_{4}}y|\}, \quad
\mbox{where $\delta_{E}^{K_4}y$ is defined in~\eqref{eq:deltaE}}.$$ Then $E_{min}$ is related to the anisotropic direction. In fact, to indicate the dependence on the particular element, one may write $E_{min}^{K_4}$ instead of $E_{min}$, but for simplicity we have chosen to omit the superscript $K_4$. Furthermore, we may denote $E_{min}'=arg\min\limits_{E'\in\partial
K'_{4}}\{|\delta_{E'}^{K'_{4}}y|\}$.
Let us now consider a function $w \in V_{h}$ vanishing at the coarse grid vertices, that is, $w$ satisfies $I_H w=0$. On a fixed $K\in{\mathcal{T}_{H}}$ we distinguish the following four cases:

- Case 0: $\theta$ is zero in $K_4$;

- Case 1: $\theta$ is nonzero and convex in $K_4$ (i.e. $\theta\neq 0$ a.e. in $K_4$);

- Case 2: $\theta$ is nonzero at only one of the vertices of $K_4$ and concave in $K_4$;

- Case 3: $\theta$ is nonzero at exactly two of the vertices of $K_4$ and concave in $K_4$.
The rest of this section contains technical results and their proofs, which can be classified according to the cases above. To prove the stability estimates on the fine grid, we need to bound $\|\frac{\partial (I_{h}(\theta w))}{\partial x}\|$.
- For Case 0 there is nothing to prove, since in this case $I_{h}(\theta w)=0$.

- For Case 1 the corresponding estimate is given in Lemma \[lemma: 3 point stability\]; here we also need to assume quasi-uniformity of the mesh. We also note that Proposition \[proposition: 3 point property\] (also for Case 1) contains an estimate which is later used in Case 2 and Case 3.

- The stability estimates in Case 2 and Case 3 are given in Lemmas \[lemma: 1 point stability\] and \[lemma: 2 point stability\], respectively, under the assumption that ${\mathcal{T}_{h}}$ is a uniform partition.

In summary, for a uniform mesh we have proved the stability estimate in all cases. In addition, some of the results (those for Case 1) are proved in the more general setting of an unstructured, but quasi-uniform, mesh.
\[lemma: 3 point stability\] Assume that $\theta\neq 0$ in $K_4$ and convex in $K_4$ (Case 1). Then $$\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}\lesssim \left\|\frac{\partial (\theta w)}{\partial
x}\right\|_{0,K}^{2}.$$
Since $I_Hw=0$ we obviously have that $I_H(\theta w)=0$ as well. From (\[eq:identity\]) in Proposition \[proposition: relations\] with $v=I_h(\theta w)$ we obtain that $$\label{eq:51}
\sum_{l=1}^{3}\frac{\partial (I_h(\theta w))}{\partial x}\bigg|_{K_{l}}-\frac{\partial (I_h(\theta w))}{\partial x}\bigg|_{K_{4}}
=2\frac{\partial(I_H (I_h(\theta w)))}{\partial x}\bigg|_{K}=2\frac{\partial(I_H(\theta w))}{\partial x}\bigg|_{K}=0.$$ In addition, from (\[eq:identity\]) with $v=w$ we have $$\label{eq:52}
\sum_{l=1}^{3}\frac{\partial w}{\partial x}\bigg|_{K_{l}}-\frac{\partial w}{\partial x}\bigg|_{K_{4}}
=2\frac{\partial(I_H w)}{\partial x}\bigg|_{K}=0.$$
Therefore, from (\[eq:51\]), for the $L^{2}$ norm $\|\frac{\partial (I_{h}(\theta w))}{\partial x}\|_{0,K}$ we have: $$\begin{aligned}
\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}
&=&\sum\limits_{l=1}^{4}|K_{l}|\left(\frac{\partial(I_h (\theta w))}{\partial x}\bigg|_{K_l}\right)^{2}
\thickapprox\sum\limits_{l=1}^{3}|K_{l}|\left(\frac{\partial(I_h (\theta w))}{\partial x}\bigg|_{K_l}\right)^{2}\nonumber\\
&\thickapprox&\frac{1}{|K|}\{[(y_{4}-y_{6})\theta_{6}w_{6}+(y_{5}-y_{4})\theta_{5}w_{5}]^{2}\label{norm1}\\
&&\,\,\quad+[(y_{5}-y_{4})\theta_{4}w_{4}+(y_{6}-y_{5})\theta_{6}w_{6}]^{2}\nonumber\\
&&\,\,\quad+[(y_{6}-y_{5})\theta_{5}w_{5}+(y_{4}-y_{6})\theta_{4}w_{4}]^{2}\}.\nonumber
\end{aligned}$$
On the other hand, since $\theta$ is a convex function in $K_4$ and is supported in a strip of width $2h$, $\theta$ must be convex in at least three of the $K_l(l=1:4)$.
For any $K_l$ in which $\theta$ is convex, we have $$\|\theta\|_{0,K_{l}}^{2}\gtrsim
|K_l|\sum_{j=1}^{3}\theta^{2}(x_{j}^{K_{l}},y_{j}^{K_{l}}),$$ where $(x_{j}^{K_{l}},y_{j}^{K_{l}})$ denote the coordinates of the $j$-th vertex of element $K_{l}$ for $j=1:3$. Since the mesh is quasi-uniform, we have that $\max\limits_{j=1:3}\{y_j^{K_{l}}\}-\min\limits_{j=1:3}\{y_j^{K_{l}}\}\gtrsim
h$, and there exists at least one vertex $j_0$ such that $\theta(x_{j_{0}},y_{j_0})=\theta(y_{j_0})\gtrsim 1$. Hence, if $\theta$ is a convex function in $K_{l}$, then $\|\theta\|_{0,K_{l}}^{2}\gtrsim|K_l|$, and we conclude that there are at least three elements $K_{l}$, where $\|\theta\|_{0,K_{l}}^{2}\gtrsim|K_l|$ holds.
From this argument and (\[eq:52\]) we get $$\begin{aligned}
\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}
&=&\left\|\theta\frac{\partial w}{\partial x}\right\|_{0,K}^{2}
= \sum\limits_{l=1}^{4}\|\theta\|_{0,K_{l}}^{2} \left(\frac{\partial w}{\partial x}\bigg|_{K_l}\right)^{2}
\thickapprox\sum\limits_{l=1}^{3}|K_l|\left(\frac{\partial w}{\partial
x}\bigg|_{K_l}\right)^{2}\nonumber\\
&\thickapprox&\frac{1}{|K|}\{[(y_{4}-y_{6})w_{6}+(y_{5}-y_{4})w_{5}]^{2}\label{norm2}\\
&&\,\,\quad
+[(y_{5}-y_{4})w_{4}+(y_{6}-y_{5})w_{6}]^{2}\nonumber\\
&&\,\,\quad
+[(y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4}]^{2}\}.\nonumber
\end{aligned}$$
Introducing now $$M=\left(
\begin{matrix}
0 &y_{5}-y_{4} &y_{4}-y_{6}\\
y_{5}-y_{4} &0 &y_{6}-y_{5}\\
y_{4}-y_{6} &y_{6}-y_{5} &0\\
\end{matrix}
\right), \quad
\Theta=\left(
\begin{matrix}
\theta_{4} &&\\
&\theta_{5} &\\
&&\theta_{6}\\
\end{matrix}
\right),$$ we rewrite (\[norm1\]) as $$\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}\approx
\frac{1}{|K|}\|M\Theta\bm{z}\|^2_{\ell_2}, \quad
\bm{z}= (w_{4}, w_{5}, w_{6})^t,$$ while (\[norm2\]) can be rewritten as $$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}\approx
\frac{1}{|K|}\|M\bm{z}\|^2_{\ell_2}, \quad
\bm{z}= (w_{4}, w_{5}, w_{6})^t.$$ Here, $\|\cdot\|_{\ell_2}$ is the usual Euclidean norm on $\mathbb{R}^3$.
To prove the estimate $\|\frac{\partial (I_{h}(\theta w))}{\partial x}\|_{0,K}^{2}
\lesssim\|\frac{\partial (\theta w)}{\partial x}\|_{0,K}^{2}$, we only need to show that $$\label{eq:haha}
\frac{1}{|K|}\|M\Theta\bm{z}\|^2_{\ell_2}\lesssim
\frac{1}{|K|}\|M\bm{z}\|^2_{\ell_2}\quad\mbox{for all}\quad \bm{z}\in
\mathbb{R}^3.$$
Such an inequality is easy to get in the case of $\det(M)=0$, so we may assume $M$ is invertible (i.e. $\det(M)=2(y_{6}-y_{5})(y_{4}-y_{6})(y_{5}-y_{4})\neq 0$). We then need a bound on the eigenvalues of $M^{-1}\Theta M^{2}\Theta
M^{-1}=(M^{-1}\Theta M) (M^{-1}\Theta M)^T$. In fact, we only need to bound the entries of $M^{-1}\Theta M$ because all the norms of this $3\times 3$ matrix are equivalent. Thus, if the entries of $M^{-1}\Theta M$ are bounded in absolute value, then the eigenvalues of $(M^{-1}\Theta M) (M^{-1}\Theta M)^T$ are bounded and consequently (\[eq:haha\]) holds.
Directly computing the inverse of $M$ gives $$M^{-1}=
\frac{1}{\det(M)}
\left(
\begin{array}{ccc}
-(y_{6}-y_{5})^{2} &(y_{6}-y_{5})(y_{4}-y_{6}) &(y_{6}-y_{5})(y_{5}-y_{4})\\
(y_{4}-y_{6}) (y_{6}-y_{5}) &-(y_{4}-y_{6})^{2} &(y_{4}-y_{6})(y_{5}-y_{4})\\
(y_{5}-y_{4})(y_{6}-y_{5}) &(y_{5}-y_{4})(y_{4}-y_{6}) &-(y_{5}-y_{4})^{2}\\
\end{array}
\right).$$ We then calculate $M^{-1}\Theta M$ to obtain that $$M^{-1}\Theta M=\frac{1}{2}
\left(
\begin{array}{ccc}
\theta_{5}+\theta_{6}
&\frac{y_{6}-y_{5}}{y_{4}-y_{6}}(-\theta_{4}+\theta_{6})
&\frac{y_{6}-y_{5}}{y_{5}-y_{4}}(-\theta_{4}+\theta_{5})\\
\frac{y_{4}-y_{6}}{y_{6}-y_{5}}(-\theta_{5}+\theta_{6})
&\theta_{4}+\theta_{6}
&\frac{y_{4}-y_{6}}{y_{5}-y_{4}}(\theta_{4}-\theta_{5})\\
\frac{y_{5}-y_{4}}{y_{6}-y_{5}}(\theta_{5}-\theta_{6})
&\frac{y_{5}-y_{4}}{y_{4}-y_{6}}(\theta_{4}-\theta_{6})
&\theta_{4}+\theta_{5}\\
\end{array}
\right).$$ Since $\theta$ is convex in $K_4$, by the definition of $\theta$, it is easy to see that $$|\theta_{6}-\theta_{5}|\lesssim h^{-1} |y_{6}-y_{5}|,$$ $$|\theta_{5}-\theta_{4}|\lesssim h^{-1} |y_{5}-y_{4}|,$$ $$|\theta_{4}-\theta_{6}|\lesssim h^{-1} |y_{4}-y_{6}|.$$ Since $|y_{i}-y_{j}|\lesssim h$, we have $|(M^{-1}\Theta
M)_{ij}|\lesssim 1$ and the proof of the Lemma is complete.
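The boundedness of the entries of $M^{-1}\Theta M$ can be illustrated numerically for a concrete admissible configuration (our own choice of $y$-values and of a $\theta$ with Lipschitz constant of order $1/h$; note that $h$ cancels, so the bounds are independent of $h$):

```python
import numpy as np

h = 0.1
# y-coordinates of the midpoints 4, 5, 6 (differences of size O(h))
y4, y5, y6 = 3 * h, 2 * h, 1 * h
# theta with Lipschitz constant 1/(4h), values in [0, 1]
theta = lambda y: y / (4 * h)

# the matrix M and the diagonal matrix Theta from the proof
M = np.array([[0.0,     y5 - y4, y4 - y6],
              [y5 - y4, 0.0,     y6 - y5],
              [y4 - y6, y6 - y5, 0.0]])
Theta = np.diag([theta(y4), theta(y5), theta(y6)])

B = np.linalg.solve(M, Theta @ M)     # B = M^{-1} Theta M
assert np.abs(B).max() <= 1.5         # entries bounded independently of h
# hence ||M Theta z|| <= C ||M z|| for all z:
assert np.linalg.norm(M @ Theta @ np.linalg.inv(M), 2) <= 2.0
```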
The next result is an auxiliary estimate used later in the proofs of Lemmas \[lemma: 1 point stability\] and \[lemma: 2 point stability\]. In its statement we use the notation given at the end of Section \[section:Preliminaries\].
\[proposition: 3 point property\] Assume that $\theta\neq 0$ and convex in $K_4$. Then the following inequality holds $$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}\gtrsim |K| (\max\limits_{E\in\partial
K_{4}}\{|\delta_{E}^{K_{4}}y|\}^{2}\cdot(w_{E_{min}}^{K_{4}})^{2}
+(\delta_{E_{min}}^{K_{4}}y)^{2}\cdot
\max\limits_{E\in\partial K_{4}}\{w_{E}^{K_{4}}\}^{2}).$$
Let $E_{min}$ be defined as . Without loss of generality, assume $E_{min}=\{(x_4,y_4),(x_5,y_5)\}$, and then $w_6=w_{E_{min}}^{K_{4}}$. This means $$|y_{5}-y_{4}|=\min\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}.$$ Hence $$\left|\frac{y_{5}-y_{4}}{y_{6}-y_{5}}\right|\leq 1,\quad
\left|\frac{y_{5}-y_{4}}{y_{4}-y_{6}}\right|\leq 1,$$ and by the triangle inequality we have $$\left|\frac{y_{4}-y_{6}}{y_{6}-y_{5}}\right|\leq 2,\quad
\left|\frac{y_{6}-y_{5}}{y_{4}-y_{6}}\right|\leq 2.$$
According to the expression (\[norm2\]) and the above inequalities, we have $$\begin{aligned}
\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}
&\thickapprox&\frac{1}{|K|}\{[(y_{4}-y_{6})w_{6}+(y_{5}-y_{4})w_{5}]^{2}
+[(y_{5}-y_{4})w_{4}+(y_{6}-y_{5})w_{6}]^{2}\\
&&\,\,\quad
+[(y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4}]^{2}\}\\
&\gtrsim
&\frac{1}{|K|}
\{[(y_{4}-y_{6})w_{6}+(y_{5}-y_{4})w_{5}]
+[(y_{5}-y_{4})w_{4}+(y_{6}-y_{5})w_{6}]\frac{y_{4}-y_{6}}{y_{6}-y_{5}}\\
&&\,\,\quad-[(y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4}]\frac{y_{5}-y_{4}}{y_{6}-y_{5}}\}^{2}\\
&=
&\frac{4}{|K|}
[(y_{4}-y_{6})w_{6}]^{2}\\
&\gtrsim
&\frac{1}{|K|}
\max\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}^{2}w_{6}^{2}.\\
\end{aligned}$$
Combining this with (\[norm2\]), we have
$$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}\gtrsim \frac{1}{|K|}
\{[(y_{5}-y_{4})w_{4}]^{2}+[(y_{5}-y_{4})w_{5}]^{2}\}.$$
So $$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}\gtrsim \frac{1}{|K|}
\{\max\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}^{2}w_{6}^{2}+(y_{5}-y_{4})^{2}w_{4}^{2}+(y_{5}-y_{4})^{2}w_{5}^{2}\}.$$ Recalling that here $E_{min}=\{(x_4,y_4),(x_5,y_5)\}$ and $w_6=w_{E_{min}}^{K_{4}}$, we obtain the result.
Note that up to this point we have only required the mesh to be quasi-uniform, since when $\theta$ is convex in the element $K_4$, the semi-norm of the interpolant, $\|\frac{\partial(I_h(\theta w))}{\partial x}\|_{0,K}$, can be bounded by $\|\frac{\partial (\theta w)}{\partial x}\|_{0,K}$. This is no longer true when $\theta(y)$ is concave; in that case $\|\frac{\partial(I_h(\theta w))}{\partial x}\|_{0,K}$ may also depend on a neighboring element. To access the information from the neighboring element, we assume in the following that the partition ${\mathcal{T}_{h}}$ is uniform.
\[lemma: 1 point stability\] Assume that $\theta$ is nonzero at only one vertex of $K_4$ and that $K^\prime$ is the unique element from ${\mathcal{T}_{H}}$ which has this vertex on one of its edges. Assume also that $\theta$ is concave in $K_4$ (Case 2). Then the following inequality holds $$\left\|\frac{\partial (I_{h}(\theta w))}{\partial
x}\right\|_{0,K}^{2}\lesssim
\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K'}^{2}.$$
(5,5) (0,0)[(1,2)[2]{}]{} (0,0)[(2,1)[4]{}]{} (4,2)[(-1,1)[2]{}]{} (1,2)[(2,1)[2]{}]{} (1,2)[(1,-1)[1]{}]{} (2,1)[(1,2)[1]{}]{}
(0,0)(-0.5,0)[1]{} (4,2)(4.3,2)[2]{} (2,4)(2,4.3)[3]{} (3,3)(3.3,3.2)[4]{} (1,2)(0.7,2.1)[5]{} (2,1)(2,0.5)[6$(x_{E_{0}}^{K_4},y_{E_{0}}^{K_4})$]{} (0.8,1)[$K_{1}$]{} (2.9,2)[$K_{2}$]{} (1.9,3)[$K_{3}$]{} (1.7,1.5)[$K_{4}$]{}
(2,-2)[(-1,1)[2]{}]{} (2,-2)[(1,2)[2]{}]{} (1,0)[$K'$]{}
(1.9,2.2)[$E_0$]{} (1,-1)[(2,1)[2]{}]{} (1.8,-0.3)[$E_0'$]{} (1,-1) (3,0)
Without loss of generality, assume $\theta_{E_{0}}^{K_{4}}$ is the only nonzero value. There are two possibilities: (a) $E_{0}=E_{min}$; and (b) $E_{0}\neq E_{min}$.
**Proof in case (a).** Since $E_{0}=E_{min}$, we conclude that $|\delta_{E_{0}}^{K_{4}} y| =\min\limits_{E\in\partial
K_{4}}\{|\delta_{E}^{K_{4}} y|\}$.
We then have $$\begin{aligned}
\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}
&= &\frac{|K|}{4}\sum\limits_{E\in\partial K_{4}} |\delta_{E}^{K_{4}} y|^{2}(\theta_{E_{0}}^{K_{4}}w_{E_{0}}^{K_{4}})^{2}
\quad (\mbox{from~} \eqref{eq:derivative identity})\\
&\lesssim &|K|\max\limits_{E\in\partial
K_{4}}\{|\delta_{E}^{K_{4}}y|\}^{2}(w_{E_{0}}^{K_{4}})^{2}.
\end{aligned}$$
Since the partition ${\mathcal{T}_{H}}$ is uniform (see Figure \[fig:1 point fig\]), and $K'$ is the element sharing the same point $(x_{E_{0}}^{K_4},y_{E_{0}}^{K_4})$ with $K$, we know that the values of $\theta$ at the midpoints of $K'$ are all nonzero. This is so because the support of $\theta$, whose width is $2h$, must include $K'$ in its interior. Assume now that $E'_{0}$ is the edge opposite to point $(x_{E_{0}}^{K_4},y_{E_{0}}^{K_4})$ in $K'_{4}$ (i.e. $(x_{E_{0}}^{K_4},y_{E_{0}}^{K_4})=
(x_{E'_{0}}^{K'_{4}},y_{E'_{0}}^{K'_{4}})$, see Figure \[fig:1 point fig\]). Observe that $E'_{0} =E'_{min}$ or $|\delta_{E'_{0}}^{K'_{4}} y| =\min\limits_{E'\in\partial K'_{4}}\{|\delta_{E'}^{K'_{4}} y|\}$, because $E_0'$ is a parallel translation of $E_0$. By Proposition \[proposition: 3 point property\], we now have $$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K'}^{2}
\gtrsim
|K'|\max\limits_{E'\in\partial
K'_{4}}\{|\delta_{E'}^{K'_{4}}y|\}^{2}(w_{E'_{0}}^{K'_{4}})^{2}
= |K|\max\limits_{E\in\partial
K_{4}}\{|\delta_{E}^{K_{4}}y|\}^{2}(w_{E_{0}}^{K_{4}})^{2}.\\$$
So $\|\frac{\partial (I_{h}(\theta w))}{\partial x}\|_{0,K}^{2}
\lesssim \|\frac{\partial (\theta w)}{\partial x}\|_{0,K'}^{2}$, and this completes the proof in case (a).
**Proof in case (b).** In case (b) we have $E_{0}\neq E_{min}$ and hence $|\delta_{E_{0}}^{K_{4}} y|
\neq\min\limits_{E\in\partial K_{4}}\{|\delta_{E}^{K_{4}} y|\}$. Since $\theta_{E_{0}}^{K_{4}}$ is the only nonzero value among the values of $\theta$ at the vertices of $K_4$, we easily get $$\theta_{E_{0}}^{K_{4}}\lesssim h^{-1}\min\limits_{E\in\partial K_{4}}\{2|K_4|\,|\delta_{E}^{K_{4}} y|\}.$$
Then $$\begin{aligned}
\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}
&=&\frac{|K|}{4}\sum\limits_{E\in\partial K_{4}} |\delta_{E}^{K_{4}} y|^{2}(\theta_{E_{0}}^{K_{4}}w_{E_{0}}^{K_{4}})^{2}
\quad (\mbox{from~} \eqref{eq:derivative identity})\\
&\lesssim& |K|\max\limits_{E\in\partial K_{4}} \{|\delta_{E}^{K_{4}} y|\}^{2}(\theta_{E_{0}}^{K_{4}})^{2}(w_{E_{0}}^{K_{4}})^{2}\\
&\lesssim&|K|\max\limits_{E\in\partial K_{4}} \{2|K_4|\,|\delta_{E}^{K_{4}} y|\}^{2}
h^{-2}\min\limits_{E\in\partial K_{4}}\{|\delta_{E}^{K_{4}}
y|\}^{2}(w_{E_{0}}^{K_{4}})^{2}\\
&\lesssim& |K|\min\limits_{E\in\partial K_{4}}\{|\delta_{E}^{K_{4}}
y|\}^{2}(w_{E_{0}}^{K_{4}})^{2}.
\end{aligned}$$
Let $K'$ be as before; then by Proposition \[proposition: 3 point property\], we have $$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K'}^{2}
\gtrsim |K'|\min\limits_{E'\in\partial
K'_{4}}\{|\delta_{E'}^{K'_{4}}y|\}^{2}(w_{E'_{0}}^{K'_{4}})^{2}
= |K|\min\limits_{E\in\partial
K_{4}}\{|\delta_{E}^{K_{4}}y|\}^{2}(w_{E_{0}}^{K_{4}})^{2}.$$
Combining the last two inequalities then gives $\|\frac{\partial
(I_{h}(\theta w))}{\partial x}\|_{0,K}^{2} \lesssim
\|\frac{\partial (\theta w)}{\partial x}\|_{0,K'}^{2}$. This completes the proof in case (b), and also the proof of the Lemma.
The next lemma gives the stability estimate in the last case (Case 3); we refer to Figure \[fig: 2 point fig\] for the notation.
\[lemma: 2 point stability\] Assume that $\theta$ is nonzero at exactly two vertices of $K_4$ and concave in $K_4$ (Case 3). Let $K^\prime$ be an element from ${\mathcal{T}_{H}}$ which has on one of its edges the vertex of $K_4$ at which $\theta$ takes the larger value. Then the following inequality holds $$\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}\lesssim
\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K\bigcup K'}^{2}.$$
(5,5)
(0,0)[(1,2)[2]{}]{} (0,0)[(2,1)[4]{}]{} (4,2)[(-1,1)[2]{}]{} (1,2)[(2,1)[2]{}]{} (1,2)[(1,-1)[1]{}]{} (2,1)[(1,2)[1]{}]{}
(0,0)(-0.5,0)[1]{} (4,2)(4.3,2)[2]{} (2,4)(2,4.3)[3]{} (3,3)(3.3,3.2)[4]{} (1,2)(0.7,2.1)[5]{} (2,1)(2,0.5)[6]{}
(0.8,1)[$K_{1}$]{} (2.9,2)[$K_{2}$]{} (1.9,3)[$K_{3}$]{} (1.8,1.8)[$K_{4}$]{}
(4,2)[(1,2)[2]{}]{} (2,4)[(2,1)[4]{}]{} (3.8,3.5)[$K'$]{}
Without loss of generality we may assume that $\theta_4$ and $\theta_5$ are the only nonzero values of $\theta$; we may also assume that $\theta_{4}\geq\theta_{5}$. As a consequence, $K'$ shares $(x_4,y_4)$ with $K$.
We consider two possibilities: (a) $E_{min}=\{(x_4,y_4),(x_5,y_5)\}$; (b) $E_{min}\neq\{(x_4,y_4),(x_5,y_5)\}$.
**Proof of (a).** Since $E_{min}=\{(x_4,y_4),(x_5,y_5)\}$, we have that $|y_{5}-y_{4}|=\min\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}$. Hence, from we obtain $$\begin{aligned}
\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}
& \thickapprox &
\frac{1}{|K|}\{
((y_{5}-y_{4})\theta_{4}w_{4})^{2}
+((y_{5}-y_{4})\theta_{5}w_{5})^{2}\\
&&\,\,\quad +((y_{6}-y_{5})\theta_{5}w_{5}+(y_{4}-y_{6})\theta_{4}w_{4})^{2}\}.\end{aligned}$$
We now want to bound all the terms on the right-hand side of the above relation by quantities independent of the values of $\theta$. From the fact that $\theta \le 1$ we have that $$((y_{5}-y_{4})\theta_{4}w_{4})^{2}\lesssim ((y_{5}-y_{4})w_{4})^{2}.$$ The other terms are bounded as follows $$\begin{aligned}
((y_{5}-y_{4})\theta_5w_{5})^{2}
&\leq &
((y_{5}-y_{4})w_{5})^{2} \lesssim
((y_{5}-y_{4})w_{5})^{2} (\frac{y_{6}-y_{5}}{y_{4}-y_{6}})^{2}\\
&= &(((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})\frac{y_{5}-y_{4}}{y_{4}-y_{6}}-(y_{5}-y_{4})w_{4})^{2}\\
&\lesssim
&((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}(\frac{y_{5}-y_{4}}{y_{4}-y_{6}})^{2}+((y_{5}-y_{4})w_{4})^{2}\\
&\lesssim
& ((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}+((y_{5}-y_{4})w_{4})^{2},
\end{aligned}$$ and also $$\begin{aligned}
((y_{6}-y_{5})\theta_{5}w_{5}+(y_{4}-y_{6})\theta_{4}w_{4})^{2}
&=&((y_{6}-y_{5})\theta_{5}w_{5}+(y_{4}-y_{6})\theta_{5}w_{4} -(y_{4}-y_{6})(\theta_{5}-\theta_{4})w_{4})^{2}\\
&=&(((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})\theta_{5} -(y_{4}-y_{6})w_{4}\frac{y_{5}-y_{4}}{h})^{2}\\
&\lesssim&((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}+((y_{5}-y_{4})w_{4})^{2}.
\end{aligned}$$
Hence we get $$\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}
\lesssim\frac{1}{|K|}\{((y_{5}-y_{4})w_{4})^{2}+((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}\}.\\$$
In this case, $$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}
\gtrsim \frac{1}{|K|}\{((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}\}.\\$$
Since $K'$ denotes the element sharing $(x_4,y_4)$ with $K$, we obtain $$\begin{aligned}
\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K'}^{2}
&\gtrsim &\frac{1}{|K'|}
\{\min\{|y_{5}^{K'}-y_{4}^{K'}|,|y_{4}^{K'}-y_{6}^{K'}|,|y_{6}^{K'}-y_{5}^{K'}|\}^{2}w_{4}^{2}\}\\
&=&\frac{1}{|K|}
\{\min\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}^{2}w_{4}^{2}\}\\
&=&\frac{1}{|K|}\{(y_{5}-y_{4})^{2}w_{4}^{2}\},
\end{aligned}$$ so $$\left\|\frac{\partial(\theta w)}{\partial x}\right\|_{0,K\bigcup K'}^{2}
\gtrsim
\frac{1}{|K|}\{((y_{5}-y_{4})w_{4})^{2}+((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}\}
\gtrsim \left\|\frac{\partial (I_{h}(\theta w))}{\partial
x}\right\|_{0,K}^{2}.$$ This completes the proof in case (a).
**Proof of (b).** In this case we have that $E_{min}\neq\{(x_4,y_4),(x_5,y_5)\}$, which is equivalent to $|y_{5}-y_{4}|>\min\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}$.
Since $\theta_{4}>\theta_{5}$ (we cannot have $\theta_{4}=\theta_{5}$ in this case, because $\theta_{4}=\theta_{5}$ implies $y_4=y_5$), we obtain $|y_6-y_5|<|y_4-y_6|$. It is then easy to see that $|y_{6}-y_{5}|=\min\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}$. Then $$\theta_{5}\lesssim h^{-1}
\min\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}=h^{-1}
|y_{6}-y_{5}|.$$ So from , we have $$\begin{aligned}
\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}
&\thickapprox
&\frac{1}{|K|}\{((y_{5}-y_{4})\theta_{5}w_{5})^{2}+((y_{5}-y_{4})\theta_{4}w_{4})^{2}+((y_{6}-y_{5})\theta_{5}w_{5}+(y_{4}-y_{6})\theta_{4}w_{4})^{2}\}\\
&\lesssim &\frac{1}{|K|}
\{((y_{5}-y_{4})\theta_{5}w_{5})^{2}+((y_{5}-y_{4})\theta_{4}w_{4})^{2}+((y_{6}-y_{5})\theta_{5}w_{5})^{2}+((y_{4}-y_{6})\theta_{4}w_{4})^{2}\}\\
&\lesssim &\frac{1}{|K|}
\{((y_{6}-y_{5})w_{5})^{2}+((y_{4}-y_{6})w_{4})^{2}\}.
\end{aligned}$$
In this case, $$\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K}^{2}
\gtrsim \frac{1}{|K|}\{((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}\}.\\$$
Since the partition ${\mathcal{T}_{H}}$ is uniform (see Figure \[fig: 2 point fig\]), and $K'$ is the element sharing the same point $(x_4,y_4)$ with $K$, we know that the values of $\theta$ at midpoints of $K'$ are all nonzero. Observe that the edge opposite to point $(x_4,y_4)$ in $K'_4$ is a parallel translation of the edge opposite to point $(x_4,y_4)$ in $K_4$. That is to say, point $(x_4,y_4)$ in $K'$ (also point 4 in $K$) is just the midpoint opposite to edge with $\min\{|y_{5}^{K'}-y_{4}^{K'}|,|y_{4}^{K'}-y_{6}^{K'}|,|y_{6}^{K'}-y_{5}^{K'}|\}$. By Proposition \[proposition: 3 point property\], we have $$\begin{aligned}
\left\|\frac{\partial (\theta w)}{\partial x}\right\|_{0,K'}^{2}
&\gtrsim &\frac{1}{|K'|}
\{\max\{|y_{5}^{K'}-y_{4}^{K'}|,|y_{4}^{K'}-y_{6}^{K'}|,|y_{6}^{K'}-y_{5}^{K'}|\}^{2}w_{4}^{2}\}\\
&=&\frac{1}{|K|}
\{\max\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}^{2}w_{4}^{2}\}.
\end{aligned}$$
Combining the last two inequalities, we have
$$\begin{aligned}
\left\|\frac{\partial(\theta w)}{\partial x}\right\|_{0,K\bigcup K'}^{2}
&\gtrsim
&\frac{1}{|K|}\{((y_{6}-y_{5})w_{5}+(y_{4}-y_{6})w_{4})^{2}+\max\{|y_{5}-y_{4}|,|y_{4}-y_{6}|,|y_{6}-y_{5}|\}^{2}w_{4}^{2}\}\\
&\gtrsim
&\frac{1}{|K|}\{((y_{6}-y_{5})w_{5})^{2}+((y_{4}-y_{6})w_{4})^{2}\}\\
&\gtrsim &\left\|\frac{\partial (I_{h}(\theta w))}{\partial x}\right\|_{0,K}^{2}.\\\end{aligned}$$
This completes the proof in case (b), and also the proof of the Lemma.
\[lemma: fine grid\] For any $w\in V_{h}$, if $I_{H}w=0$, then for any $1\leq i \leq L$, $$\left\|\frac{\partial (I_{h}(\theta_{i} w))}{\partial x}\right\|_{0}^{2}\lesssim
\left\|\frac{\partial (\theta_{i} w)}{\partial x}\right\|_{0}^{2}.$$
The estimate follows from the local (element-wise) estimates given in Lemmas \[lemma: 3 point stability\], \[lemma: 1 point stability\], and \[lemma: 2 point stability\], and summation over all elements from ${\mathcal{T}_{H}}$.
Proof of Theorem \[theorem: the main theorem\]\[section:Proof\]
===================================================================
In this section we prove the convergence result that we have already stated in Section \[section:Theorem\].
For any angle of rotation $\omega\in [0,\pi]$, the two-level iteration with coarse space $V_H$ and line (block) Gauss-Seidel smoother is a uniformly convergent method. In fact, we have $$ \|E_{TL}\|_{a}^{2}\leq 1-\frac{1}{C},$$ with constant $C$ independent of $\epsilon$ and $h$.
From Lemma \[lemma: two-level by ludmil\], we only need to prove $\sup\limits_{v\in V_{h}}\frac{\|(I-\Pi_{*})v\|_{*}^{2}}{\|v\|_{a}^{2}}\leq
C$.
For any $v\in V_h$, let $w:=v-I_{H}v$ and $w_{i}:=I_{h}(\theta_{i}(y)w)$. It is easy to see that $I_H w=0$ and $$\sum_i w_{i}=\sum_i I_{h}(\theta_{i}(y)w)=I_{h}\sum_i(\theta_{i}(y)w)=I_h w=w.$$
Then $$\begin{aligned}
\sup_{v\in V_{h}}\frac{\|(I-\Pi_{*})v\|_{*}^{2}}{\|v\|_{a}^{2}}
&\leq &\sup_{v\in V_{h}}\frac{\|(I-I_H)v\|_{*}^{2}}{\|v\|_{a}^{2}}\\
&=& \sup_{v\in V_{h}}\frac{\|w\|_{*}^{2}}{\|v\|_{a}^{2}}
= \sup_{v\in V_{h}}\inf_{\sum_{i}\tilde{w}_{i}=w}\frac{\sum_{i}\|\tilde{w}_{i}\|_{a}^{2}}{\|v\|_{a}^{2}}
\leq \sup_{v\in V_{h}}
\frac{\sum_{i}\|w_i\|_{a}^{2}}{\|v\|_{a}^{2}}.
\end{aligned}$$
So we only need to prove that, for any $v\in V_h$, $\sum_{i}\|w_i\|_{a}^{2}\lesssim \|v\|_{a}^{2}$.
First, the decomposition is stable in $L^{2}$, $$\label{stable1}
\sum_{i}\|w_{i}\|_{0}^{2}\lesssim \|\sum_{i}w_{i}\|_{0}^{2}=\|w\|_{0}^{2}\lesssim
\sum_{i}\|w_{i}\|_{0}^{2}.$$ Next, since each $\theta_{i}$ depends only on $y$, $$\begin{aligned}
\sum\limits_{i}\|\partial_{x}w_{i}\|_{0}^{2}
&=&\sum\limits_{i}\|\partial_{x}(I_{h}(\theta_{i}(y)w))\|_{0}^{2}\nonumber\\
&\lesssim &\sum\limits_{i}\|\partial_{x}(\theta_{i}(y)w)\|_{0}^{2} \quad(\mbox{by Lemma \ref{lemma: fine grid}})\nonumber\\
&= &\sum\limits_{i}\|\theta_{i}(y)\partial_{x}w\|_{0}^{2}\label{stable2}\\
&= &\sum\limits_{i}\|\theta_{i}(y)\|^{2}\|\partial_{x}w\|_{0}^{2}\nonumber\\
&\lesssim &\|\partial_{x}w\|_{0}^{2},\nonumber
\end{aligned}$$ then $$\begin{aligned}
\sum\limits_{i}\|w_{i}\|_a^2
&= &\sum\limits_{i}\|\partial_{x}w_{i}\|_{0}^{2}+\sum\limits_{i}\epsilon\|\partial_{y}w_{i}\|_{0}^{2}\\
&\lesssim &\sum\limits_{i}\|\partial_{x}w_{i}\|_{0}^{2}+\sum\limits_{i}\epsilon h^{-2}\|w_{i}\|_{0}^{2} \quad(\mbox{by inverse inequality})\\
&\lesssim &\|\partial_{x}w\|_{0}^{2}+\epsilon h^{-2}\|w\|_{0}^{2} \quad(\mbox{by (\ref{stable1}) and (\ref{stable2})})\\
&=&\|\partial_{x}(v-I_{H}v)\|_{0}^{2}+\epsilon h^{-2}\|v-I_{H}v\|_{0}^{2}\\
&\lesssim &\|\partial_{x}v\|_{0}^{2}+\epsilon
h^{-2}h^{2}|v|_{1}^{2} \quad(\mbox{by Lemma\,} \ref{lemma: two level convergence})\\
&\lesssim &|v|_{a}^{2}.
\end{aligned}$$
Numerical Experiments\[section:Numerical\]
==========================================
Tests for two-level method on a rotated uniform mesh
----------------------------------------------------
We first test the performance of the two-level iterative method and its convergence properties with respect to $\epsilon$ and $h$. We pick as the initial triangulation a $4\times 4$ mesh with a characteristic mesh size $h_0= \frac{1}{2}\sqrt{2}$ as shown in Figure \[fig:three-domains\]. We then apply the two-level method described earlier on a sequence of meshes with mesh sizes $h_k=2^{-k}h_0$, $k=1,\ldots,6$.
The energy norm of the error propagation operator of the two-level method, $\|E_{TL}\|_{a}$, is depicted in Figure \[fig:uniform\]. The results show that the two-level method is uniformly convergent with respect to $\epsilon$ and $h$, which agrees with the theoretical results proved in the previous sections.
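The measurement procedure can be sketched in a few lines of Python. The toy below uses a 1D Poisson model problem with linear-interpolation prolongation and damped Jacobi smoothing, rather than the anisotropic bilinear discretization of this paper, so it only illustrates how $\|E_{TL}\|_{a}$ is assembled and evaluated on a sequence of refined meshes; all parameter choices are illustrative.

```python
import numpy as np

def poisson_1d(n):
    # Tridiagonal FD matrix for -u'' on n interior points of (0,1), h = 1/(n+1)
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def prolongation(n_c, n_f):
    # Linear interpolation from n_c coarse to n_f = 2*n_c + 1 fine points
    P = np.zeros((n_f, n_c))
    for j in range(n_c):
        i = 2 * j + 1            # fine index of coarse node j
        P[i, j] = 1.0
        P[i - 1, j] = 0.5
        P[i + 1, j] = 0.5
    return P

def two_grid_energy_norm(n, nu=1, omega=2.0 / 3.0):
    A = poisson_1d(n)
    n_c = (n - 1) // 2
    P = prolongation(n_c, n)
    A_c = P.T @ A @ P
    S = np.eye(n) - omega * np.diag(1.0 / np.diag(A)) @ A   # damped Jacobi
    C = np.eye(n) - P @ np.linalg.solve(A_c, P.T @ A)       # coarse correction
    E = np.linalg.matrix_power(S, nu) @ C @ np.linalg.matrix_power(S, nu)
    # Energy norm ||E||_A = ||A^{1/2} E A^{-1/2}||_2 via eigendecomposition
    w, V = np.linalg.eigh(A)
    A_half = V @ np.diag(np.sqrt(w)) @ V.T
    A_mhalf = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return np.linalg.norm(A_half @ E @ A_mhalf, 2)
```

Evaluating `two_grid_energy_norm(n)` for $n=15,31,63$ gives contraction numbers well below one that are essentially independent of $h$, mimicking the uniform convergence seen in Figure \[fig:uniform\].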
Tests for two-level method on a general unstructured mesh
---------------------------------------------------------
Similarly to the case of the uniform mesh, for a general unstructured mesh we choose $h_0=0.9$ as the maximum diameter of the triangles on the coarsest mesh $\mathcal{T}_0$, as shown in Figure \[fig:three-general-meshes\]. This coarsest mesh is then refined $6$ times to obtain a sequence of triangulations with characteristic mesh sizes $h_k=2^{-k}h_0$, $k=1,\ldots,6$.
\[0.33\][![Plot of unstructured grids used in the numerical examples for three values of the angle of rotation of anisotropy $\omega=0$, $\omega=\pi/6$ and $\omega=\pi/4$. \[fig:three-general-meshes\]](g0fine "fig:")]{} \[0.33\][![Plot of unstructured grids used in the numerical examples for three values of the angle of rotation of anisotropy $\omega=0$, $\omega=\pi/6$ and $\omega=\pi/4$. \[fig:three-general-meshes\]](g30fine "fig:")]{} \[0.33\][![Plot of unstructured grids used in the numerical examples for three values of the angle of rotation of anisotropy $\omega=0$, $\omega=\pi/6$ and $\omega=\pi/4$. \[fig:three-general-meshes\]](g45fine "fig:")]{}
The energy norm of the error propagation operator for the two-level method, $\|E_{TL}\|_{a}$, is shown in Figure \[fig:general\]. The uniform convergence is clearly seen from the plots. A theoretical justification of such uniform convergence is, however, much more difficult and is a topic of current and future research.
---
abstract: 'We investigate the vacuum expectation value of the fermionic current induced by a magnetic flux in a (2+1)-dimensional conical spacetime in the presence of a circular boundary. On the boundary the fermionic field obeys the MIT bag boundary condition. For irregular modes, a special case of boundary conditions at the cone apex is considered, when the MIT bag boundary condition is imposed at a finite radius, which is then taken to zero. We observe that the vacuum expectation values for both the charge density and the azimuthal current are periodic functions of the magnetic flux, with a period equal to the flux quantum, whereas the expectation value of the radial component vanishes. For both exterior and interior regions, the expectation values of the current are decomposed into boundary-free and boundary-induced parts. For a massless field the boundary-free part in the vacuum expectation value of the charge density vanishes, whereas the presence of the boundary induces a nonzero charge density. Two integral representations are given for the boundary-free part in the case of a massive fermionic field for arbitrary values of the opening angle of the cone and of the magnetic flux. The behavior of the induced fermionic current is investigated in various asymptotic regions of the parameters. At distances from the boundary larger than the Compton wavelength of the fermion particle, the vacuum expectation values decay exponentially, with a decay rate depending on the opening angle of the cone. We make a comparison with results already known from the literature for some particular cases.'
author:
- |
E. R. Bezerra de Mello$^{1}$[^1], V. B. Bezerra$^{1}$[^2], A. A. Saharian$^{1,2}$[^3], V. M. Bardeghyan$^{2}$\
\
*$^{1}$Departamento de Física, Universidade Federal da Paraíba*\
*58.059-970, Caixa Postal 5.008, João Pessoa, PB, Brazil*\
*$^2$Department of Physics, Yerevan State University,*\
*Alex Manoogian Street, 0025 Yerevan, Armenia*
title: |
Fermionic current densities induced by magnetic flux\
in a conical space with a circular boundary
---
PACS numbers: 03.70.+k, 04.60.Kz, 11.27.+d
Introduction
============
Topological defects are inevitably produced during symmetry breaking phase transitions and play an important role in many fields of physics. They appear in different condensed matter systems including superfluids, superconductors and liquid crystals. Moreover, symmetry breaking phase transitions have several cosmological consequences and, within the framework of grand unified theories, various types of topological defects are predicted to be formed in the early universe [@Vile85]. They provide an important link between particle physics and cosmology. Among various types of topological defects, cosmic strings are of special interest. They are candidates for producing a number of interesting physical effects, such as the generation of gravitational waves, gamma ray bursts and high-energy cosmic rays. Recently, cosmic strings have attracted renewed interest, partly because a variant of their formation mechanism has been proposed in the framework of brane inflation [@Sara02].
In the simplest theoretical model describing the infinite straight cosmic string, the spacetime is locally flat except on the string, where it has a Dirac-delta shaped Riemann curvature tensor. From the point of view of quantum field theory, the corresponding non-trivial topology induces non-zero vacuum expectation values for several physical observables. Explicit calculations for the geometry of a single idealized cosmic string have been developed for different fields [@Hell86]-[@Beze10]. Moreover, vacuum polarization effects by higher-dimensional composite topological defects, constituted by a cosmic string and a global monopole, are investigated in Refs. [@Beze06Comp] for scalar and fermionic fields. The geometry of a cosmic string in the background of de Sitter spacetime has recently been considered in [@Beze09dS].
Another type of vacuum polarization arises in the presence of boundaries. The imposed boundary conditions on quantum fields alter the zero-point fluctuation spectrum and result in additional shifts in the vacuum expectation values of physical quantities. This is the well-known Casimir effect (for a review see [@Most97]). Note that the Casimir forces between material boundaries are presently attracting much experimental attention [@Klim09]. In Refs. [@Brev95]-[@Beze08Ferm], both types of sources for the polarization of the vacuum, namely a cylindrical boundary and a cosmic string (with the boundary coaxial with the string), were studied for scalar, electromagnetic and fermionic fields. The case of a scalar field was considered in an arbitrary number of spacetime dimensions, whereas the problems for the electromagnetic and fermionic fields were studied in four dimensional spacetime. Continuing this line of investigation, in the present paper we study the fermionic current induced by a magnetic flux in a (2+1)-dimensional conical space with a circular boundary.
As is well known, field theoretical models in 2+1 dimensions exhibit a number of interesting effects, such as parity violation, flavour symmetry breaking, and fractionalization of quantum numbers (see Refs. [@Dese82]-[@Dunn99]). An important aspect is the possibility of giving a topological mass to the gauge bosons without breaking gauge invariance. Field theories in 2+1 dimensions provide simple models in particle physics, and related theories also arise in the long-wavelength description of certain planar condensed matter systems, including models of high-temperature superconductivity. An interesting application of Dirac theory in 2+1 dimensions has recently appeared in nanophysics. In a sheet of hexagons from the graphite structure, known as graphene, the long-wavelength description of the electronic states can be formulated in terms of the Dirac-like theory of massless spinors in (2+1)-dimensional spacetime with the Fermi velocity playing the role of the speed of light (for a review see Ref. [@Cast09]). One-loop quantum effects induced by the non-trivial topology of cylindrical and toroidal graphene nanotubes have recently been considered in Refs. [@Bell09]. The vacuum polarization in graphene with a topological defect is investigated in Ref. [@Site08] within the framework of the long-wavelength continuum model.
The interaction of a magnetic flux tube with a fermionic field gives rise to a number of interesting phenomena, such as the Aharonov-Bohm effect, parity anomalies, formation of a condensate and generation of exotic quantum numbers. For background Minkowski spacetime, the combined effects of the magnetic flux and boundaries on the vacuum energy have been studied in Refs. [@Lese98; @Bene00]. In the present paper we investigate the vacuum expectation value of the fermionic current induced by a vortex configuration of a gauge field in a (2+1)-dimensional conical space with a circular boundary. We assume that on the boundary the fermionic field obeys the MIT bag boundary condition. The induced fermionic current is among the most important quantities that characterize the properties of the quantum vacuum. Although the corresponding operator is local, due to the global nature of the vacuum, this quantity carries important information about the global properties of the background spacetime. In addition to describing the physical structure of the quantum field at a given point, the current acts as the source in the Maxwell equations. It therefore plays an important role in modelling self-consistent dynamics involving the electromagnetic field.
From the point of view of the physics in the region outside the conical defect core, the geometry considered in the present paper can be viewed as a simplified model for the non-trivial core. This model presents a framework in which the influence of finite core effects on physical processes in the vicinity of the conical defect can be investigated. In particular, it enables one to specify conditions under which the idealized model with a core of zero thickness can be used. The corresponding results may shed light upon features of finite core effects in more realistic models, including those used for defects in crystals and superfluid helium. In addition, the problem considered here is of interest as an example with combined topological and boundary-induced quantum effects, in which the vacuum characteristics can be found in closed analytic form.
The organization of the paper is as follows. In the next section we consider the complete set of solutions to the Dirac equation in the region outside a circular boundary on which the field obeys the MIT bag boundary condition. Shrinking the radius of the circle to zero, we clarify the structure of the eigenspinors for the boundary-free geometry. These eigenspinors are used in Sect. \[sec:BoundFree\] for the evaluation of the vacuum expectation value of the fermionic current density. Two integral representations are provided for the charge density and azimuthal component. In Sect. \[sec:ExtFC\], we consider the vacuum expectation values in the region outside a circular boundary. They are decomposed into boundary-free and boundary-induced parts. Rapidly convergent integral representations for the latter are obtained. A similar investigation for the region inside a circular boundary is presented in Sect. \[sec:Int\]. The main results are summarized in Sect. \[sec:Conc\]. In Appendix \[sec:IntRep\] we derive two integral representations for the series involving the modified Bessel functions. These representations are used to obtain the fermionic current densities in the boundary-free geometry. In Appendix \[sec:App2New\], we compare the results of the present paper, in the special case of a magnetic flux in (2+1)-dimensional Minkowski spacetime, with those from the literature. In Appendix \[sec:App2\] we show that the special mode does not contribute to the vacuum expectation value of the fermionic current in the region inside a circular boundary.
Model and the eigenspinors in the exterior region {#sec:Ext}
=================================================
In this paper we consider a two-component spinor field $\psi $, propagating on a $(2+1)$-dimensional background spacetime with a conical singularity described by the line-element$$ds^{2}=g_{\mu \nu }dx^{\mu }dx^{\nu }=dt^{2}-dr^{2}-r^{2}d\phi ^{2},
\label{ds21}$$where $r\geqslant 0$, $0\leqslant \phi \leqslant \phi _{0}$, and the points $%
(r,\phi )$ and $(r,\phi +\phi _{0})$ are to be identified. We are interested in the change of the vacuum expectation value (VEV) of the fermionic current induced by a magnetic flux in the presence of a circular boundary concentric with the apex of the cone.
The dynamics of a massive spinor field is governed by the Dirac equation $$i\gamma ^{\mu }(\nabla _{\mu }+ieA_{\mu })\psi -m\psi =0\ ,\;\nabla _{\mu
}=\partial _{\mu }+\Gamma _{\mu }, \label{Direq}$$where $A_{\mu }$ is the vector potential for the external electromagnetic field. In Eq. (\[Direq\]), $\gamma ^{\mu }=e_{(a)}^{\mu }\gamma ^{(a)}$ are the $2\times 2$ Dirac matrices in polar coordinates and $\Gamma _{\mu }$ is the spin connection. The latter is defined in terms of the flat space Dirac matrices, $\gamma ^{(a)}$, by the relation $$\Gamma _{\mu }=\frac{1}{4}\gamma ^{(a)}\gamma ^{(b)}e_{(a)}^{\nu }e_{(b)\nu
;\mu }\ , \label{Gammamu}$$where $;$ means the standard covariant derivative for vector fields. In the equations above, $e_{(a)}^{\mu }$, $a=0,1,2$, is the basis tetrad satisfying the relation $e_{(a)}^{\mu }e_{(b)}^{\nu }\eta ^{ab}=g^{\mu \nu }$, with $%
\eta ^{ab}$ being the Minkowski spacetime metric tensor. We assume that the field obeys the MIT bag boundary condition on the circle with radius $a$: $$\left( 1+in_{\mu }\gamma ^{\mu }\right) \psi \big|_{r=a}=0\ , \label{BCMIT}$$where $n_{\mu }$ is the outward oriented normal (with respect to the region under consideration) to the boundary. In particular, from Eq. (\[BCMIT\]) it follows that the normal component of the fermion current vanishes at the boundary, $n_{\mu }\bar{\psi}\gamma ^{\mu }\psi =0$, with $\bar{\psi}=\psi
^{\dagger }\gamma ^{0}$ being the Dirac adjoint, where the dagger denotes Hermitian conjugation. In this section we consider the region $r>a$, for which $n_{\mu }=-\delta _{\mu }^{1}$.
In (2+1)-dimensional spacetime there are two inequivalent irreducible representations of the Clifford algebra. In the first one we may choose the flat space Dirac matrices in the form$$\gamma ^{(0)}=\sigma _{3},\;\gamma ^{(1)}=i\sigma _{1},\;\gamma
^{(2)}=i\sigma _{2}, \label{DirMat}$$with $\sigma _{l}$ being Pauli matrices. In the second representation the gamma matrices can be taken as $\gamma ^{(0)}=-\sigma _{3}$, $\gamma
^{(1)}=-i\sigma _{1}$, $\gamma ^{(2)}=-i\sigma _{2}$. In what follows we use the representation (\[DirMat\]). The corresponding results for the second representation are obtained by changing the sign of the mass, $m\rightarrow
-m$. For the basis tetrads we use the representation below:$$\begin{aligned}
e_{(0)}^{\mu } &=&\left( 1,0,0\right) , \notag \\
e_{(1)}^{\mu } &=&\left( 0,\cos (q\phi ),-\sin (q\phi )/r\right) ,
\label{Tetrad} \\
e_{(2)}^{\mu } &=&\left( 0,\sin (q\phi ),\cos (q\phi )/r\right) , \notag\end{aligned}$$where the parameter $q$ is related to the opening angle of the cone by the relation $$q=2\pi /\phi _{0}. \label{qu}$$With this choice, for the Dirac matrices in the coordinate system given by the line element (\[ds21\]), we have the following representation $$\gamma ^{0}=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1%
\end{array}%
\right) ,\;\gamma ^{1}=i\left(
\begin{array}{cc}
0 & e^{-iq\phi } \\
e^{iq\phi } & 0%
\end{array}%
\right) ,\;\gamma ^{2}=\frac{1}{r}\left(
\begin{array}{cc}
0 & e^{-iq\phi } \\
-e^{iq\phi } & 0%
\end{array}%
\right) . \label{DirMat2}$$Consequently, the Dirac equation takes the form$$\Big[\gamma ^{\mu }\left( \partial _{\mu }+ieA_{\mu }\right) +\frac{1-q}{2r}%
\gamma ^{1}+im\Big]\psi =0, \label{Direq3}$$where the term with $\gamma ^{1}$ comes from the spin connection.
In what follows we assume the magnetic field configuration corresponding to a magnetic flux located in the region $r<a$. This will be implemented by considering the vector potential in the exterior region, $r>a$, as follows:$$A_{\mu }=(0,0,A). \label{Amu}$$In Eq. (\[Amu\]), $A_{2}=A$ is the covariant component of the vector potential in the coordinates $(t,r,\phi )$. For the so-called physical azimuthal component one has $A_{\phi }=-A/r$. The quantity $A$ is related to the magnetic flux $\Phi $ by the formula $A=-\Phi /\phi _{0}$. Though the magnetic field strength corresponding to (\[Amu\]) vanishes outside the flux, the non-trivial topology of the background spacetime leads to Aharonov-Bohm-like effects on physical observables. In particular, as will be seen below, the VEV of the fermionic current depends on the fractional part of the ratio of $\Phi $ to the flux quantum, $2\pi /e$.
Decomposing the spinor into upper and lower components, $\varphi _{+}$ and $%
\varphi _{-}$, respectively, from Eq. (\[Direq3\]) we get the following equations $$\left( \partial _{0}\pm im\right) \varphi _{\pm }\pm ie^{\mp iq\phi }\Big[%
\partial _{1}+\frac{1-q}{2r}\mp \frac{i}{r}(\partial _{2}+ieA)\Big]\varphi
_{\mp }=0. \label{DirEqphi}$$From here we find the second-order differential equation for the separate components: $$\left( \partial _{0}^{2}-\partial _{1}^{2}-\frac{1}{r}\partial _{1}-\frac{1}{%
r^{2}}\partial _{2}^{2}\mp 2i\frac{c_{\pm }}{r^{2}}\partial _{2}+\frac{%
c_{\pm }^{2}}{r^{2}}+m^{2}\right) \varphi _{\pm }=0, \label{DireqPhi1}$$with the notations $c_{\pm }=(q-1)/2\pm eA$.
For the positive energy solutions, the dependence on the time and angle coordinates is in the form $e^{-iEt+iqn\phi }$ with $E>0$ and $n=0,\pm 1,\pm
2,\ldots $. Now, from Eq. (\[DireqPhi1\]), for the radial function we obtain the Bessel equation with the solution in the region $r>a$: $$\varphi _{+}=Z_{|\lambda _{n}|}(\gamma r)e^{iqn\phi -iEt}, \label{phiext}$$where $\gamma \geqslant 0$,$$E=\sqrt{\gamma ^{2}+m^{2}},\;\lambda _{n}=q(n+\alpha +1/2)-1/2,
\label{gamma}$$and$$\alpha =eA/q=-e\Phi /2\pi . \label{alfatilde}$$In Eq. (\[phiext\]),$$Z_{|\lambda _{n}|}(\gamma r)=c_{1}J_{|\lambda _{n}|}(\gamma
r)+c_{2}Y_{|\lambda _{n}|}(\gamma r), \label{Zsig}$$with $J_{\nu }(x)$ and $Y_{\nu }(x)$ being the Bessel and Neumann functions. Note that in Eq. (\[alfatilde\]) the parameter $\alpha $ is the magnetic flux measured in units of the flux quantum $\Phi _{0}=2\pi /e$.
The lower component of the spinor, $\varphi _{-}$, is found from Eq. (\[DirEqphi\]), and for the positive energy eigenspinors we get$$\psi _{\gamma n}^{(+)}(x)=e^{iqn\phi -iEt}\left(
\begin{array}{c}
Z_{|\lambda _{n}|}(\gamma r) \\
\epsilon _{\lambda _{n}}\frac{\gamma e^{iq\phi }}{E+m}Z_{|\lambda
_{n}|+\epsilon _{\lambda _{n}}}(\gamma r)%
\end{array}%
\right) , \label{psisigpl}$$where $\epsilon _{\lambda _{n}}=1$ for $\lambda _{n}\geqslant 0$ and $%
\epsilon _{\lambda _{n}}=-1$ for $\lambda _{n}<0$. From the boundary condition (\[BCMIT\]) with $n_{\mu }=-\delta _{\mu }^{1}$ we find$$Z_{|\lambda _{n}|}(\gamma a)+\frac{\epsilon _{\lambda _{n}}\gamma }{E+m}%
Z_{|\lambda _{n}|+\epsilon _{\lambda _{n}}}(\gamma a)=0. \label{BCext}$$This condition relates the coefficients $c_{1}$ and $c_{2}$ in the linear combination (\[Zsig\]):$$\frac{c_{2}}{c_{1}}=-\frac{\bar{J}_{|\lambda _{n}|}^{(-)}(\gamma a)}{\bar{Y}%
_{|\lambda _{n}|}^{(-)}(\gamma a)}. \label{c21}$$Here and in what follows we use the notations (the notation with the upper sign is employed below)$$\bar{f}^{(\pm )}(z)=zf^{\prime }(z)+(\pm \sqrt{z^{2}+\mu ^{2}}\pm \mu
-\lambda _{n})f(z),\;\mu =ma, \label{barnot2}$$for a given function $f(z)$.
Hence, outside a circular boundary the positive energy eigenspinors are presented in the form$$\psi _{\gamma n}^{(+)}(x)=c_{0}e^{iqn\phi -iEt}\left(
\begin{array}{c}
g_{|\lambda _{n}|,|\lambda _{n}|}(\gamma a,\gamma r) \\
\epsilon _{\lambda _{n}}\frac{\gamma e^{iq\phi }}{E+m}g_{|\lambda
_{n}|,|\lambda _{n}|+\epsilon _{\lambda _{n}}}(\gamma a,\gamma r)%
\end{array}%
\right) , \label{psisig+}$$where$$g_{\nu ,\rho }(x,y)=\bar{Y}_{\nu }^{(-)}(x)J_{\rho }(y)-\bar{J}_{\nu
}^{(-)}(x)Y_{\rho }(y). \label{gsig}$$Using the properties of the Bessel functions it can be seen that $$g_{\nu ,\nu }(x,y)=g_{-\nu ,-\nu }(x,y),\;g_{\nu ,\nu +1}(x,y)=-g_{-\nu
,-\nu -1}(x,y). \label{gsig1}$$Note that the spinor (\[psisig+\]) is an eigenfunction of the operator $%
\widehat{J}=-(i/q)\partial _{\phi }+\sigma _{3}/2$, with the eigenvalue $%
j=n+1/2$, i.e.,$$\widehat{J}\psi _{\gamma n}^{(+)}(x)=j\psi _{\gamma n}^{(+)}(x),\;j=n+1/2.
\label{Jmom}$$
The coefficient $c_{0}$ in Eq. (\[psisig+\]) is determined from the orthonormalization condition for the eigenspinors: $$\int_{a}^{\infty }dr\int_{0}^{\phi _{0}}d\phi \,r\psi _{\gamma
n}^{(+)\dagger }(x)\psi _{\gamma ^{\prime }n^{\prime }}^{(+)}(x)=\delta
(\gamma -\gamma ^{\prime })\delta _{nn^{\prime }}\ . \label{ortcon}$$The integral over $r$ is divergent when $\gamma ^{\prime }=\gamma $ and, hence, the main contribution comes from the upper limit of the integration. In this case, we can replace the Bessel and Neumann functions, having in the arguments the radial coordinate $r$, by the corresponding asymptotic expressions for large values of their argument. In this way, for the normalization coefficient we find,$$c_{0}^{2}=\frac{2E\gamma }{\phi _{0}(E+m)}\left[ \bar{J}_{|\lambda
_{n}|}^{(-)2}(\gamma a)+\bar{Y}_{|\lambda _{n}|}^{(-)2}(\gamma a)\right]
^{-1}. \label{c0}$$The negative energy eigenspinors are constructed in a similar way and they are given by the expression $$\psi _{\gamma n}^{(-)}(x)=c_{0}e^{-iqn\phi +iEt}\left(
\begin{array}{c}
\epsilon _{\lambda _{n}}\frac{\gamma e^{-iq\phi }}{E+m}g_{|\lambda
_{n}|,|\lambda _{n}|+\epsilon _{\lambda _{n}}}(\gamma a,\gamma r) \\
g_{|\lambda _{n}|,|\lambda _{n}|}(\gamma a,\gamma r)%
\end{array}%
\right) , \label{psisig-}$$with the same normalization coefficient defined by Eq. (\[c0\]). Note that the positive and negative energy eigenspinors are related by the charge conjugation which can be written as $\psi _{\gamma n}^{(-)}=\sigma _{1}\psi
_{\gamma n}^{(+)\ast }$, where the asterisk means complex conjugate.
We can generalize the eigenspinors given above for a more general situation where the spinor field $\psi $ obeys quasiperiodic boundary condition along the azimuthal direction$$\psi (t,r,\phi +\phi _{0})=e^{2\pi i\chi }\psi (t,r,\phi ), \label{PerBC}$$with a constant parameter $\chi $, $|\chi |\leqslant 1/2$. With this condition, the exponential factor in the expressions for the eigenspinors has the form $e^{\pm iq(n+\chi )\phi \mp iEt}$ for the positive and negative energy modes (upper and lower signs respectively). The corresponding expressions for the eigenfunctions are obtained from those given above with the parameter $\alpha $ defined by $$\alpha =\chi -e\Phi /2\pi . \label{Replace}$$The same replacement generalizes the expressions for the VEVs of the fermionic current, given below, for the case of a field with periodicity condition (\[PerBC\]). The property, that the VEVs depend on the phase $%
\chi $ and on the magnetic flux in the combination (\[Replace\]), can also be seen by the gauge transformation $A_{\mu }=A_{\mu }^{\prime }+\partial
_{\mu }\Lambda (x)$, $\psi (x)=\psi ^{\prime }(x)e^{-ie\Lambda (x)}$, with the function $\Lambda (x)=A_{\mu }x^{\mu }$. The new function $\psi ^{\prime
}(x)$ satisfies the Dirac equation with $A_{\mu }^{\prime }=0$ and the quasiperiodicity condition similar to (\[PerBC\]) with the replacement $%
\chi \rightarrow \chi ^{\prime }=\chi -e\Phi /2\pi $.
Fermionic current in a boundary-free conical space {#sec:BoundFree}
==================================================
Before considering the fermionic current in the region outside a circular boundary, in this section we study the case of a boundary-free conical space with an infinitesimally thin magnetic flux placed at the apex of the cone. The corresponding vector potential is given by Eq. (\[Amu\]) for $r>0$. As is well known, the theory of von Neumann deficiency indices leads to a one-parameter family of allowed boundary conditions in the background of an Aharonov-Bohm gauge field [@Sous89]. In this paper, we consider a special case of boundary conditions at the cone apex, when the MIT bag boundary condition is imposed at a finite radius, which is then taken to zero (note that a similar approach, with Atiyah-Patodi-Singer type nonlocal boundary conditions, has been used in Refs. [@Bene00] for a magnetic flux in Minkowski spacetime). The VEVs of the fermionic current for other boundary conditions on the cone apex are evaluated in a way similar to that described below. The contribution of the regular modes is the same for all boundary conditions, and the results differ only in the parts related to the irregular modes.
Eigenspinors
------------
In order to clarify the structure of the eigenspinors in a boundary-free conical space, we consider the limit $a\rightarrow 0$ for Eqs. (\[psisig+\]) and (\[psisig-\]). In this limit, using the asymptotic formulae for the Bessel functions for small values of the arguments, for the modes with $%
j\neq -\alpha $ we find$$\begin{aligned}
\psi _{(0)\gamma j}^{(+)}(x) &=&c_{0}^{(0)}e^{iqj\phi -iEt}\left(
\begin{array}{c}
J_{\beta _{j}}(\gamma r)e^{-iq\phi /2} \\
\frac{\gamma \epsilon _{j}e^{iq\phi /2}}{E+m}J_{\beta _{j}+\epsilon
_{j}}(\gamma r)%
\end{array}%
\right) , \notag \\
\psi _{(0)\gamma j}^{(-)}(x) &=&c_{0}^{(0)}e^{-iqj\phi +iEt}\left(
\begin{array}{c}
\frac{\gamma \epsilon _{j}e^{-iq\phi /2}}{E+m}J_{\beta _{j}+\epsilon
_{j}}(\gamma r) \\
J_{\beta _{j}}(\gamma r)e^{iq\phi /2}%
\end{array}%
\right) , \label{psi0}\end{aligned}$$where$$\beta _{j}=q|j+\alpha |-\epsilon _{j}/2. \label{jbetj}$$Note that one has $\epsilon _{j}\beta _{j}=\lambda _{n}$. In Eqs. (\[psi0\]) and (\[jbetj\]), we have defined$$\epsilon _{j}=\left\{
\begin{array}{cc}
1, & \;j>-\alpha \\
-1, & \;j<-\alpha%
\end{array}%
\right. , \label{epsj}$$and the normalization coefficient is given by the expression$$c_{0}^{(0)2}=\frac{\gamma }{\phi _{0}}\frac{E+m}{2E}. \label{c00}$$
In the case when $\alpha =N+1/2$, with $N$ being an integer, the eigenspinors with $j\neq -\alpha $ are still given by Eqs. (\[psi0\]). The eigenspinors for the mode with $j=-\alpha $, obtained from Eqs. (\[psisig+\]) and (\[psisig-\]) in the limit $a\rightarrow 0$, have the form (\[psi0\]) with the replacements$$\begin{aligned}
J_{\beta _{j}}(z) &\rightarrow &(E+m)J_{1/2}(z)-\gamma Y_{1/2}(z), \notag \\
J_{\beta _{j}+\epsilon _{j}}(z) &\rightarrow &(E+m)J_{-1/2}(z)-\gamma
Y_{-1/2}(z), \label{jeqalfa}\end{aligned}$$and $\epsilon _{j}=-1$. The corresponding normalization coefficient is defined as $c_{0}^{(0)}=(2E)^{-1}\sqrt{\gamma /\phi _{0}}$. Taking into account the expressions for the cylinder functions with the orders $\pm 1/2$, the negative energy eigenspinors in this case are written as$$\psi _{(0)\gamma ,-\alpha }^{(-)}(x)=\left( \frac{E+m}{\pi \phi _{0}rE}%
\right) ^{1/2}e^{iq\alpha \phi +iEt}\left(
\begin{array}{c}
\frac{\gamma e^{-iq\phi /2}}{E+m}\sin (\gamma r-\gamma _{0}) \\
e^{iq\phi /2}\cos (\gamma r-\gamma _{0})%
\end{array}%
\right) , \label{psibetSp}$$where $\gamma _{0}=\arccos [\sqrt{(E-m)/2E}]$. Note that the eigenspinors obtained from (\[psi0\]) in the limits $\alpha \rightarrow (N+1/2)^{\pm }$ do not coincide with (\[psibetSp\]). For the limit from below (above), $%
\alpha \rightarrow (N+1/2)^{-}$ ($\alpha \rightarrow (N+1/2)^{+}$), the eigenspinors are given by Eq. (\[psibetSp\]) with the replacement $\gamma
_{0}\rightarrow \pi /2$ ($\gamma _{0}\rightarrow 0$). Hence, for the bag boundary condition on the cone apex, the eigenspinors in the boundary-free geometry are discontinuous at points $\alpha =N+1/2$. Notice that, in the presence of the circular boundary, the eigenspinors in the region outside the boundary, given by Eqs. (\[psisig+\]) and (\[psisig-\]), are continuous.
In general, the fermionic modes in the background of the magnetic vortex are divided into two classes, regular and irregular (square integrable) ones. In the problem under consideration, for given $q$ and $\alpha $, the irregular mode corresponds to the value of $j$ for which $q|j+\alpha |<1/2$. If we write the parameter $\alpha $ in the form$$\alpha =\alpha _{0}+n_{0},\;|\alpha _{0}|<1/2, \label{alf0}$$with $n_{0}$ an integer, then the irregular mode is present if $|\alpha _{0}|>(1-1/q)/2$. This mode corresponds to $j=-n_{0}-\mathrm{sgn}(\alpha _{0})/2$. Note that, in a conical space, under the condition $$|\alpha _{0}|\leqslant (1-1/q)/2, \label{condalf0}$$there are no square integrable irregular modes. As we have already mentioned, there is a one-parameter family of allowed boundary conditions for irregular modes, parametrized by the angle $\theta $, $0\leqslant
\theta <2\pi $ (see Ref. [@Sous89]). For $|\alpha _{0}|<1/2$, the boundary condition, used in deriving eigenspinors (\[psi0\]), corresponds to $\theta =3\pi /2$. If $\alpha $ is a half-integer, the irregular mode corresponds to $j=-\alpha $ and for the corresponding boundary condition one has $\theta =0$. Note that in both cases there are no bound states.
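The bookkeeping of Eqs. (\[alf0\]) and (\[condalf0\]) can be packaged in a few lines; the helper below (an illustration, not part of the formalism) locates the irregular mode, if present, for given $q$ and $\alpha $:

```python
import numpy as np

def irregular_mode(q, alpha):
    # Split alpha = alpha0 + n0 with |alpha0| < 1/2 (Eq. (alf0)); an irregular
    # (square-integrable) mode, q|j + alpha| < 1/2, exists iff
    # |alpha0| > (1 - 1/q)/2, and then sits at j = -n0 - sgn(alpha0)/2.
    n0 = int(np.round(alpha))
    alpha0 = alpha - n0
    if abs(alpha0) <= (1.0 - 1.0 / q) / 2.0:
        return None
    return -n0 - np.sign(alpha0) / 2.0

q = 1.5
j_irr = irregular_mode(q, alpha=0.4)       # alpha0 = 0.4 > (1-1/q)/2 = 1/6
assert j_irr is not None and q * abs(j_irr + 0.4) < 0.5
assert irregular_mode(q, alpha=0.1) is None   # condition (condalf0) holds
```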
Vacuum expectation value of the fermionic current
-------------------------------------------------
The VEV of the fermionic current, $j^{\mu }(x)=e\bar{\psi}\gamma ^{\mu }\psi
$, can be evaluated by using the mode sum formula$$\langle j^{\nu }(x)\rangle =e\sum_{j}\int_{0}^{\infty }d\gamma \,\bar{\psi}%
_{\gamma j}^{(-)}(x)\gamma ^{\nu }\psi _{\gamma j}^{(-)}(x), \label{FCMode}$$ where $\sum_{j}$ means the summation over $j=\pm 1/2,\pm 3/2,\ldots $. In this section we consider this VEV for a conical space in the absence of boundaries. The corresponding quantities will be denoted by subscript 0. For the geometry under consideration, the eigenspinors are given by expressions (\[psi0\]). Substituting them into Eq. (\[FCMode\]) one finds$$\begin{aligned}
\langle j^{0}(x)\rangle _{0} &=&\frac{eq}{4\pi }\sum_{j}\int_{0}^{\infty
}d\gamma \frac{\gamma }{E}\left[ (E-m)J_{\beta _{j}+\epsilon
_{j}}^{2}(\gamma r)+(E+m)J_{\beta _{j}}^{2}(\gamma r)\right] , \notag \\
\langle j^{2}(x)\rangle _{0} &=&\frac{eq}{2\pi r}\sum_{j}\epsilon
_{j}\int_{0}^{\infty }d\gamma \frac{\gamma ^{2}}{E}J_{\beta _{j}}(\gamma
r)J_{\beta _{j}+\epsilon _{j}}(\gamma r), \label{j020}\end{aligned}$$and the VEV of the radial component vanishes, $\langle j^{1}(x)\rangle
_{0}=0 $. In deriving Eqs. (\[j020\]), we have assumed that the parameter $%
\alpha $ is not a half-integer. When $\alpha $ is equal to a half-integer, the contribution of the mode with $j=-\alpha $ should be evaluated by using eigenspinors (\[psibetSp\]). The contribution for all other $j$ is still given by Eqs. (\[j020\]). As it will be shown below, both these contributions are separately zero and for half-integer values of $\alpha $ the renormalized VEV of the fermionic current vanishes.
In order to regularize expressions (\[j020\]) we introduce a cutoff function $e^{-s\gamma ^{2}}$ with the cutoff parameter $s>0$. At the end of the calculation the limit $s\rightarrow 0$ is taken. First, let us consider the charge density. The corresponding regularized expectation value is presented in the form$$\begin{aligned}
\langle j^{0}(x)\rangle _{0,\text{reg}} &=&\frac{eqm}{4\pi }%
\sum_{j}\int_{0}^{\infty }d\gamma \,\frac{\gamma e^{-s\gamma ^{2}}}{\sqrt{%
\gamma ^{2}+m^{2}}}\left[ J_{\beta _{j}}^{2}(\gamma r)-J_{\beta
_{j}+\epsilon _{j}}^{2}(\gamma r)\right] \notag \\
&&+\frac{eq}{4\pi }\sum_{j}\int_{0}^{\infty }d\gamma \,\gamma e^{-s\gamma
^{2}}\left[ J_{\beta _{j}}^{2}(\gamma r)+J_{\beta _{j}+\epsilon
_{j}}^{2}(\gamma r)\right] . \label{j00reg}\end{aligned}$$Using the representation$$\frac{1}{\sqrt{\gamma ^{2}+m^{2}}}=\frac{2}{\sqrt{\pi }}\int_{0}^{\infty
}dte^{-(\gamma ^{2}+m^{2})t^{2}}, \label{repres}$$we change the order of integrations in the first term of the right-hand side in Eq. (\[j00reg\]) and use the formula [@Prud86] $$\int_{0}^{\infty }d\gamma \,\gamma e^{-s\gamma ^{2}}J_{\beta }^{2}(\gamma r)=%
\frac{1}{2s}e^{-r^{2}/2s}I_{\beta }(r^{2}/2s), \label{intform}$$with $I_{\beta }(z)$ being the modified Bessel function. The second term on the right of Eq. (\[j00reg\]) is directly evaluated using Eq. (\[intform\]). As a result, we get the following integral representation for the regularized charge density:$$\begin{aligned}
\langle j^{0}(x)\rangle _{0,\text{reg}} &=&\frac{eqme^{m^{2}s}}{2(2\pi
)^{3/2}}\sum_{j}\int_{0}^{r^{2}/2s}dz\frac{z^{-1/2}e^{-m^{2}r^{2}/2z}}{\sqrt{%
r^{2}-2zs}}e^{-z}\left[ I_{\beta _{j}}(z)-I_{\beta _{j}+\epsilon _{j}}(z)%
\right] \notag \\
&&+\frac{eqe^{-r^{2}/2s}}{8\pi s}\sum_{j}\left[ I_{\beta
_{j}}(r^{2}/2s)+I_{\beta _{j}+\epsilon _{j}}(r^{2}/2s)\right] .
\label{j00reg1}\end{aligned}$$The renormalization procedure for this expression is described below.
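As a side check, the integral formula (\[intform\]) used in this step is easy to verify numerically. The sketch below (SciPy is assumed; the values of $\beta $, $r$, $s$ are arbitrary test choices) compares the two sides:

```python
import numpy as np
from scipy import integrate, special

def lhs(beta, r, s):
    # left-hand side of (intform): integral of γ e^{-s γ²} J_β(γ r)² over γ;
    # the Gaussian cutoff makes the integrand negligible beyond γ ≈ 40
    f = lambda g: g * np.exp(-s * g**2) * special.jv(beta, g * r)**2
    val, _ = integrate.quad(f, 0.0, 40.0, limit=200)
    return val

def rhs(beta, r, s):
    # right-hand side: (1/2s) e^{-r²/2s} I_β(r²/2s); ive(β, z) = e^{-z} I_β(z)
    return special.ive(beta, r**2 / (2.0 * s)) / (2.0 * s)

print(lhs(1.7, 1.0, 0.3), rhs(1.7, 1.0, 0.3))  # the two values agree
```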
Now we turn to the azimuthal component of the fermionic current. Using the relation$$zJ_{\beta _{j}+\epsilon _{j}}(z)=\beta _{j}J_{\beta _{j}}(z)-\epsilon
_{j}zJ_{\beta _{j}}^{\prime }(z), \label{relBess}$$we write the corresponding regularized expression in the form$$\langle j^{2}(x)\rangle _{0,\text{reg}}=\frac{eq}{2\pi r^{2}}\sum_{j}\left(
\epsilon _{j}\beta _{j}-r\partial _{r}/2\right) \int_{0}^{\infty }d\gamma
\,\gamma \frac{e^{-s\gamma ^{2}}J_{\beta _{j}}^{2}(\gamma r)}{\sqrt{\gamma
^{2}+m^{2}}}. \label{j02reg}$$In a way similar to that used for the first term on the right-hand side of Eq. (\[j00reg\]), the azimuthal current is presented in the form$$\langle j^{2}(x)\rangle _{0,\text{reg}}=\frac{eqe^{m^{2}s}}{(2\pi
)^{3/2}r^{2}}\sum_{j}\int_{0}^{r^{2}/2s}dz\frac{z^{1/2}e^{-m^{2}r^{2}/2z}}{%
\sqrt{r^{2}-2zs}}e^{-z}\left[ I_{\beta _{j}}(z)-I_{\beta _{j}+\epsilon
_{j}}(z)\right] . \label{j02reg1}$$In deriving this representation we employed the relation$$\left( \epsilon _{j}\beta _{j}-r\partial _{r}/2\right) e^{-r^{2}y}I_{\beta
_{j}}(r^{2}y)=ze^{-z}\left[ I_{\beta _{j}}(z)-I_{\beta _{j}+\epsilon _{j}}(z)%
\right] _{z=r^{2}y}, \label{relBesMod}$$for the modified Bessel function.
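Relation (\[relBess\]) combines the two standard Bessel recurrences for $\epsilon _{j}=\pm 1$; a quick numerical check (SciPy assumed; $\beta $ and $z$ are arbitrary test values):

```python
from scipy.special import jv, jvp

beta, z = 1.7, 2.3
for eps in (1, -1):
    # z J_{β+ε}(z) = β J_β(z) - ε z J'_β(z), Eq. (relBess)
    lhs = z * jv(beta + eps, z)
    rhs = beta * jv(beta, z) - eps * z * jvp(beta, z)
    print(eps, lhs, rhs)  # equal for both signs of ε
```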
The regularized VEVs of both the charge density and the azimuthal current are expressed in terms of the series$$\mathcal{I}(q,\alpha ,z)=\sum_{j}I_{\beta _{j}}(z). \label{seriesI0}$$If we write the parameter $\alpha $ related to the magnetic flux in the form (\[alf0\]), then Eq. (\[seriesI0\]) becomes$$\mathcal{I}(q,\alpha ,z)=\sum_{n=0}^{\infty }\left[ I_{q(n+\alpha
_{0}+1/2)-1/2}(z)+I_{q(n-\alpha _{0}+1/2)+1/2}(z)\right] , \label{seriesI1}$$which explicitly shows the independence of the series on $n_{0}$. Note that for the second series appearing in the expressions for the VEVs of the fermionic current we have $$\sum_{j}I_{\beta _{j}+\epsilon _{j}}(z)=\mathcal{I}(q,-\alpha _{0},z).
\label{seriesI2}$$We conclude that the VEVs of the fermionic current depend on $\alpha _{0}$ alone and, hence, these VEVs are periodic functions of $\alpha $ with period 1.
When the parameter $\alpha $ is equal to a half-integer, which means $|\alpha
_{0}|=1/2$, the contribution to the VEVs from the modes with $j\neq -\alpha $ is still given by Eqs. (\[j00reg1\]) and (\[j02reg1\]). It is easily seen that for the case under consideration $\sum_{j\neq -\alpha }\left[
I_{\beta _{j}}(x)-I_{\beta _{j}+\epsilon _{j}}(x)\right] =0$. The contribution of the mode $j=-\alpha $ is evaluated by using the eigenspinors (\[psibetSp\]). A simple evaluation shows that this contribution vanishes as well. As regards the second term on the right-hand side of Eq. (\[j00reg1\]), it will be shown below that this term does not contribute to the renormalized VEV of the charge density. Hence, the renormalized VEVs for both the charge density and the azimuthal current vanish when the parameter $\alpha $ is equal to a half-integer. Note that in the limit $%
\alpha _{0}\rightarrow \pm 1/2$, $|\alpha _{0}|<1/2$, one has$$\lim_{\alpha _{0}\rightarrow \pm 1/2}\sum_{\delta =\pm 1}\delta \mathcal{I}%
(q,\delta \alpha _{0},z)=\mp \sqrt{2/\pi z}e^{-z}, \label{RelLim}$$and the expressions for the regularized VEVs are discontinuous at $\alpha
_{0}=\pm 1/2$.
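The limit (\[RelLim\]) reflects a telescoping of the series (\[seriesI1\]) as $\alpha _{0}\rightarrow \pm 1/2$ and can be checked numerically with a truncated sum (SciPy assumed; $q$, $z$ and the small offset from $\alpha _{0}=1/2$ are arbitrary test choices):

```python
import numpy as np
from scipy.special import iv

def I_series(q, a0, z, nmax=80):
    # truncated series (seriesI1): sum over n of
    # I_{q(n+a0+1/2)-1/2}(z) + I_{q(n-a0+1/2)+1/2}(z)
    n = np.arange(nmax)
    return np.sum(iv(q * (n + a0 + 0.5) - 0.5, z) + iv(q * (n - a0 + 0.5) + 0.5, z))

q, z = 2.5, 1.2
a0 = 0.5 - 1e-7          # approach α_0 → +1/2 from inside |α_0| < 1/2
diff = I_series(q, a0, z) - I_series(q, -a0, z)
print(diff, -np.sqrt(2.0 / (np.pi * z)) * np.exp(-z))  # both ≈ -0.219
```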
Renormalized VEV in a special case
----------------------------------
Before further considering the fermionic current for the general case of the parameters characterizing the conical structure and the magnetic flux, we study a special case which allows us to obtain simple expressions. It has been shown in [@Davi88; @Smit89; @Sour92] that when the parameter $q$ is an integer number, the scalar Green function in four-dimensional cosmic string spacetime can be expressed as a sum of $q$ images of the Minkowski spacetime function. Recently, the image method was also used in [@Beze06] to provide closed expressions for the massive scalar Green functions in a higher-dimensional cosmic string spacetime. The mathematical reason why the image method works in these applications is that the order of the modified Bessel functions appearing in the expressions for the VEVs becomes an integer number. As we have seen, in the fermionic case the order of the Bessel function depends not only on the integer angular quantum number $n=j-1/2$ but also on the factor $(q-1)/(2q)$ coming from the spin connection. For a charged fermionic field in the presence of a magnetic flux an additional term, the factor $\alpha $, is present as well. In the special case with $q$ being an integer and $$\alpha =1/2q-1/2, \label{alphaSpecial}$$the orders of the modified Bessel functions in Eqs. (\[j00reg1\]) and (\[j02reg1\]) become integer numbers: $\beta _{j}=q|n|$, $j=n+1/2$. In this case the series over $n$ is summed explicitly by using the formula [@Prud86]$$\sideset{}{'}{\sum}_{n=0}^{\infty }I_{qn}(x)=\frac{1}{2q}%
\sum_{k=0}^{q-1}e^{x\cos (2\pi k/q)}, \label{SerSp}$$where the prime on the summation sign means that the term $n=0$ should be halved.
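Formula (\[SerSp\]) can be verified directly; in the sketch below (SciPy assumed; $q$ and $x$ are arbitrary test values) the truncated series reproduces the finite sum of exponentials:

```python
import numpy as np
from scipy.special import iv

def lhs(q, x, nmax=60):
    # primed sum over n >= 0 of I_{qn}(x): the n = 0 term is halved,
    # and the tail decays fast enough that nmax terms suffice
    return 0.5 * iv(0, x) + sum(iv(q * n, x) for n in range(1, nmax))

def rhs(q, x):
    # (1/2q) * sum over k = 0..q-1 of exp(x cos(2πk/q))
    k = np.arange(q)
    return np.sum(np.exp(x * np.cos(2.0 * np.pi * k / q))) / (2.0 * q)

print(lhs(3, 2.0), rhs(3, 2.0))  # both ≈ 1.3541
```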
By making use of Eq. (\[SerSp\]), for the regularized VEV of charge density we find$$\begin{aligned}
\langle j^{0}(x)\rangle _{0,\text{reg}} &=&\frac{eme^{m^{2}s}}{(2\pi )^{3/2}}%
\sum_{k=1}^{q-1}\sin ^{2}(\pi k/q)\int_{0}^{r^{2}/2s}dz\frac{%
z^{-1/2}e^{-m^{2}r^{2}/2z}}{\sqrt{r^{2}-2zs}}e^{-2z\sin ^{2}(\pi k/q)}
\notag \\
&&+\frac{e}{4\pi s}\sum_{k=0}^{q-1}\cos ^{2}(\pi k/q)e^{-2(r^{2}/2s)\sin
^{2}(\pi k/q)}. \label{j00Spreg}\end{aligned}$$In the limit $s\rightarrow 0$, the only divergent contribution to the right-hand side comes from the term $k=0$. This term does not depend on the parameter $q$ and is the same as in the Minkowski spacetime in the absence of the magnetic flux. After subtracting the $k=0$ term, we take the limit $%
s\rightarrow 0$. The integral is expressed in terms of the Macdonald function $K_{1/2}(2mr\sin (\pi k/q))$ and for the renormalized charge density we find:$$\langle j^{0}(x)\rangle _{0,\text{ren}}=\frac{em}{4\pi r}\sum_{k=1}^{q-1}%
\sin (\pi k/q)e^{-2mr\sin (\pi k/q)}. \label{j00Spren}$$Note that the second term on the right-hand side of Eq. (\[j00reg1\]) does not contribute to the renormalized VEV. Expression (\[j00Spren\]) coincides with the result of Ref. [@Beze10] obtained by using the Green function approach.
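The reduction of the $z$ integral to the Macdonald function $K_{1/2}$ rests on the identity $\int_{0}^{\infty }dz\,z^{-1/2}e^{-p/z-bz}=\sqrt{\pi /b}\,e^{-2\sqrt{pb}}$, here with $p=m^{2}r^{2}/2$ and $b=2\sin ^{2}(\pi k/q)$; a numerical spot check (SciPy assumed; $p$ and $b$ are arbitrary positive test values):

```python
import numpy as np
from scipy import integrate

def lhs(p, b):
    # integral of z^{-1/2} e^{-p/z - b z} over z from 0 to infinity
    val, _ = integrate.quad(lambda z: z**-0.5 * np.exp(-p / z - b * z), 0.0, np.inf)
    return val

def rhs(p, b):
    # closed form sqrt(π/b) e^{-2 sqrt(pb)}, equivalent to a K_{1/2} function
    return np.sqrt(np.pi / b) * np.exp(-2.0 * np.sqrt(p * b))

print(lhs(0.7, 1.3), rhs(0.7, 1.3))  # the two values agree
```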
In a similar way, using (\[SerSp\]), for the regularized VEV of the azimuthal current we get the formula$$\langle j^{2}(x)\rangle _{0,\text{reg}}=\frac{e}{\pi r^{2}}\frac{e^{m^{2}s}}{%
\sqrt{2\pi }}\sum_{k=1}^{q-1}\sin ^{2}(\pi k/q)\int_{0}^{r^{2}/2s}dz\frac{%
z^{1/2}e^{-m^{2}r^{2}/2z}}{\sqrt{r^{2}-2zs}}e^{-2z\sin ^{2}(\pi k/q)}.
\label{j02Spreg}$$This expression is finite in the limit $s\rightarrow 0$ and for the renormalized VEV one finds$$\langle j^{2}(x)\rangle _{0,\text{ren}}=\frac{e}{8\pi r^{3}}\sum_{k=1}^{q-1}%
\frac{1+2mr\sin (\pi k/q)}{\sin (\pi k/q)}e^{-2mr\sin (\pi k/q)}.
\label{j02Spren}$$After the coordinate transformation $\phi ^{\prime }=q\phi $, this expression coincides with the result given in Ref. [@Beze10]. Note that for both charge density and azimuthal current one has $\langle j^{\nu
}(x)\rangle _{0,\text{ren}}/e\geqslant 0$. As expected, the renormalized current densities decay exponentially at distances larger than the Compton wavelength of the fermionic particle. In Fig. \[fig1\] the VEVs of the charge density and azimuthal current are plotted versus $mr$ for different values of $q$. The corresponding values of the parameter $\alpha $ are found from Eq. (\[alphaSpecial\]).
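The curves in Fig. \[fig1\] follow directly from the closed forms (\[j00Spren\]) and (\[j02Spren\]); a minimal sketch for the dimensionless combinations (NumPy assumed; the integer values of $q$ are illustrative):

```python
import numpy as np

def j0_dimless(q, mr):
    # r <j^0>_ren / (e m) from Eq. (j00Spren); q is an integer >= 2
    s = np.sin(np.pi * np.arange(1, q) / q)
    return np.sum(s * np.exp(-2.0 * mr * s)) / (4.0 * np.pi)

def j2_dimless(q, mr):
    # r^3 <j^2>_ren / e from Eq. (j02Spren)
    s = np.sin(np.pi * np.arange(1, q) / q)
    return np.sum((1.0 + 2.0 * mr * s) / s * np.exp(-2.0 * mr * s)) / (8.0 * np.pi)

for q in (2, 3, 4):
    print(q, j0_dimless(q, 1.0), j2_dimless(q, 1.0))  # positive, growing with q
```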
General case
------------
Now we turn to the general case for the parameters $q$ and $\alpha $. As it follows from Eqs. (\[j00reg1\]) and (\[j02reg1\]), we need to evaluate the integrals$$\int_{0}^{r^{2}/2s}dx\frac{x^{\pm 1/2}e^{-m^{2}r^{2}/2x}}{\sqrt{r^{2}-2xs}}%
e^{-x}\left[ \mathcal{I}(q,\alpha _{0},x)-\mathcal{I}(q,-\alpha _{0},x)%
\right] , \label{Integrals}$$in the limit $s\rightarrow 0$, with the function $\mathcal{I}(q,\alpha
_{0},x)$ defined by Eq. (\[seriesI0\]). Two alternative integral representations for this series are given in Appendix \[sec:IntRep\]. In order to provide an integral representation for the VEV of the fermionic current, we first consider the representation (\[seriesI3\]). In the limit $s\rightarrow 0$, the only divergent contributions to the integrals in Eq. (\[Integrals\]) with the separate functions $\mathcal{I}(q,\pm \alpha _{0},x)$ come from the first term on the right-hand side of Eq. (\[seriesI3\]). This term does not depend on $\alpha _{0}$ and, consequently, it is cancelled in the evaluation of the integral in Eq. (\[Integrals\]).
Hence, we see that the regularized expression (\[j02reg1\]) for the VEV of the azimuthal current is finite in the limit $s\rightarrow 0$. For the corresponding renormalized quantity we find$$\langle j^{2}(x)\rangle _{0,\text{ren}}=\frac{eqr^{-3}}{(2\pi )^{3/2}}%
\int_{0}^{\infty }dz\,z^{1/2}e^{-m^{2}r^{2}/2z-z}\sum_{\delta =\pm 1}\delta
\mathcal{I}(q,\delta \alpha _{0},z), \label{j02ren1}$$where the function $\mathcal{I}(q,\alpha _{0},z)$ is given by the integral representation (\[seriesI3\]). The VEV of the azimuthal current is a periodic function of the magnetic flux with a period equal to the magnetic flux quantum $\Phi _{0}$. It is an odd function of the parameter $\alpha
_{0} $ defined by Eq. (\[alf0\]). From Eq. (\[seriesI3\]) it is seen that the integrand in Eq. (\[j02ren1\]) decays exponentially for large values of $z$. Substituting the integral representation (\[seriesI3\]) into Eq. (\[j02ren1\]) and changing the order of integrations in the part with the last term of Eq. (\[seriesI3\]), the integral over $z$ is expressed in terms of the Macdonald function $K_{3/2}(y)$.
As a result, for the renormalized VEV of the azimuthal component we find the following expression$$\begin{aligned}
&& \langle j^{2}(x)\rangle _{0,\text{ren}} =\frac{e}{4\pi r^{3}}\Big\{%
\sum_{l=1}^{p}\frac{(-1)^{l}\sin (2\pi l\alpha _{0})}{\sin ^{2}(\pi l/q)}%
\frac{1+2mr\sin (\pi l/q)}{e^{2mr\sin (\pi l/q)}} \notag \\
&& \qquad -\frac{q}{4\pi }\int_{0}^{\infty }dy\frac{\sum_{\delta =\pm
1}\delta f(q,\delta \alpha _{0},y)}{\cosh (qy)-\cos (q\pi )}\frac{1+2mr\cosh
(y/2)}{\cosh ^{3}(y/2)e^{2mr\cosh (y/2)}}\Big\}, \label{j02ren2}\end{aligned}$$ where $p$ is an integer defined by $2p<q<2p+2$ and the function $f(q,\alpha
_{0},y)$ is given by Eq. (\[fqualf\]). In the case $q=2p$, the additional term $$-(-1)^{q/2}\sin (\pi q\alpha _{0})e^{-2mr}(1/2+mr), \label{AddTermj2}$$should be added to the expression in the curly braces of Eq. (\[j02ren2\]). For $1\leqslant q<2$, only the integral term remains. The difference of the functions $f(q,\alpha _{0},y)$, appearing in the integrand, can also be written in the form $$\sum_{\delta =\pm 1}\delta f(q,\delta \alpha
(y/2)\sum_{\delta =\pm 1}\delta \cos \left[ q\pi \left( 1/2-\delta \alpha
_{0}\right) \right] \cosh \left[ q\left( 1/2+\delta \alpha _{0}\right) y%
\right] . \label{Diff}$$Note that for $q=2p$ the integrand in the last term of Eq. (\[j02ren2\]) is finite at $y=0$.
In the massless limit the expression in the curly braces of Eq. (\[j02ren2\]) does not depend on the radial coordinate $r$, and the renormalized VEV of the azimuthal current behaves as $1/r^{3}$. For a massive field, at distances larger than the Compton wavelength of the spinor particle, $mr\gg
1 $, the VEV of the azimuthal current is suppressed by the factor $e^{-2mr}$ for $1\leqslant q<2$ and by the factor $e^{-2mr\sin (\pi /q)}$ for $%
q\geqslant 2$. In the limit $mr\ll 1$, the leading term in the corresponding asymptotic expansion coincides with the VEV for a massless field. In Fig. \[fig2\] we plot the VEV of the azimuthal current for a massless fermionic field as a function of the magnetic flux for several values of the parameter $q$ (numbers near the curves). In the limit $\alpha _{0}\rightarrow \pm 1/2$, $|\alpha _{0}|<1/2$, the VEV of the azimuthal current is obtained using relation (\[RelLim\]). From Eq. (\[j02ren1\]) one finds:$$\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{2}(x)\rangle _{0,\text{ren}%
}=\mp \frac{eqm}{2\pi ^{2}r^{2}}K_{1}(2mr), \label{Limj2}$$where $K_{\nu }(z)$ is the Macdonald function.
Now we turn to the VEV of the charge density. In the corresponding regularized expression (\[j00reg1\]), the first term on the right-hand side with the integral is finite in the limit $s\rightarrow 0$. The second term is written in the form$$\frac{eqe^{-r^{2}/2s}}{8\pi s}\sum_{\delta =\pm 1}\mathcal{I}(q,\delta
\alpha _{0},r^{2}/2s). \label{Term}$$Taking into account Eq. (\[seriesI3\]), we see that the only nonzero contribution to this term comes from the first term in the right-hand side of Eq. (\[seriesI3\]). This contribution diverges and does not depend on the parameters $q$ and $\alpha _{0}$. The divergence is the same as in the Minkowski spacetime in the absence of the magnetic flux and is subtracted in the renormalization procedure. As a result, for the renormalized VEV of the charge density we get the formula$$\langle j^{0}(x)\rangle _{0,\text{ren}}=\frac{eqm}{2(2\pi )^{3/2}r}%
\int_{0}^{\infty }dz\,z^{-1/2}e^{-m^{2}r^{2}/2z-z}\sum_{\delta =\pm 1}\delta
\mathcal{I}(q,\delta \alpha _{0},z). \label{j00ren1}$$This VEV is a periodic function of the magnetic flux with a period equal to the magnetic flux quantum. For $2p<q<2p+2$, with $p$ being an integer, using the integral representation (\[seriesI3\]), the renormalized VEV is presented in the form$$\begin{aligned}
&& \langle j^{0}(x)\rangle _{0,\text{ren}} =\frac{em}{2\pi r}\Big\{%
\sum_{l=1}^{p}(-1)^{l}\sin (2\pi l\alpha _{0})e^{-2mr\sin (\pi l/q)} \notag
\\
&&\qquad -\frac{q}{4\pi }\int_{0}^{\infty }dy\frac{\sum_{\delta =\pm
1}\delta f(q,\delta \alpha _{0},y)}{\cosh (qy)-\cos (q\pi )}\frac{%
e^{-2mr\cosh (y/2)}}{\cosh (y/2)}\Big\}. \label{j00ren2}\end{aligned}$$When $q=2p$, the additional term $$-(-1)^{q/2}\sin (\pi q\alpha _{0})e^{-2mr}/2, \label{AddTermj0}$$should be added to the expression in the curly braces on the right-hand side of Eq. (\[j00ren2\]). As in the case of the azimuthal current, the renormalized VEV (\[j00ren1\]) is an odd function of the parameter $\alpha
_{0}$. Note that in a boundary-free conical space the charge density for a massless field vanishes at points outside the magnetic flux. For a massive field, at distances larger than the Compton wavelength, $mr\gg 1$, the renormalized charge density is suppressed by the factor $e^{-2mr}$ for $%
1\leqslant q<2$ and by the factor $e^{-2mr\sin (\pi /q)}$ for $q\geqslant 2$.
Though the charge density given by Eq. (\[j00ren1\]) diverges at $r=0$, this divergence is integrable and the total fermionic charge$$Q=\phi _{0}\int_{0}^{\infty }dr\,r\langle j^{0}(x)\rangle _{0,\text{ren}}=%
\frac{e}{4}\int_{0}^{\infty }dx\,e^{-x}\sum_{\delta =\pm 1}\delta \mathcal{I}%
(q,\delta \alpha _{0},x), \label{Q}$$is finite. In the form given by Eq. (\[Q\]), the integrals with $\delta =1$ and $\delta =-1$ diverge separately and we cannot change the order of the integration and summation over $\delta $. In order to overcome this difficulty, we write the integral as $\int_{0}^{\infty }dx\,e^{-x}\cdots
=\lim_{s\rightarrow 1^{+}}\int_{0}^{\infty }dx\,e^{-sx}\cdots $. With this representation, evaluating the integrals with separate $\delta $ for $s>1$ and taking the limit $s\rightarrow 1$, one finds$$Q=-e\alpha _{0}/2. \label{Q1}$$This result for a conical space was previously obtained in Ref. [@Beze94]. As we see, the total charge does not depend on the angle deficit of the conical space. This property is a consequence of the fact that the total charge is a topologically invariant quantity depending only on the net flux.
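The limiting procedure just described can be reproduced numerically. Using $\int_{0}^{\infty }dx\,e^{-sx}I_{\nu }(x)=t^{\nu }/\sqrt{s^{2}-1}$ with $t=s-\sqrt{s^{2}-1}$, valid for $s>1$ and $\nu >-1$, the sum over $n$ in (\[seriesI1\]) becomes geometric; the sketch below (the test values of $q$, $\alpha _{0}$ and the offset of $s$ from 1 are arbitrary) then shows $Q/e\rightarrow -\alpha _{0}/2$:

```python
import numpy as np

def Q_over_e(q, a0, s):
    # Q/e at regulator s > 1: term-by-term Laplace transform of Eq. (Q),
    # with the geometric sum over n done in closed form (|a0| < 1/2)
    eps = np.sqrt(s**2 - 1.0)
    t = s - eps
    num = (t**(q * (a0 + 0.5) - 0.5) + t**(q * (0.5 - a0) + 0.5)
           - t**(q * (0.5 - a0) - 0.5) - t**(q * (0.5 + a0) + 0.5))
    return 0.25 * num / (eps * (1.0 - t**q))

print(Q_over_e(2.5, 0.3, 1.0 + 1e-8))   # ≈ -0.15 = -α_0/2
print(Q_over_e(1.5, -0.2, 1.0 + 1e-8))  # ≈  0.10
```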
In the absence of the angle deficit one has $q=1$, and the expressions for the VEVs of the fermionic current simplify to$$\begin{aligned}
\langle j^{0}(x)\rangle _{0,\text{ren}} &=&-\frac{em\sin \left( \pi \alpha
_{0}\right) }{2\pi ^{2}r}\int_{0}^{\infty }dz\,\frac{\cosh (2\alpha _{0}z)}{%
\cosh z}e^{-2mr\cosh z}, \notag \\
\langle j^{2}(x)\rangle _{0,\text{ren}} &=&-\frac{e\sin \left( \pi \alpha
_{0}\right) }{4\pi ^{2}r^{3}}\int_{0}^{\infty }dz\,\frac{\cosh (2\alpha
_{0}z)}{\cosh ^{3}z}\left( 1+2mr\cosh z\right) e^{-2mr\cosh z}. \label{FCq1}\end{aligned}$$Alternative expressions for the VEVs of the charge density and azimuthal current in (2+1)-dimensional Minkowski spacetime in the presence of a magnetic flux were given in Ref. [@Flek91] (see also [@Site99] for the general case of a one-parameter family of boundary conditions at the origin). We compare these expressions with the formulae obtained in the present paper in Appendix \[sec:App2New\].
In Fig. \[fig3\], the VEVs of the charge density (left panel) and azimuthal current (right panel) are plotted as functions of the magnetic flux for a massive fermionic field in a conical space with $\phi _{0}=\pi $. In the limit $\alpha _{0}\rightarrow \pm 1/2$, $|\alpha _{0}|<1/2$, for the azimuthal current one has Eq. (\[Limj2\]). For the charge density the limiting values are given by $$\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{0}(x)\rangle _{0,\text{ren}%
}=\mp \frac{eqm}{2\pi ^{2}r}K_{0}(2mr). \label{Limj0}$$Both the charge density and the azimuthal current exhibit jumps at half-integer values of the ratio of the magnetic flux to the flux quantum (for a similar structure of the persistent currents in carbon-nanotube-based rings see, for example, Ref. [@Lin98]).
Alternative expressions for the VEVs of the fermionic current are obtained by using the integral representation (\[Rep2\]) for the functions $%
\mathcal{I}(q,\pm \alpha _{0},z)$. We start with the charge density. The corresponding regularized expression is given by Eq. (\[j00reg1\]). As we have already noticed, the first term on the right of this formula is finite in the limit $s\rightarrow 0$. After the application of (\[Rep2\]) to the series in the second term, we see that the parts corresponding to the first and last terms in the right-hand side of (\[Rep2\]) vanish in the limit $%
s\rightarrow 0$ due to the exponential decay of the Macdonald functions. The only term which survives in the limit $s\rightarrow 0$ is the part corresponding to the second term in the right-hand side of Eq. (\[Rep2\]). In the expression for the fermionic current this term is multiplied by $q$ and hence it does not depend on the angle deficit and on the magnetic flux. So, this term is the same as in the case of Minkowski spacetime in the absence of the magnetic flux and is subtracted in the renormalization procedure. As a result, for the renormalized charge density one finds the expression below:$$\begin{aligned}
&&\langle j^{0}(x)\rangle _{0,\text{ren}}=-\frac{2em}{(2\pi )^{5/2}r}%
\int_{0}^{\infty }dz\,z^{-1/2}e^{-m^{2}r^{2}/2z-z} \notag \\
&&\quad \times \Big[\text{sgn}(\alpha _{0})qB(q(|\alpha
_{0}|-1/2)+1/2,z)+2\int_{0}^{\infty }dy\,K_{iy}(z)g(q,\alpha _{0},y)\Big],
\label{j00renb}\end{aligned}$$where $$B(y,z)=\left\{
\begin{array}{cc}
0, & y\leqslant 0, \\
\sin (\pi y)K_{y}(z), & y>0,%
\end{array}%
\right. \label{Byx}$$and we have defined the function$$g(q,\alpha _{0},y)=\sum_{\delta =\pm 1}{\mathrm{Re}}\left[ \frac{\delta
\sinh (y\pi )}{e^{2\pi (y+i|q\delta \alpha _{0}-1/2|)/q}+1}\right] .
\label{gqalf}$$
For the VEV of the azimuthal current, in a similar way, we get the representation$$\begin{aligned}
&&\langle j^{2}(x)\rangle _{0,\text{ren}}=-\frac{4er^{-3}}{(2\pi )^{5/2}}%
\int_{0}^{\infty }dz\,z^{1/2}e^{-m^{2}r^{2}/2z-z} \notag \\
&&\quad \times \Big[\text{sgn}(\alpha _{0})qB(q(|\alpha
_{0}|-1/2)+1/2,z)+2\int_{0}^{\infty }dy\,K_{iy}(z)g(q,\alpha _{0},y)\Big].
\label{j02renb}\end{aligned}$$Note that under the condition (\[condalf0\]) there are no square integrable irregular modes and, in this case, the first terms in the square brackets of Eqs. (\[j00renb\]) and (\[j02renb\]) vanish.
In the special case $q=1$ we see that $g(1,\alpha _{0},y)=0$, and for the VEVs of the fermionic current we obtain the formulae$$\begin{aligned}
\langle j^{0}(x)\rangle _{0,\text{ren}} &=&-\frac{2em\sin (\pi \alpha _{0})}{%
(2\pi )^{5/2}r}\int_{0}^{\infty }dz\,e^{-m^{2}r^{2}/2z-z}\frac{K_{\alpha
_{0}}(z)}{\sqrt{z}}, \notag \\
\langle j^{2}(x)\rangle _{0,\text{ren}} &=&-\frac{4e\sin (\pi \alpha _{0})}{%
(2\pi )^{5/2}r^{3}}\int_{0}^{\infty }dz\,\sqrt{z}e^{-m^{2}r^{2}/2z-z}K_{%
\alpha _{0}}(z). \label{j02renbm0}\end{aligned}$$In the limit $\alpha _{0}\rightarrow \pm 1/2$ we recover results (\[Limj2\]) and (\[Limj0\]). In Appendix \[sec:App2New\], we show the equivalence of these expressions to the ones previously given in the literature for the fermionic densities induced by a magnetic flux in (2+1)-dimensional Minkowski spacetime.
In the discussion above we used the irreducible representation of the Clifford algebra corresponding to Eq. (\[DirMat\]). For the second representation the renormalized VEV of the azimuthal current is given by the same expressions, whereas the expressions for the renormalized VEV of the charge density change sign. Consequently, the total induced charge (\[Q1\]) changes sign as well.
Induced fermionic current in the exterior region {#sec:ExtFC}
================================================
Now we turn to the investigation of the induced fermionic current in the presence of a circular boundary at $r=a$ with boundary condition (\[BCMIT\]). The corresponding VEV is evaluated using the mode sum formula ([FCMode]{}) with the eigenspinors given by Eq. (\[psisig-\]). For the further discussion it is convenient to write these eigenspinors in an equivalent form by using the properties (\[gsig1\]):$$\begin{aligned}
\psi _{\gamma j}^{(+)}(x) &=&c_{0}e^{iqj\phi -iEt}\left(
\begin{array}{c}
g_{\beta _{j},\beta _{j}}(\gamma a,\gamma r)e^{-iq\phi /2} \\
\frac{\gamma \epsilon _{j}e^{iq\phi /2}}{E+m}g_{\beta _{j},\beta
_{j}+\epsilon _{j}}(\gamma a,\gamma r)%
\end{array}%
\right) , \notag \\
\psi _{\gamma j}^{(-)}(x) &=&c_{0}e^{-iqj\phi +iEt}\left(
\begin{array}{c}
\frac{\gamma \epsilon _{j}e^{-iq\phi /2}}{E+m}g_{\beta _{j},\beta
_{j}+\epsilon _{j}}(\gamma a,\gamma r) \\
g_{\beta _{j},\beta _{j}}(\gamma a,\gamma r)e^{iq\phi /2}%
\end{array}%
\right) , \label{psiplm}\end{aligned}$$where $c_{0}$ is given by Eq. (\[c0\]) with the replacement $|\lambda
_{n}|\rightarrow \beta _{j}$. In Eq. (\[psiplm\]), $\epsilon _{j}$ is defined by (\[epsj\]) for $j\neq -\alpha $ and $\epsilon _{j}=-1$ for $%
j=-\alpha $. We could also obtain the representation (\[psiplm\]) by taking, instead of Eq. (\[Zsig\]), the linear combination of the functions $%
J_{\beta _{j}}(\gamma r)$ and $Y_{\beta _{j}}(\gamma r)$. Note that the corresponding barred notation in expression (\[gsig\]) may also be written in the form$$\bar{F}_{\beta _{j}}^{(\pm )}(z)=-\epsilon _{j}zF_{\beta _{j}+\epsilon
_{j}}(z)\pm (\sqrt{z^{2}+\mu ^{2}}+\mu )F_{\beta _{j}}(z), \label{barnot3}$$with $F=J,Y$ and $\mu =ma$.
As in the boundary-free case, the VEV of the radial component vanishes and for the charge density and the azimuthal current we have the expressions $$\begin{aligned}
\langle j^{0}(x)\rangle &=&\frac{eq}{4\pi }\sum_{j}\int_{0}^{\infty }d\gamma
\frac{\gamma }{E}\frac{(E-m)g_{\beta _{j},\beta _{j}+\epsilon
_{j}}^{2}(\gamma a,\gamma r)+(E+m)g_{\beta _{j},\beta _{j}}^{2}(\gamma
a,\gamma r)}{\bar{J}_{\beta _{j}}^{(-)2}(\gamma a)+\bar{Y}_{\beta
_{j}}^{(-)2}(\gamma a)}, \notag \\
\langle j^{2}(x)\rangle &=&\frac{eq}{2\pi r}\sum_{j}\epsilon
_{j}\int_{0}^{\infty }d\gamma \frac{\gamma ^{2}}{E}\frac{g_{\beta _{j},\beta
_{j}}(\gamma a,\gamma r)g_{\beta _{j},\beta _{j}+\epsilon _{j}}(\gamma
a,\gamma r)}{\bar{J}_{\beta _{j}}^{(-)2}(\gamma a)+\bar{Y}_{\beta
_{j}}^{(-)2}(\gamma a)}. \label{j02Ext}\end{aligned}$$Here, as before, the summation goes over $j=\pm 1/2,\pm 3/2,\ldots $. Equivalent forms are obtained using the eigenspinors (\[psisig-\]). From Eq. (\[j02Ext\]) it follows that the fermionic current is a periodic function of $\alpha $ with the period equal to 1. We assume that a cutoff function is introduced without explicitly writing it. The specific form of this function is not important for the discussion below.
In the presence of the boundary, the VEV of the fermionic current can be decomposed as $$\langle j^{\nu }(x)\rangle =\langle j^{\nu }(x)\rangle _{0}+\langle j^{\nu
}(x)\rangle _{\text{b}}, \label{jdecomp}$$where $\langle j^{\nu }(x)\rangle _{\text{b}}$ is the part induced by the boundary. In order to extract the latter explicitly, we subtract from Eq. (\[j02Ext\]) the VEVs when the boundary is absent. If the ratio of the magnetic flux to the flux quantum is not a half-integer, the boundary-free parts are given by Eq. (\[j020\]). In this case, for the evaluation of the difference, in the expression of the charge density we use the identity$$\frac{g_{\beta _{j},\lambda }^{2}(x,y)}{\bar{J}_{\beta _{j}}^{(-)2}(x)+\bar{Y%
}_{\beta _{j}}^{(-)2}(x)}-J_{\lambda }^{2}(y)=-\frac{1}{2}\sum_{l=1,2}\frac{%
\bar{J}_{\beta _{j}}^{(-)}(x)}{\bar{H}_{\beta _{j}}^{(-,l)}(x)}H_{\lambda
}^{(l)2}(y), \label{ident1}$$with $\lambda =\beta _{j},\beta _{j}+\epsilon _{j}$, and with the Hankel functions $H_{\nu }^{(l)}(x)$. The expression for the boundary-induced part in the VEV of the charge density takes the form$$\begin{aligned}
&&\langle j^{0}(x)\rangle _{\text{b}}=-\frac{eq}{8\pi }\sum_{j}\sum_{l=1,2}%
\int_{0}^{\infty }d\gamma \frac{\gamma }{E}\frac{\bar{J}_{\beta
_{j}}^{(-)}(\gamma a)}{\bar{H}_{\beta _{j}}^{(-,l)}(\gamma a)} \notag \\
&&\qquad \times \left[ (E-m)H_{\beta _{j}+\epsilon _{j}}^{(l)2}(\gamma
r)+(E+m)H_{\beta _{j}}^{(l)2}(\gamma r)\right] . \label{j001}\end{aligned}$$Now, in the complex plane $\gamma $, we rotate the integration contour by the angle $\pi /2$ for the term with $l=1$ and by the angle $-\pi /2$ for the term with $l=2$. The integrals over the segments $(0,im)$ and $(0,-im)$ cancel each other and, introducing the modified Bessel functions, we get the following expression$$\begin{aligned}
&&\langle j^{0}(x)\rangle _{\text{b}}=-\frac{eq}{2\pi ^{2}}%
\sum_{j}\int_{m}^{\infty }dz\,z \notag \\
&&\quad \times \Big\{m\frac{K_{\beta _{j}}^{2}(zr)+K_{\beta _{j}+\epsilon
_{j}}^{2}(zr)}{\sqrt{z^{2}-m^{2}}}{\mathrm{Re}}\left[ I_{\beta
_{j}}^{(-)}(za)/K_{\beta _{j}}^{(-)}(za)\right] \notag \\
&&\quad -\left[ K_{\beta _{j}}^{2}(zr)-K_{\beta _{j}+\epsilon _{j}}^{2}(zr)%
\right] {\mathrm{Im}}\left[ I_{\beta _{j}}^{(-)}(za)/K_{\beta _{j}}^{(-)}(za)%
\right] \Big\}, \label{j002b}\end{aligned}$$where we use the notation (the notation with the upper sign is used in the next section)$$F^{(\pm )}(z)=zF^{\prime }(z)+\left( \pm \mu \pm i\sqrt{z^{2}-\mu ^{2}}%
-\epsilon _{j}\beta _{j}\right) F(z). \label{F+}$$
The ratio in the integrand of Eq. (\[j002b\]) can also be written in the form$$\frac{I_{\beta _{j}}^{(-)}(x)}{K_{\beta _{j}}^{(-)}(x)}=\frac{W_{\beta
_{j},\beta _{j}+\epsilon _{j}}^{(-)}(x)+i\sqrt{1-\mu ^{2}/x^{2}}}{x[K_{\beta
_{j}}^{2}(x)+K_{\beta _{j}+\epsilon _{j}}^{2}(x)]+2\mu K_{\beta
_{j}}(x)K_{\beta _{j}+\epsilon _{j}}(x)}, \label{IKratio}$$with the notation$$\begin{aligned}
W_{\beta _{j},\beta _{j}+\epsilon _{j}}^{(\pm )}(x) &=&x\left[ I_{\beta
_{j}}(x)K_{\beta _{j}}(x)-I_{\beta _{j}+\epsilon _{j}}(x)K_{\beta
_{j}+\epsilon _{j}}(x)\right] \notag \\
&&\pm \mu \left[ I_{\beta _{j}+\epsilon _{j}}(x)K_{\beta _{j}}(x)-I_{\beta
_{j}}(x)K_{\beta _{j}+\epsilon _{j}}(x)\right] . \label{Wbet}\end{aligned}$$The real and imaginary parts appearing in Eq. (\[j002b\]) are easily obtained from Eq. (\[IKratio\]). Note that under the change $\alpha
\rightarrow -\alpha $, $j\rightarrow -j$, we have $\beta _{j}\rightarrow
\beta _{j}+\epsilon _{j}$, $\beta _{j}+\epsilon _{j}\rightarrow \beta _{j}$. From here it follows that the real/imaginary part in Eq. (\[IKratio\]) is an odd/even function under this change. Now, from Eq. (\[j002b\]) we see that the boundary-induced part in the VEV is an odd function of $\alpha $. When $\alpha $ is a half-integer, in the term of Eq. (\[j002b\]) with $%
j=-\alpha $ the orders of the modified Bessel functions are equal to $\pm
1/2 $. Using the expressions of these functions in terms of the elementary functions, we can see that the corresponding integral vanishes. For the terms with $j\neq -\alpha $, the contributions of $j<-\alpha $ and $%
j>-\alpha $ to the right-hand side of Eq. (\[j002b\]) cancel each other. Hence, if $\alpha $ is a half-integer the boundary-induced part in the VEV of the charge density vanishes. Recall that the same is the case for the boundary-free part.
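The vanishing of the $j=-\alpha $ term uses the elementary form $K_{\pm 1/2}(x)=\sqrt{\pi /2x}\,e^{-x}$, so that the difference $K_{\beta _{j}}^{2}-K_{\beta _{j}+\epsilon _{j}}^{2}$ in Eq. (\[j002b\]) is identically zero for that mode; a quick check (SciPy assumed):

```python
import numpy as np
from scipy.special import kv

x = np.linspace(0.2, 5.0, 10)
elementary = np.sqrt(np.pi / (2.0 * x)) * np.exp(-x)   # K_{±1/2}(x)
print(np.max(np.abs(kv(0.5, x) - elementary)),
      np.max(np.abs(kv(-0.5, x) - elementary)))        # both ≈ 0
```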
In a similar way, using the identity$$\frac{g_{\beta _{j},\beta _{j}}(x,y)g_{\beta _{j},\beta _{j}+\epsilon _{j}}(x,y)}{\bar{J}_{\beta
_{j}}^{(-)2}(x)+\bar{Y}_{\beta _{j}}^{(-)2}(x)}=J_{\beta _{j}}(y)J_{\beta
_{j}+\epsilon _{j}}(y)-\frac{1}{2}\sum_{l=1,2}\frac{\bar{J}_{\beta
_{j}}^{(-)}(x)}{\bar{H}_{\beta _{j}}^{(-,l)}(x)}H_{\beta
_{j}}^{(l)}(y)H_{\beta _{j}+\epsilon _{j}}^{(l)}(y), \label{ident2}$$for the boundary-induced part in the VEV of the azimuthal current we find$$\begin{aligned}
&&\langle j^{2}(x)\rangle _{\text{b}}=-\frac{eq}{\pi ^{2}r}%
\sum_{j}\int_{m}^{\infty }dz\frac{z^{2}}{\sqrt{z^{2}-m^{2}}} \notag \\
&&\qquad \times K_{\beta _{j}}(zr)K_{\beta _{j}+\epsilon _{j}}(zr){\mathrm{Re%
}}\left[ I_{\beta _{j}}^{(-)}(za)/K_{\beta _{j}}^{(-)}(za)\right] .
\label{j21}\end{aligned}$$As in the case of charge density, this part is a periodical function of the magnetic flux with a period equal the flux quantum.
If we write the ratio of the magnetic flux to the flux quantum in the form (\[alf0\]), then the boundary-induced VEVs are functions of $\alpha _{0}$ alone and are odd functions of this parameter. In the limit $\alpha
_{0}\rightarrow \pm 1/2$, $|\alpha _{0}|<1/2$, the only nonzero contribution to $\langle j^{\nu }(x)\rangle _{\text{b}}$ comes from the term with $j=\mp
1/2$ and one has the limiting values$$\begin{aligned}
\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{0}(x)\rangle _{\text{b}}
&=&\pm \frac{eqm}{2\pi ^{2}r}K_{0}(2mr), \notag \\
\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{2}(x)\rangle _{\text{b}}
&=&\pm \frac{eqm}{2\pi ^{2}r^{2}}K_{1}(2mr). \label{Limj02b}\end{aligned}$$Now comparing with Eqs. (\[Limj2\]) and (\[Limj0\]), we see that the limiting value of the total current density (\[jdecomp\]) is zero and the latter is continuous at $\alpha _{0}=\pm 1/2$. This result was expected due to the continuity of the exterior eigenspinors as functions of the parameter $\alpha _{0}$. Comparing with the results of the previous section, we see that the limiting transitions $a\rightarrow 0$ and $|\alpha _{0}|\rightarrow
1/2$ do not commute.
For a massless field the expressions for the boundary-induced parts in the VEVs take the form$$\begin{aligned}
\langle j^{0}(x)\rangle _{\text{b}} &=&\frac{eq}{2\pi ^{2}a^{2}}%
\sum_{j}\int_{0}^{\infty }dz\,\frac{K_{\beta _{j}}^{2}(zr/a)-K_{\beta
_{j}+\epsilon _{j}}^{2}(zr/a)}{K_{\beta _{j}}^{2}(z)+K_{\beta _{j}+\epsilon
_{j}}^{2}(z)}, \notag \\
\langle j^{2}(x)\rangle _{\text{b}} &=&-\frac{eq}{\pi ^{2}a^{2}r}%
\sum_{j}\int_{0}^{\infty }dz\,\frac{K_{\beta _{j}}(zr/a)K_{\beta
_{j}+\epsilon _{j}}(zr/a)}{K_{\beta _{j}}^{2}(z)+K_{\beta _{j}+\epsilon
_{j}}^{2}(z)}W_{\beta _{j},\beta _{j}+\epsilon _{j}}^{(-)}(z),
\label{J02Extm0}\end{aligned}$$with the notation defined by Eq. (\[Wbet\]). We would like to point out that the boundary-induced charge density does not vanish for a massless field. The corresponding boundary-free part vanishes and, hence, $\langle
j^{0}(x)\rangle =\langle j^{0}(x)\rangle _{\text{b}}$.
Now we turn to the investigation of the boundary-induced part in the VEV of fermionic current in the asymptotic regions of the parameters. In the limit $%
a\rightarrow 0$, for fixed values of $r$, by taking into account that $$\frac{I_{\beta _{j}}^{(-)}(za)}{K_{\beta _{j}}^{(-)}(za)}\approx a\frac{%
\epsilon _{j}m+i\sqrt{z^{2}-m^{2}}}{\Gamma ^{2}(q|j+\alpha |+1/2)}%
(za/2)^{2q|j+\alpha |-1}, \label{IKpla0}$$to the leading order, from Eqs. (\[j002b\]) and (\[j21\]), we have$$\begin{aligned}
\langle j^{0}(x)\rangle _{\text{b}} &\approx &\frac{eq}{\pi ^{2}}\frac{\text{%
sgn}(\alpha _{0})(a/2r)^{2q_{\alpha }}}{r^{2}\Gamma ^{2}(q_{\alpha }+1/2)}%
\int_{mr}^{\infty }dz\,\frac{z^{2q_{\alpha }}}{\sqrt{z^{2}-m^{2}r^{2}}}
\notag \\
&&\times \left[ \left( 2m^{2}r^{2}-z^{2}\right) K_{q_{\alpha
}-1/2}^{2}(z)+z^{2}K_{q_{\alpha }+1/2}^{2}(z)\right] , \label{j0bExta0} \\
\langle j^{2}(x)\rangle _{\text{b}} &\approx &\frac{2eqm}{\pi ^{2}r^{2}}%
\frac{\text{sgn}(\alpha _{0})(a/2r)^{2q_{\alpha }}}{\Gamma ^{2}(q_{\alpha
}+1/2)}\int_{mr}^{\infty }dz\frac{z^{2q_{\alpha }+1}}{\sqrt{z^{2}-m^{2}r^{2}}%
}K_{q_{\alpha }-1/2}(z)K_{q_{\alpha }+1/2}(z), \notag\end{aligned}$$with the notation$$q_{\alpha }=q(1/2-|\alpha _{0}|). \label{qalfa}$$For a massless field the asymptotic behavior of the charge density is directly obtained from Eq. (\[j0bExta0\]). The integrals involving the Macdonald function are evaluated in terms of the gamma function and one finds$$\langle j^{0}(x)\rangle _{\text{b}}\approx \frac{eq\,\text{sgn}(\alpha _{0})%
}{\pi r^{2}}\left( \frac{a}{2r}\right) ^{2q_{\alpha }}\frac{q_{\alpha
}\Gamma (2q_{\alpha }+1/2)\Gamma (q_{\alpha }+1)}{(2q_{\alpha }+1)\Gamma
^{3}(q_{\alpha }+1/2)}. \label{j0bExta0m0}$$For the azimuthal component the leading term in Eq. (\[j0bExta0\]) vanishes. The corresponding asymptotic behavior is directly found from Eq. (\[J02Extm0\]). The leading term is given by $$\langle j^{2}(x)\rangle _{\text{b}}\approx \frac{2eq}{\pi r^{3}}\frac{\text{%
sgn}(\alpha _{0})}{(2q_{\alpha })^{2}-1}\left( \frac{a}{2r}\right)
^{2q_{\alpha }+1}\frac{\Gamma (2q_{\alpha }+3/2)\Gamma (q_{\alpha }+1)}{%
(2q_{\alpha }+1)\Gamma ^{3}(q_{\alpha }+1/2)}, \label{j2bExtm0}$$for $q_{\alpha }>1/2$, and by the expression$$\langle j^{2}(x)\rangle _{\text{b}}\approx -\frac{eq\,\text{sgn}(\alpha _{0})%
}{2^{2q_{\alpha }+1}\pi r^{3}}\left( \frac{a}{2r}\right) ^{4q_{\alpha }}%
\frac{\Gamma (1/2-q_{\alpha })}{\Gamma ^{4}(1/2+q_{\alpha })}\Gamma
(2q_{\alpha }+1/2)\Gamma (3q_{\alpha }+1), \label{j2bExtm0b}$$in the case $q_{\alpha }<1/2$. For $q_{\alpha }=1/2$, the leading term behaves as $a^{2}\ln (a)$.
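The closed form (\[j0bExta0m0\]) admits a simple numerical cross-check: setting $m=0$ in Eq. (\[j0bExta0\]), the remaining $z$-integral of squared Macdonald functions can be compared by quadrature against the gamma-function expression. In the sketch below the overall factor $eq\,\text{sgn}(\alpha _{0})(a/2r)^{2q_{\alpha }}/r^{2}$ is stripped off and $q_{\alpha }$ is treated as a free parameter.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

def lhs(qa):
    # z-integral of Eq. (j0bExta0) at m = 0, divided by pi^2 Gamma^2(qa + 1/2)
    f = lambda z: z**(2*qa + 1) * (kv(qa + 0.5, z)**2 - kv(qa - 0.5, z)**2)
    return quad(f, 0.0, np.inf, limit=200)[0] / (np.pi**2 * gamma(qa + 0.5)**2)

def rhs(qa):
    # closed form of Eq. (j0bExta0m0), with the same overall factor stripped
    return qa * gamma(2*qa + 0.5) * gamma(qa + 1) / (np.pi * (2*qa + 1) * gamma(qa + 0.5)**3)

for qa in (0.3, 0.6, 1.1):
    print(qa, lhs(qa), rhs(qa))
```

The two sides agree to quadrature accuracy, consistent with the standard formula for moments of $K_{\nu }^{2}$.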
At large distances from the boundary, for a massive field, under the condition $mr\gg 1$, the dominant contribution to the integrals comes from the region near the lower limit of integration and to the leading order we find$$\begin{aligned}
\langle j^{0}(x)\rangle _{\text{b}} &\approx &-\frac{eqm^{2}e^{-2rm}}{4\sqrt{%
\pi }(rm)^{3/2}}\sum_{j}{\mathrm{Re}}\left[ I_{\beta
_{j}}^{(-)}(ma)/K_{\beta _{j}}^{(-)}(ma)\right] , \notag \\
\langle j^{2}(x)\rangle _{\text{b}} &\approx &-\frac{eqm^{3}e^{-2rm}}{4\sqrt{%
\pi }(mr)^{5/2}}\sum_{j}{\mathrm{Re}}\left[ I_{\beta
_{j}}^{(-)}(ma)/K_{\beta _{j}}^{(-)}(ma)\right] . \label{j02LargeDist}\end{aligned}$$As expected, we have an exponential suppression of the boundary-induced VEVs. For a massless field, the asymptotics at large distances are given by Eqs. (\[j0bExta0m0\])-(\[j2bExtm0b\]). In Fig. \[fig4\], we plot the VEVs of the charge density (left panel) and azimuthal current (right panel) for a massless fermionic field as functions of the magnetic flux. The graphs are plotted for $r/a=1.5$ and for several values of the parameter $q$. As we have already mentioned, in the exterior region the total VEVs of the charge density and azimuthal current vanish in the limit $|\alpha _{0}|\rightarrow 0$. Note that for a massless field the boundary-free part in the VEV of the charge density vanishes and the non-zero charge density on the left plot is induced by the circular boundary.
Fermionic current inside a circular boundary {#sec:Int}
============================================
In this section we consider the region inside a circular boundary with radius $a$, $r<a$, on which the fermionic field obeys the boundary condition (\[BCMIT\]) with $n_{\mu }=-\delta _{\mu }^{1}$. The boundary condition at the cone apex for the irregular mode is the same as that we have used in Section \[sec:BoundFree\] for the boundary-free conical geometry. The eigenspinors in this region have the form$$\begin{aligned}
\psi _{\gamma j}^{(+)} &=&\varphi _{0}e^{iqj\phi -iEt}\left(
\begin{array}{c}
e^{-iq\phi /2}J_{\beta _{j}}(\gamma r) \\
\frac{\gamma \epsilon _{j}e^{iq\phi /2}}{E+m}J_{\beta _{j}+\epsilon
_{j}}(\gamma r)%
\end{array}%
\right) , \notag \\
\psi _{\gamma j}^{(-)} &=&\varphi _{0}e^{-iqj\phi +iEt}\left(
\begin{array}{c}
\frac{\epsilon _{j}\gamma e^{-iq\phi /2}}{E+m}J_{\beta _{j}+\epsilon
_{j}}(\gamma r) \\
e^{iq\phi /2}J_{\beta _{j}}(\gamma r)%
\end{array}%
\right) , \label{psiInt}\end{aligned}$$with the same notations as in Section \[sec:BoundFree\]. From the boundary condition at $r=a$ we find that the eigenvalues of $\gamma $ are solutions of the equation$$J_{\beta _{j}}(\gamma a)-\frac{\gamma \epsilon _{j}}{E+m}J_{\beta
_{j}+\epsilon _{j}}(\gamma a)=0. \label{gamVal}$$Note that this equation may also be written in the form $\bar{J}_{\beta
_{j}}^{(+)}(\gamma a)=0$ with the barred notation defined by Eq. (\[barnot2\]). For a given $\beta _{j}$, Eq. (\[gamVal\]) has an infinite number of solutions which we denote by $\gamma a=\gamma _{\beta _{j},l}$, $%
l=1,2,\ldots $.
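Because Eq. (\[gamVal\]) is transcendental, the eigenvalues $\gamma _{\beta _{j},l}$ must be found numerically in practice. A minimal sketch (with $a=1$, and $\beta $, $\epsilon $, $\mu =ma$ as free inputs; the parameter values below are chosen purely for illustration) scans for sign changes and refines them with Brent's method.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

def eigenvalues(beta, eps, mu, nmax):
    """First nmax positive roots y = gamma*a of Eq. (gamVal), written as
    J_beta(y) - eps*y/(sqrt(y^2 + mu^2) + mu) * J_{beta+eps}(y) = 0, mu = m*a."""
    f = lambda y: jv(beta, y) - eps * y / (np.sqrt(y*y + mu*mu) + mu) * jv(beta + eps, y)
    roots, y, dy = [], 1e-4, 0.02
    while len(roots) < nmax:
        if f(y) * f(y + dy) < 0.0:
            roots.append(brentq(f, y, y + dy))
        y += dy
    return np.array(roots)

# first few modes for beta = 1, eps = +1, mu = 0.5 (illustrative values)
print(eigenvalues(1.0, 1, 0.5, 4))
```

Since consecutive roots are separated by roughly $\pi $, a coarse scan step is sufficient to bracket every root.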
The normalization coefficient in Eq. (\[psiInt\]) is determined from the condition $$\int_{0}^{a}dr\int_{0}^{\phi _{0}}d\phi \,r\psi _{\gamma j}^{(\pm )\dagger
}\psi _{\gamma ^{\prime }j^{\prime }}^{(\pm )}=\delta _{ll^{\prime }}\delta
_{jj^{\prime }}\ . \label{NormInt}$$Using the standard integral for the square of the Bessel function (see, for example, [@Prud86]), one finds$$\varphi _{0}^{2}=\frac{a^{2}y^{2}}{\phi _{0}J_{\beta _{j}}^{2}(y)}\left[
2(y^{2}+\mu ^{2})-\left( 2\epsilon _{j}\beta _{j}+1\right) \sqrt{y^{2}+\mu
^{2}}+\mu \right] ^{-1},\;y=\gamma _{\beta _{j},l}, \label{NormCoefInt}$$where, as before, $\mu =ma$. For further convenience we write this expression in the form$$\varphi _{0}^{2}=\frac{yT_{\beta _{j}}(y)}{2\phi _{0}a^{2}}\frac{\mu +\sqrt{%
y^{2}+\mu ^{2}}}{\sqrt{y^{2}+\mu ^{2}}}, \label{phi0T}$$with the notation$$T_{\beta _{j}}(y)=\frac{y}{J_{\beta _{j}}^{2}(y)}\left[ y^{2}+\left( \mu
-\epsilon _{j}\beta _{j}\right) \left( \mu +\sqrt{y^{2}+\mu ^{2}}\right) -%
\frac{y^{2}}{2\sqrt{y^{2}+\mu ^{2}}}\right] ^{-1}. \label{Tnu}$$
Substituting the eigenspinors (\[psiInt\]) into the mode sum formula$$\langle j^{\nu }(x)\rangle =e\sum_{j}\sum_{l=1}^{\infty }\,\bar{\psi}%
_{\gamma j}^{(-)}(x)\gamma ^{\nu }\psi _{\gamma j}^{(-)}(x),
\label{modesumInt}$$for the VEVs of separate components of fermionic current we have$$\begin{aligned}
\langle j^{0}(x)\rangle &=&\frac{eq}{4\pi a^{2}}\sum_{j}\sum_{l=1}^{\infty
}yT_{\beta _{j}}(y)\Big[\Big(\frac{\mu }{\sqrt{y^{2}+\mu ^{2}}}+1\Big)%
J_{\beta _{j}}^{2}(yr/a)-\Big(\frac{\mu }{\sqrt{y^{2}+\mu ^{2}}}-1\Big)%
J_{\beta _{j}+\epsilon _{j}}^{2}(yr/a)\Big], \notag \\
\langle j^{2}(x)\rangle &=&\frac{eq}{2\pi a^{2}r}\sum_{j}\sum_{l=1}^{\infty }%
\frac{\epsilon _{j}y^{2}T_{\beta _{j}}(y)}{\sqrt{y^{2}+\mu ^{2}}}J_{\beta
_{j}}(yr/a)J_{\beta _{j}+\epsilon _{j}}(yr/a), \label{j2Int}\end{aligned}$$with $y=\gamma _{\beta _{j},l}$, and the radial component vanishes, $\langle
j^{1}(x)\rangle =0$. As before, we assume the presence of a cutoff function without explicitly writing it.
As the eigenvalues $\gamma _{\beta _{j},l}$ are not known in closed form, Eqs. (\[j2Int\]) are not convenient for the direct evaluation of the VEVs. In addition, the separate terms in the mode sum are highly oscillatory for large values of the quantum numbers. In order to find a convenient integral representation, we apply to the series over $l$ the summation formula (see [@Saha04; @Saha08Book])$$\begin{aligned}
&&\sum_{l=1}^{\infty }f(\gamma _{\beta _{j},l})T_{\beta }(\gamma _{\beta
_{j},l})=\int_{0}^{\infty }dx\,f(x)-\frac{1}{\pi }\int_{0}^{\infty }dx
\notag \\
&&\quad \times \bigg[e^{-\beta _{j}\pi i}f(xe^{\pi i/2})\frac{K_{\beta
_{j}}^{(+)}(x)}{I_{\beta _{j}}^{(+)}(x)}+e^{\beta _{j}\pi i}f(xe^{-\pi i/2})%
\frac{K_{\beta _{j}}^{(+)\ast }(x)}{I_{\beta _{j}}^{(+)\ast }(x)}\bigg],
\label{SumForm}\end{aligned}$$the asterisk meaning complex conjugate. Here the notation $F^{(+)}(x)$ for a given function $F(x)$ is defined by Eq. (\[F+\]) for $x\geqslant \mu $ and by the relation $$F^{(+)}(x)=xF^{\prime }(x)+(\mu +\sqrt{\mu ^{2}-x^{2}}-\epsilon _{j}\beta
_{j})F(x), \label{F+2}$$for $x<\mu $. Note that in the latter case $F^{(+)\ast }(x)=F^{(+)}(x)$. The term in the VEVs corresponding to the first integral in the right-hand side of Eq. (\[SumForm\]) coincides with the VEV of the fermionic current for the situation where the boundary is absent.
As a result, the VEV of the fermionic current is presented in the decomposed form (\[jdecomp\]). For the function $f(x)$ corresponding to Eq. (\[j2Int\]), in the second term on the right-hand side of Eq. (\[SumForm\]), the part of the integral over the region $(0,\mu )$ vanishes. Consequently, the boundary-induced contribution for the charge density in the region inside the circle is given by the expression $$\begin{aligned}
&&\langle j^{0}(x)\rangle _{\text{b}}=-\frac{eq}{2\pi ^{2}}%
\sum_{j}\int_{m}^{\infty }dz\,z \notag \\
&&\qquad \times \Big\{m\frac{I_{\beta _{j}}^{2}(zr)+I_{\beta _{j}+\epsilon
_{j}}^{2}(zr)}{\sqrt{z^{2}-m^{2}}}{\mathrm{Re}}[K_{\beta
_{j}}^{(+)}(za)/I_{\beta _{j}}^{(+)}(za)] \notag \\
&&\qquad -[I_{\beta _{j}}^{2}(zr)-I_{\beta _{j}+\epsilon _{j}}^{2}(zr)]{%
\mathrm{Im}}[K_{\beta _{j}}^{(+)}(za)/I_{\beta _{j}}^{(+)}(za)]\Big\}.
\label{j0int1}\end{aligned}$$Similarly, for the boundary-induced part in the azimuthal component we find$$\langle j^{2}(x)\rangle _{\text{b}}=\frac{eq}{\pi ^{2}r}\sum_{j}\int_{m}^{%
\infty }dz\,\frac{z^{2}I_{\beta _{j}}(zr)I_{\beta _{j}+\epsilon _{j}}(zr)}{%
\sqrt{z^{2}-m^{2}}}{\mathrm{Re}}[K_{\beta _{j}}^{(+)}(za)/I_{\beta
_{j}}^{(+)}(za)]. \label{j2int1}$$For points away from the circular boundary, the boundary-induced contributions (\[j0int1\]) and (\[j2int1\]) are finite and the renormalization is reduced to that for the boundary-free geometry. These contributions are periodic functions of the parameter $\alpha $ with the period equal to 1. So, if we present this parameter in the form (\[alf0\]) with $n_{0}$ being an integer, then the VEVs depend on $\alpha _{0}$ alone and they are odd functions of this parameter. Similar to Eq. (\[IKratio\]), the ratio of the combinations of the modified Bessel functions in Eq. (\[j0int1\]) is presented in the form$$\frac{K_{\beta _{j}}^{(+)}(x)}{I_{\beta _{j}}^{(+)}(x)}=\frac{W_{\beta
_{j},\beta _{j}+\epsilon _{j}}^{(+)}(x)+i\sqrt{1-\mu ^{2}/x^{2}}}{x[I_{\beta
_{j}}^{2}(x)+I_{\beta _{j}+\epsilon _{j}}^{2}(x)]+2\mu I_{\beta
_{j}}(x)I_{\beta _{j}+\epsilon _{j}}(x)}, \label{KIratio}$$with the notation defined by Eq. (\[Wbet\]).
When the parameter $\alpha $ is a half-integer the contributions of the modes with $j\neq -\alpha $ to the boundary-induced VEVs inside the circle are still given by expressions (\[j0int1\]) and (\[j2int1\]). It can be easily seen that the contributions of the modes with $j<-\alpha $ and $%
j>-\alpha $ cancel each other. The contribution of the mode with $j=-\alpha $ should be considered separately. In Appendix \[sec:App2\] we show that this contribution vanishes as well. Therefore, we conclude that for $\alpha $ being a half-integer, the boundary-induced part in the VEV of the fermionic current vanishes.
For a massless field the formulae of the boundary-induced parts are simplified to$$\begin{aligned}
\langle j^{0}(x)\rangle _{\text{b}} &=&\frac{eq}{2\pi ^{2}a^{2}}%
\sum_{j}\int_{0}^{\infty }dz\,\frac{I_{\beta _{j}}^{2}(zr/a)-I_{\beta
_{j}+\epsilon _{j}}^{2}(zr/a)}{I_{\beta _{j}}^{2}(z)+I_{\beta _{j}+\epsilon
_{j}}^{2}(z)}, \notag \\
\langle j^{2}(x)\rangle _{\text{b}} &=&\frac{eq}{\pi ^{2}ra^{2}}%
\sum_{j}\int_{0}^{\infty }dz\,\frac{I_{\beta _{j}}(zr/a)I_{\beta
_{j}+\epsilon _{j}}(zr/a)}{I_{\beta _{j}}^{2}(z)+I_{\beta _{j}+\epsilon
_{j}}^{2}(z)}W_{\beta _{j},\beta _{j}+\epsilon _{j}}^{(+)}(z).
\label{j02intm0}\end{aligned}$$Note that for a massless field $W_{\beta _{j},\beta _{j}+\epsilon
_{j}}^{(+)}(z)=W_{\beta _{j},\beta _{j}+\epsilon _{j}}^{(-)}(z)$. In the limit $\alpha _{0}\rightarrow \pm 1/2$, $|\alpha _{0}|<1/2$, the only nonzero contributions to Eqs. (\[j0int1\]) and (\[j2int1\]) come from the term with $j=\mp 1/2$ and, by making use of Eqs. (\[Limj2\]), for the total VEVs we find$$\begin{aligned}
\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{0}(x)\rangle &=&\mp \frac{eqm%
}{2\pi ^{2}r}K_{0}(2mr)\pm \frac{eq}{\pi ^{2}r}\int_{m}^{\infty }dz\,\frac{%
amz\cosh (2zr)-(z+m)e^{2za}}{\sqrt{z^{2}-m^{2}}(\frac{z+m}{z-m}e^{4za}+1)},
\notag \\
\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{2}(x)\rangle &=&\mp \frac{eqm%
}{2\pi ^{2}r^{2}}K_{1}(2mr)\mp \frac{eqa}{\pi ^{2}r^{2}}\int_{m}^{\infty
}dz\,\frac{z^{2}}{\sqrt{z^{2}-m^{2}}}\frac{\sinh (2zr)}{\frac{z+m}{z-m}%
e^{4za}+1}. \label{j02intLim}\end{aligned}$$For a massless field they reduce to the expressions:$$\begin{aligned}
\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{0}(x)\rangle &=&\mp \frac{eq%
}{8a\pi r}, \notag \\
\lim_{\alpha _{0}\rightarrow \pm 1/2}\langle j^{2}(x)\rangle &=&\mp \frac{eq%
}{8\pi ^{2}r^{3}}\left[ 1+\frac{\pi r/2a}{\sin (\pi r/2a)}\right] .
\label{j02intLim0}\end{aligned}$$Note that the limiting values are linear functions of the parameter $q$.
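The massless limiting values (\[j02intLim0\]) are elementary to evaluate; the sketch below (upper sign, in illustrative units $e=q=a=1$, valid for $r<a$) also checks that near the apex the bracket in $\langle j^{2}(x)\rangle $ tends to 2.

```python
import numpy as np

# Massless limiting values inside the circle, Eq. (j02intLim0), upper sign,
# in units e = q = a = 1 (illustrative normalization); valid for r < 1.
def j0_lim(r):
    return -1.0 / (8.0 * np.pi * r)

def j2_lim(r):
    x = np.pi * r / 2.0
    return -(1.0 + x / np.sin(x)) / (8.0 * np.pi**2 * r**3)

# near the apex the bracket tends to 2, so j2_lim(r) -> -1/(4 pi^2 r^3)
print(j0_lim(0.5), j2_lim(0.5))
```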
The general expressions for the VEVs are simplified in asymptotic regions of the parameters. First we consider large values of the circle radius. For the modified Bessel functions in the integrands of Eqs. (\[j0int1\]) and (\[j2int1\]), with $za$ in their arguments, we use the asymptotic expansions for large values of the argument. By taking into account that for a massive field the dominant contribution to the integrals comes from the integration range near the lower limit, to the leading order we find$$\begin{aligned}
\langle j^{0}(x)\rangle _{\text{b}} &\approx &\frac{eqm^{2}e^{-2ma}}{8\sqrt{%
\pi }(ma)^{3/2}}\sum_{j}\epsilon _{j}\left[ (\beta _{j}+\epsilon
_{j})I_{\beta _{j}}^{2}(mr)+\beta _{j}I_{\beta _{j}+\epsilon _{j}}^{2}(mr)%
\right] , \notag \\
\langle j^{2}(x)\rangle _{\text{b}} &\approx &-\frac{eqm^{2}e^{-2ma}}{8\sqrt{%
\pi }r(ma)^{3/2}}\sum_{j}\,(2\epsilon _{j}\beta _{j}+1)I_{\beta
_{j}}(mr)I_{\beta _{j}+\epsilon _{j}}(mr). \label{j02LargeRad}\end{aligned}$$In this limit, for a fixed value of the radial coordinate, the boundary-induced VEVs decay exponentially. For a massless field, assuming $%
r/a\ll 1$, we expand the modified Bessel function in the numerators of integrands in Eq. (\[j02intm0\]) in powers of $r/a$. The dominant contribution comes from the term $j=1/2$ for $\alpha _{0}<0$ and from the term $j=-1/2$ for $\alpha _{0}>0$. To the leading order we have$$\begin{aligned}
\langle j^{0}(x)\rangle _{\text{b}} &\approx &-\frac{eq}{2\pi ^{2}a^{2}}%
\frac{\text{sgn}(\alpha _{0})(r/2a)^{2q_{\alpha }-1}}{\Gamma ^{2}(q_{\alpha
}+1/2)}\int_{0}^{\infty }dz\,\frac{z^{2q_{\alpha }-1}}{I_{q_{\alpha
}+1/2}^{2}(z)+I_{q_{\alpha }-1/2}^{2}(z)}, \notag \\
\langle j^{2}(x)\rangle _{\text{b}} &\approx &-\frac{eq}{\pi ^{2}a^{3}}\frac{%
\text{sgn}(\alpha _{0})(r/2a)^{2q_{\alpha }-1}}{(2q_{\alpha }+1)\Gamma
^{2}(q_{\alpha }+1/2)}\int_{0}^{\infty }dz\,\frac{z^{2q_{\alpha
}}W_{q_{\alpha }-1/2,q_{\alpha }+1/2}^{(+)}(z)}{I_{q_{\alpha
}+1/2}^{2}(z)+I_{q_{\alpha }-1/2}^{2}(z)}, \label{j02LargeRadm0}\end{aligned}$$where $q_{\alpha }$ is defined in Eq. (\[qalfa\]). As is seen, for a massless field the VEVs decay as a power law.
For points near the apex of the cone, $r\rightarrow 0$, we have the following leading terms$$\begin{aligned}
&&\langle j^{0}(x)\rangle _{\text{b}}\approx \frac{eq}{2\pi ^{2}a^{2}}\frac{%
\text{sgn}(\alpha _{0})(r/2a)^{2q_{\alpha }-1}}{\Gamma ^{2}(q_{\alpha }+1/2)}%
\int_{\mu }^{\infty }dz\,\frac{z^{2q_{\alpha }}}{\sqrt{z^{2}-\mu ^{2}}}
\notag \\
&&\quad \times \frac{\mu W_{q_{\alpha }-1/2,q_{\alpha
}+1/2}^{(+)}(z)-(z^{2}-\mu ^{2})/z}{z[I_{q_{\alpha
}-1/2}^{2}(z)+I_{q_{\alpha }+1/2}^{2}(z)]+2\mu I_{q_{\alpha
}-1/2}(z)I_{q_{\alpha }+1/2}(z)}, \label{j0intApex} \\
&&\langle j^{2}(x)\rangle _{\text{b}}\approx -\frac{eq}{\pi ^{2}a^{3}}\frac{%
\text{sgn}(\alpha _{0})(r/2a)^{2q_{\alpha }-1}}{(2q_{\alpha }+1)\Gamma
^{2}(q_{\alpha }+1/2)}\int_{\mu }^{\infty }dz\,\frac{z^{2q_{\alpha }+2}}{%
\sqrt{z^{2}-\mu ^{2}}} \notag \\
&&\quad \times \frac{W_{q_{\alpha }-1/2,q_{\alpha }+1/2}^{(+)}(z)}{%
z[I_{q_{\alpha }-1/2}^{2}(z)+I_{q_{\alpha }+1/2}^{2}(z)]+2\mu I_{q_{\alpha
}-1/2}(z)I_{q_{\alpha }+1/2}(z)}. \label{j2intApex}\end{aligned}$$For a massless field these expressions reduce to Eq. (\[j02LargeRadm0\]). From here it follows that in the limit $r\rightarrow 0$ the boundary-induced part vanishes when $|\alpha _{0}|<1/2-1/(2q)$ and diverges for $|\alpha
_{0}|>1/2-1/(2q)$. Notice that in the former case the irregular mode is absent and the divergence in the latter case comes from the irregular mode. This divergence is integrable. In the case $|\alpha _{0}|=1/2-1/(2q)$, corresponding to $q_{\alpha }=1/2$, the boundary-induced VEV tends to a finite limiting value. In particular, for the magnetic vortex in the background Minkowski spacetime, the boundary-induced contribution diverges as $r^{-2|\alpha _{0}|}$. In Fig. \[fig5\], the VEVs of the charge density (left panel) and azimuthal current (right panel) are plotted for a massless fermionic field inside a circular boundary as functions of the magnetic flux. The graphs are plotted for $r/a=0.5$ and for several values of the opening angle for the conical space. For a massless field the boundary-free part in the VEV of the charge density vanishes and the charge density on the left plot is induced by the boundary.
Summary and conclusions {#sec:Conc}
=======================
In this paper we have investigated the VEV of the fermionic current induced by a magnetic flux string in a (2+1)-dimensional conical spacetime with a circular boundary. The case of a massive fermionic field obeying the MIT bag boundary condition on the circle is considered. In (2+1)-dimensional spacetime there are two inequivalent irreducible representations of the Dirac matrices. We have used the representation (\[DirMat\]). The corresponding results for the second representation are obtained by changing $m\rightarrow
-m$. Under this change, the boundary-free contribution to the VEV of the azimuthal current is not changed, whereas the VEV of the charge density changes the sign. For the evaluation of the expectation values we have employed the direct mode summation method. The corresponding positive and negative energy eigenspinors in the region outside the circular boundary are given by Eqs. (\[psisig+\]) and (\[psisig-\]).
The VEV of the fermionic current in the boundary-free conical space is investigated in Sect. \[sec:BoundFree\]. For this geometry, under the condition (\[condalf0\]), there are no square integrable irregular modes. In the case $|\alpha _{0}|>(1-1/q)/2$, the theory of von Neumann deficiency indices leads to a one-parameter family of allowed boundary conditions at the origin. Here we consider a special boundary condition that arises when imposing bag boundary condition at a finite radius, which is then shrunk to zero. The VEVs of the fermionic current for other boundary conditions on the cone apex are evaluated in a similar way. The contribution of the regular modes is the same for all boundary conditions and the formulae differ by the parts related to the irregular modes. For the boundary condition under consideration, the eigenspinors for the boundary-free geometry are obtained from the corresponding functions in the region outside a circular boundary with radius $a$, taking the limit $a\rightarrow 0$. They are presented by Eq. (\[psi0\]). When the magnetic flux, measured in units of the flux quantum, is a half-integer, there is a special mode corresponding to the angular momentum $j=-\alpha $, with the negative-energy eigenspinor given by Eq. (\[psibetSp\]).
In the boundary-free geometry, the regularized expressions of the VEVs are given by Eqs. (\[j00reg1\]) and (\[j02reg1\]) for the charge density and the azimuthal current, respectively, and the VEV of the radial component vanishes. These VEVs are periodic functions of the parameter $\alpha $ with the period equal to 1. So, if we present this parameter as (\[alf0\]), with $n_{0}$ being an integer, then the VEVs are functions of $\alpha
_{0}$ alone. These functions are odd with respect to the reflection $\alpha
_{0}\rightarrow -\alpha _{0}$. Both charge density and azimuthal current exhibit jumps at half-integer values of the ratio of the magnetic flux to the flux quantum. Simple expressions for the renormalized VEVs, Eqs. (\[j00Spren\]) and (\[j02Spren\]), are obtained in the special case where the parameter $q$ related to the planar angle deficit is an integer and the magnetic flux takes special values given by Eq. (\[alphaSpecial\]). In the general case of parameters $\alpha $ and $q$, we have derived two different representations for the renormalized VEVs. The first one is based on Eq. (\[seriesI3\]) and the corresponding expressions for the charge density and azimuthal current have the forms (\[j02ren2\]) and (\[j00ren2\]). The second representation is obtained by using the Abel-Plana summation formula and the corresponding expressions are given by Eqs. (\[j00renb\]), (\[j02renb\]). For a massless field the VEV of the charge density vanishes for points outside the magnetic vortex and the VEV of the azimuthal current behaves as $r^{-3}$. For a massive field, for points near the vortex the VEVs behave as $1/r$ and $1/r^{3}$ for the charge density and the azimuthal current, respectively. At distances larger than the fermion Compton wavelength, the VEVs decay exponentially with the decay rate depending on the opening angle of the cone. The total charge induced by the magnetic vortex does not depend on the angle deficit of the conical space and is given by Eq. (\[Q1\]). In the special case of a magnetic flux in Minkowski spacetime, the formulae for the VEVs of the charge density and the azimuthal current reduce to Eqs. (\[FCq1\]), or equivalently, Eqs. (\[j02renbm0\]). In Appendix \[sec:App2New\] we show the equivalence of Eqs. (\[j02renbm0\]) to the expressions for the VEVs of charge density and azimuthal current known from the literature.
The effects of a circular boundary on the VEV of the fermionic current are considered in Sect. \[sec:ExtFC\] for the exterior region. From the point of view of the physics in this region, the circular boundary can be considered as a simple model for the defect core. The mode sums of the charge density and the azimuthal current are given by Eqs. (\[j02Ext\]) and the radial component vanishes. In order to extract from the VEVs the contributions induced by the boundary, we have subtracted the boundary-free parts. Rotating the integration contours in the complex plane, we have derived rapidly convergent integral representations for the boundary-induced contributions, Eqs. (\[j002b\]) and (\[j21\]). These formulae are simplified in the case of a massless field with expressions (\[J02Extm0\]). In the exterior region, the total VEV of the fermionic current is a continuous function of the magnetic flux. Note that unlike the boundary-free part, the boundary-induced part in the VEV of the charge density for a massless field is not zero. The parts in the VEVs induced by the boundary are periodic functions of the magnetic flux with the period equal to the flux quantum. These parts vanish for the special case of the magnetic flux corresponding to $|\alpha _{0}|=1/2$. In the limit when the radius of the circle goes to zero and for $|\alpha _{0}|<1/2$, for a massive field the boundary-induced contributions in the exterior region behave as $%
a^{2q_{\alpha }}$, with $q_{\alpha }$ defined by Eq. (\[qalfa\]). For a massless field the corresponding asymptotics are given by Eqs. (\[j0bExta0m0\])-(\[j2bExtm0b\]). At large distances from the boundary and for a massive field, the contributions coming from the boundary decay exponentially \[see Eqs. (\[j02LargeDist\])\]. In the same limit and for a massless field, the boundary-induced VEV in the charge density decays as $%
(a/r)^{2q_{\alpha }}$. For the azimuthal current, the contribution induced by a circular boundary behaves as $(a/r)^{2q_{\alpha }+1}$ when $q_{\alpha
}>1/2$, and like $(a/r)^{4q_{\alpha }}$ for $q_{\alpha }<1/2$. Note that, when the circular boundary is present, the VEVs of physical quantities in the exterior region are uniquely defined by the boundary conditions and by the bulk geometry. This means that if we consider a non-trivial core model for both conical space and magnetic flux with finite thickness $b<a$ and with the line element (\[ds21\]) in the region $r>b$, the results in the region outside the circular boundary will not be changed.
The boundary-induced VEVs in the region inside a circular boundary are studied in Sect. \[sec:Int\]. The corresponding mode sums for the charge density and the azimuthal current are given by Eq. (\[j2Int\]). They contain series over the zeros of the function given by Eq. (\[gamVal\]). For the summation of these series we have employed a variant of the generalized Abel-Plana formula. The latter allowed us to extract explicitly from the VEVs the parts corresponding to the conical space without boundaries and to present the contributions induced by the circle in terms of exponentially convergent integrals. In the interior region the boundary-induced parts in the renormalized VEVs of the charge density and the azimuthal current are given by Eqs. (\[j0int1\]) and (\[j2int1\]). For a massless fermionic field these formulae reduce to Eqs. (\[j02intm0\]). For large values of the circle radius and for a massive field the boundary-induced contributions decay exponentially \[Eqs. (\[j02LargeRad\])\]. In the case of a massless field, the VEVs decay as $%
(r/a)^{2q_{\alpha }-1}$, for both charge density and azimuthal current. For points near the apex of the cone, the leading terms in the corresponding asymptotic expansions are given by Eqs. (\[j0intApex\]) and (\[j2intApex\]). In particular, the boundary-induced parts vanish at the apex when $|\alpha _{0}|<1/2-1/(2q)$ and diverge for $|\alpha _{0}|>1/2-1/(2q)$.
The formulas for the VEV of the fermionic current are easily generalized to the case of a spinor field with the quasiperiodic boundary condition (\[PerBC\]) along the azimuthal direction. This problem is reduced to the one we have considered by a gauge transformation. The corresponding expressions for the VEVs are obtained from those given above by changing the definition of the parameter $\alpha $ according to Eq. (\[Replace\]).
The results obtained in the present paper can be applied to the evaluation of the VEV of the fermionic current in graphitic cones. Graphitic cones are obtained from a graphene sheet if one or more sectors are excised. The opening angle of the cone is related to the number of sectors removed, $N_{c}$, by the formula $\phi _{0}=2\pi (1-N_{c}/6)$, with $N_{c}=1,2,\ldots ,5$ (for the electronic properties of graphitic cones see, e.g., [@Lamm00] and references therein). All these angles have been observed in experiments [@Kris97]. The electronic band structure of graphene close to the Dirac points shows a conical dispersion $E(\mathbf{k})=v_{F}|\mathbf{k}|$, where $%
\mathbf{k}$ is the momentum measured relative to the Dirac points and $%
v_{F}\approx 10^{8}$ cm/s represents the Fermi velocity which plays the role of a speed of light. Consequently, the long-wavelength description of the electronic states in graphene can be formulated in terms of the Dirac-like theory in (2+1)-dimensional spacetime. The corresponding excitations are described by a pair of two-component spinors, corresponding to the two different triangular sublattices of the honeycomb lattice of graphene (see, for instance, [@Cast09]). In both cases of finite and truncated graphitic cones the corresponding 2-dimensional surface has a circular boundary. As the Dirac field lives on the cone surface, it is natural to impose bag boundary condition (\[BCMIT\]) on the bounding circle which ensures the zero fermion flux through the edge of the cone. A more detailed investigation of the fermionic current in graphitic cones, based on the results of the present paper, will be presented elsewhere.
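For reference, the allowed graphitic-cone geometries correspond to a small set of conicity parameters $q=2\pi /\phi _{0}=6/(6-N_{c})$; a trivial enumeration:

```python
import math

# Conicity parameter q = 2*pi/phi_0 = 6/(6 - N_c) for graphitic cones
# obtained by excising N_c sectors from a graphene sheet (see text).
for Nc in range(1, 6):
    phi0 = 2.0 * math.pi * (1.0 - Nc / 6.0)
    print(Nc, 2.0 * math.pi / phi0)
```

This yields $q=1.2,\,1.5,\,2,\,3,\,6$ for $N_{c}=1,\ldots ,5$, the values used in the plots above.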
Acknowledgments {#acknowledgments .unnumbered}
===============
E.R.B.M. and V.B.B. thank Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and FAPES-ES/CNPq (PRONEX) for partial financial support. A.A.S. was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and by the Armenian Ministry of Education and Science Grant No. 119.
Integral representations {#sec:IntRep}
========================
In this section we derive two integral representations for the function $%
\mathcal{I}(q,\alpha _{0},z)$ defined by Eq. (\[seriesI0\]). In the first approach, we use the integral representation for the modified Bessel function $I_{\beta _{j}}(z)$ (see formula 9.6.20 in Ref. [@hand]). To be able to interchange the order of the integration and the summation over $j$, we integrate by parts the first term in this representation:$$I_{\beta _{j}}(z)=\frac{\sin (\pi \beta _{j})}{\pi \beta _{j}}e^{-z}+\frac{z%
}{\pi }\int_{0}^{\pi }dy\sin y\frac{\sin (\beta _{j}y)}{\beta _{j}}e^{z\cos
y}-\frac{\sin (\pi \beta _{j})}{\pi }\int_{0}^{\infty }dye^{-z\cosh y-\beta
_{j}y}. \label{intRepI}$$Substituting (\[intRepI\]) into Eq. (\[seriesI0\]) and interchanging the order of the summation and integration, we apply the formula [@Prud86]$$\sum_{j}\frac{\sin (\beta _{j}y)}{\beta _{j}}=(-1)^{l}\frac{\pi \cos \left[
(2l+1)\pi (\alpha _{0}-1/2q)\right] }{q\cos [\pi (\alpha _{0}-1/2q)]},
\label{sinnu}$$for $2l\pi /q<y<(2l+2)\pi /q$. For the first term in the right-hand side of (\[intRepI\]) one has $y=\pi $. For this term, when $q=2l$ with $%
l=1,2,\ldots $, the corresponding summation formula has the form$$\sum_{j}\frac{\sin (\pi \beta _{j})}{\beta _{j}}=(-1)^{q/2}\frac{\pi }{q}%
\cos (q\pi \alpha _{0})\tan [\pi (\alpha _{0}-1/2q)]. \label{sinnu2}$$Finally, for the series corresponding to the last term in Eq. (\[intRepI\]) we have$$\sum_{j}\sin (\pi \beta _{j})e^{-\beta _{j}y}=\frac{f(q,\alpha _{0},y)}{%
\cosh (qy)-\cos (q\pi )}, \label{sinnu3}$$with the notation$$\begin{aligned}
f(q,\alpha _{0},y) &=&\cos \left[ q\pi \left( 1/2-\alpha _{0}\right) \right]
\cosh \left[ \left( q\alpha _{0}+q/2-1/2\right) y\right] \notag \\
&&-\cos \left[ q\pi \left( 1/2+\alpha _{0}\right) \right] \cosh \left[
\left( q\alpha _{0}-q/2-1/2\right) y\right] . \label{fqualf}\end{aligned}$$
Combining the formulae given above, we find the following integral representation for the series (\[seriesI0\]):$$\begin{aligned}
&& \mathcal{I}(q,\alpha _{0},z) =\frac{e^{z}}{q}-\frac{1}{\pi }%
\int_{0}^{\infty }dy\frac{e^{-z\cosh y}f(q,\alpha _{0},y)}{\cosh (qy)-\cos
(q\pi )} \notag \\
&& \qquad +\frac{2}{q}\sum_{l=1}^{p}(-1)^{l}\cos [2\pi l(\alpha
_{0}-1/2q)]e^{z\cos (2\pi l/q)}, \label{seriesI3}\end{aligned}$$with $2p<q<2p+2$. In the case $q=2p$, the term $$-(-1)^{q/2}\frac{e^{-z}}{q}\sin (q\pi \alpha _{0}), \label{replaced}$$should be added to the right-hand side of Eq. (\[seriesI3\]). Note that for $1\leqslant q<2$, the last term on the right-hand side of Eq. ([seriesI3]{}) is absent. Formula (\[seriesI3\]) is simplified in the case $%
q=1$:$$\mathcal{I}(1,\alpha _{0},z)=e^{z}-\frac{\sin (\pi \alpha _{0})}{\pi }%
\int_{0}^{\infty }dye^{-z\cosh y}\frac{\cosh [\left( 1/2-\alpha _{0}\right)
y]}{\cosh (y/2)}. \label{seriesIq1}$$
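For $q=1$ and $0\leqslant \alpha _{0}\leqslant 1/2$, the mapping (\[Ser1\]) reduces the defining series to $\sum_{n=0}^{\infty }\left[ I_{n+\alpha _{0}}(z)+I_{n+1-\alpha _{0}}(z)\right] $, so Eq. (\[seriesIq1\]) can be checked against a direct truncated sum with standard library routines. The sketch below is our illustration, not part of the original derivation; the truncation order `nmax` is an ad hoc choice that is ample for moderate $z$, since $I_{\nu }(z)$ decays factorially in $\nu $.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv  # modified Bessel function of the first kind

def series_lhs(alpha0, z, nmax=40):
    # direct sum over half-integer j, valid for q = 1 and 0 <= alpha0 <= 1/2
    n = np.arange(nmax)
    return np.sum(iv(n + alpha0, z) + iv(n + 1.0 - alpha0, z))

def series_rhs(alpha0, z):
    # integral representation (seriesIq1)
    integrand = lambda y: np.exp(-z*np.cosh(y))*np.cosh((0.5 - alpha0)*y)/np.cosh(y/2)
    val, _ = quad(integrand, 0.0, np.inf)
    return np.exp(z) - np.sin(np.pi*alpha0)/np.pi*val

# both sides agree to integration accuracy
assert abs(series_lhs(0.3, 1.5) - series_rhs(0.3, 1.5)) < 1e-6
```

Note that for $\alpha _{0}=0$ both sides reduce to $e^{z}$, which provides an additional consistency check.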
In the special case (\[alphaSpecial\]) with an integer $q$, the integral term in Eq. (\[seriesI3\]) vanishes. For $q=2p+1$ one finds $\mathcal{I}%
(q,\alpha ,z)=(2/q)\sum_{l=0}^{(q-1)/2\prime }e^{z\cos (2\pi l/q)}$, where, as before, the prime on the summation sign means that the term with $l=0$ should be halved. For even values of $q$, by taking into account the additional term (\[replaced\]), we find $\mathcal{I}(q,\alpha
,z)=(2/q)\sum_{l=0}^{q/2\prime }e^{z\cos (2\pi l/q)}-e^{-z}/q$. Note that for an integer $q$, from definition (\[seriesI0\]) one has $\mathcal{I}%
(q,\alpha ,z)=2\sum_{n=0}^{\infty \prime }I_{qn}(z)$. Now, we can see that in the special case under consideration, formula (\[seriesI3\]) coincides with Eq. (\[SerSp\]).
We can give an alternative integral representation of the series (\[seriesI0\]) by using the Abel-Plana summation formula in the form (see [@Most97; @Saha08Book])$$\sum_{n=0}^{\infty }f(n+1/2)=\int_{0}^{\infty }dz\,f(z)-i\int_{0}^{\infty
}dz\,\frac{f(iz)-f(-iz)}{e^{2\pi z}+1}. \label{APF}$$First, we rewrite this formula in a form more appropriate for the application to Eq. (\[seriesI0\]). Let us consider the series$$\sum_{j}f(|j+u|+v\epsilon _{j})=\sum_{\delta =\pm 1}\sum_{n=0}^{\infty
}f(n+1/2+\delta |u+v|), \label{Ser1}$$where $|u|\leqslant 1/2$, $|v|\leqslant 1/2$. Applying formula (\[APF\]), we get$$\sum_{j}f(|j+u|+v\epsilon _{j})=\sum_{\delta =\pm 1}\int_{0}^{\infty
}dz\,f(z+\delta w)-i\sum_{\delta =\pm 1}\int_{0}^{\infty }dz\,\frac{\delta
f(i\delta (z+iw))+\delta f(i\delta (z-iw))}{e^{2\pi z}+1}, \label{Ser2}$$with the notation $w=|u+v|$. Introducing new integration variables, we present this formula in the form $$\begin{aligned}
\sum_{j}f(|j+u|+v\epsilon _{j}) &=&2\int_{0}^{\infty }dzf(z)-\int_{0}^{w}dz
\left[ f(z)-f(-z)\right] \notag \\
&&-i\sum_{\delta =\pm 1}\int_{i\delta w}^{\infty +i\delta w}dz\,\frac{%
f(iz)-f(-iz)}{e^{2\pi (z-i\delta w)}+1}. \label{SumForm1}\end{aligned}$$Now, deforming the integration contour, we write the integral along the half-line $(i\delta w,\infty +i\delta w)$ in the complex plane $z$ as the sum of the integrals along the segment $(i\delta w,0)$ and along $(0,\infty
) $. At this step we note that in the case $1/2<w<1$ the integrand has a pole at $z=i\delta (w-1/2)$. We exclude the poles by small semicircles in the right-half plane with radius tending to zero (see Fig. \[fig6\]). The sum of the integrals along the segments $(iw,0)$ and $(-iw,0)$ cancels the second integral in the right-hand side of Eq. (\[SumForm1\]) and we get the following result$$\sum_{j}f(|j+u|+v\epsilon _{j})=A+2\int_{0}^{\infty
}dxf(x)-i\int_{0}^{\infty }dy\,\sum_{\delta =\pm 1}\frac{f(iy)-f(-iy)}{%
e^{2\pi (y+i\delta |u+v|)}+1}, \label{SumForm2}$$where $A=0$ for $0\leqslant |u+v|\leqslant 1/2$, and$$A=f(1/2-|u+v|)-f(|u+v|-1/2), \label{A}$$for $1/2<|u+v|<1$. The term $A$ comes from the contributions of the above mentioned poles to the integrals.
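Formula (\[APF\]) is straightforward to verify numerically for a rapidly decreasing test function. For $f(x)=e^{-x}$ the left-hand side is $\sum_{n}e^{-(n+1/2)}=1/[2\sinh (1/2)]$, while $f(iz)-f(-iz)=-2i\sin z$, so the right-hand side reduces to $1-2\int_{0}^{\infty }dz\,\sin z/(e^{2\pi z}+1)$. A quick SciPy check (our illustration; the upper limit $20$ replaces $\infty $, with a tail below $e^{-125}$):

```python
import numpy as np
from scipy.integrate import quad

def apf_lhs(f, nmax=200):
    # left-hand side of (APF), truncated
    return sum(f(n + 0.5) for n in range(nmax))

def apf_rhs():
    # for f(x) = exp(-x): f(iz) - f(-iz) = -2i sin z, so (APF) gives
    # int_0^inf e^{-z} dz - 2 int_0^inf sin z / (e^{2 pi z} + 1) dz
    tail, _ = quad(lambda z: np.sin(z)/(np.exp(2*np.pi*z) + 1.0), 0.0, 20.0)
    return 1.0 - 2.0*tail

assert abs(apf_lhs(lambda x: np.exp(-x)) - apf_rhs()) < 1e-10
```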
We apply to the series in Eq. (\[seriesI0\]) formula (\[SumForm2\]) with the function $f(z)=I_{qz}(x)$ and with the parameters $u=\alpha _{0}$, $%
v=-1/(2q)$. This leads to the following result$$\begin{aligned}
&&\mathcal{I}(q,\alpha _{0},x)=A(q,\alpha _{0},x)+\frac{2}{q}%
\int_{0}^{\infty }dz\,I_{z}(x)\, \notag \\
&&\qquad -\frac{4}{\pi q}\int_{0}^{\infty }dz\,{\mathrm{Re}}\left[ \frac{%
\sinh (z\pi )K_{iz}(x)}{e^{2\pi (z+i|q\alpha _{0}-1/2|)/q}+1}\right] ,
\label{Rep2}\end{aligned}$$with $A(q,\alpha _{0},x)=0$ for $|\alpha _{0}-1/2q|\leqslant 1/2$, and $$A(q,\alpha _{0},x)=\frac{2}{\pi }\sin [\pi (|q\alpha
_{0}-1/2|-q/2)]K_{|q\alpha _{0}-1/2|-q/2}(x), \label{Aq}$$for $1/2<|\alpha _{0}-1/2q|<1$. Note that in Eq. (\[Rep2\]) the function $%
K_{iz}(x)$ is real and the real part in the integrand can be written explicitly by observing that$${\mathrm{Re\,}}(e^{u+iv}+1)^{-1}=\frac{1}{2}\frac{e^{-u}+\cos v}{\cosh
u+\cos v}. \label{Repart}$$
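Identity (\[Repart\]) follows by multiplying numerator and denominator by the complex conjugate, since $|e^{u+iv}+1|^{2}=2e^{u}(\cosh u+\cos v)$; a one-line numerical spot check (illustrative only):

```python
import numpy as np

def re_part(u, v):
    # right-hand side of (Repart)
    return 0.5*(np.exp(-u) + np.cos(v))/(np.cosh(u) + np.cos(v))

for u, v in [(0.7, 2.1), (-1.3, 0.4), (2.0, 3.0)]:
    direct = (1.0/(np.exp(u + 1j*v) + 1.0)).real
    assert abs(direct - re_part(u, v)) < 1e-12
```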
In the special case $q=1$, from (\[Rep2\]) we have$$\begin{aligned}
&&\mathcal{I}(1,\alpha _{0},x)=A(1,\alpha _{0},x)+2\int_{0}^{\infty
}dz\,I_{z}(x)\, \notag \\
&&\qquad +\frac{4}{\pi }\int_{0}^{\infty }dz\,{\mathrm{Re}}\left[ \frac{%
\sinh (z\pi )K_{iz}(x)}{e^{2\pi (z-i\alpha _{0})}-1}\right] , \label{Repq2}\end{aligned}$$with $A(1,\alpha _{0},x)=-(2/\pi )\sin (\pi \alpha _{0})K_{\alpha _{0}}(x)$ in the case $-1/2<\alpha _{0}<0$ and $A(1,\alpha _{0},x)=0$ for $0\leqslant
\alpha _{0}\leqslant 1/2$. Note that the last term in the right-hand side is an even function of $\alpha _{0}$.
Comparison with the previous results in the Minkowski bulk {#sec:App2New}
==========================================================
The VEVs of the fermionic current induced by a magnetic flux in (2+1)-dimensional Minkowski spacetime have been previously investigated in Ref. [@Flek91]. The expressions for the VEVs of the charge density and the azimuthal current derived in that work have the form (in our notation)$$\begin{aligned}
\langle j^{0}(x)\rangle _{0,\text{ren}} &=&-em\frac{\sin \left( \pi \alpha
_{0}\right) }{\pi ^{3}}\int_{m}^{\infty }dk\frac{kK_{\alpha _{0}}^{2}(kr)}{%
\sqrt{k^{2}-m^{2}}}, \notag \\
\langle j^{2}(x)\rangle _{0,\text{ren}} &=&e\frac{\sin (\pi \alpha _{0})}{%
\pi ^{3}}\int_{m}^{\infty }dk\,k^{3}\frac{K_{\alpha _{0}}^{2}(kr)-K_{\alpha
_{0}-1}(kr)K_{\alpha _{0}+1}(kr)}{\sqrt{k^{2}-m^{2}}}. \label{j02Prev}\end{aligned}$$In this section, we show that these representations are equivalent to Eqs. (\[j02renbm0\]). First we consider the charge density. As the first step, we use the integral representation for the square of the Macdonald function:$$K_{\alpha _{0}}^{2}(x)=\frac{1}{2}\int_{0}^{\infty }\frac{dz}{z}%
e^{-x^{2}/2z-z}K_{\alpha _{0}}(z). \label{IntRepK2}$$This formula is easily obtained from the integral representation of the product of two Macdonald functions given in Ref. [@Wats44] (page 439). Inserting Eq. (\[IntRepK2\]) into (\[j02Prev\]) and changing the order of the integrations, the integral over $k$ is evaluated in closed form, and for the charge density we obtain the result given in Eq. (\[j02renbm0\]).
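The representation (\[IntRepK2\]) is easy to check numerically; the sketch below (our illustration, not part of the paper) compares both sides using SciPy's Macdonald function `kv` and adaptive quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # Macdonald (modified Bessel) function K_nu

def k2_direct(alpha0, x):
    return kv(alpha0, x)**2

def k2_integral(alpha0, x):
    # right-hand side of (IntRepK2)
    integrand = lambda z: np.exp(-x*x/(2.0*z) - z)*kv(alpha0, z)/z
    val, _ = quad(integrand, 0.0, np.inf)
    return 0.5*val

for alpha0 in (0.2, 0.5):
    assert abs(k2_direct(alpha0, 1.2) - k2_integral(alpha0, 1.2)) < 1e-7
```

For $\alpha _{0}=1/2$, where $K_{1/2}(x)=\sqrt{\pi /2x}\,e^{-x}$, both sides equal $(\pi /2x)e^{-2x}$ exactly, which makes that case a convenient analytic cross-check.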
In order to see the equivalence of the representations for the azimuthal current, we present the expression with the Macdonald functions in the form$$K_{\alpha _{0}}^{2}(kr)-K_{\alpha _{0}-1}(kr)K_{\alpha _{0}+1}(kr)=-\frac{2}{%
r^{2}}\int_{r}^{\infty }dx\,xK_{\alpha _{0}}^{2}(kx). \label{IntRepK3}$$After substituting this into Eq. (\[j02Prev\]), we use the integral representation (\[IntRepK2\]). Then, we first take the integral over $k$ and after that the integral over $x$. As a result, the integral representation (\[j02renbm0\]) for the azimuthal current is obtained. Hence, we have shown that, in the special case of Minkowski bulk, our results for the VEVs of the charge density and azimuthal current, given by Eq. (\[j02renbm0\]), agree with those from the literature. Note that we have also derived the alternative representations (\[FCq1\]), which are more convenient for numerical evaluation.
Contribution of the mode with $j=-\protect\alpha $ {#sec:App2}
==================================================
When the parameter $\alpha $ is equal to a half-integer, the contribution of the mode with $j=-\alpha $ to the VEV of the fermionic current should be evaluated separately. Here we consider the region inside a circle with radius $a$. Similar to Eq. (\[psibetSp\]), the negative-energy eigenspinor for this mode has the form$$\psi _{\gamma ,-\alpha }^{(-)}(x)=\frac{b_{0}}{\sqrt{r}}e^{iq\alpha \phi
+iEt}\left(
\begin{array}{c}
\frac{\gamma e^{-iq\phi /2}}{E+m}\sin (\gamma r-\gamma _{0}) \\
e^{iq\phi /2}\cos (\gamma r-\gamma _{0})%
\end{array}%
right) , \label{psigamSp}$$where $\gamma _{0}$ is defined after Eq. (\[psibetSp\]). From the boundary condition (\[BCMIT\]) it follows that the eigenvalues of $\gamma $ are solutions of the equation$$m\sin (\gamma a)+\gamma \cos (\gamma a)=0. \label{modeqSp}$$We denote the positive roots of this equation by $\gamma _{l}=\gamma a$, $%
l=1,2,\ldots $. From the normalization condition, for the coefficient in (\[psigamSp\]) one has$$b_{0}^{2}=\frac{E+m}{aE\phi _{0}}\left[ 1-\sin (2\gamma a)/(2\gamma a)\right]
^{-1}. \label{b02}$$
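The eigenvalue equation (\[modeqSp\]) is transcendental, but its roots are well separated: in terms of $\gamma _{l}=\gamma a$ and $\mu =ma$, the $l$-th positive root of $\mu \sin \gamma +\gamma \cos \gamma =0$ is bracketed by $((l-1/2)\pi ,l\pi )$, since the function equals $(-1)^{l-1}\mu $ at the left endpoint and $(-1)^{l}l\pi $ at the right one. A minimal root-finding sketch (our illustration, not code from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def gamma_roots(mu, nroots):
    """Positive roots gamma_l of mu*sin(g) + g*cos(g) = 0, with mu = m*a > 0.
    The l-th root is bracketed by ((l - 1/2)*pi, l*pi), where the function
    changes sign, so bisection-type root finding is guaranteed to work."""
    f = lambda g: mu*np.sin(g) + g*np.cos(g)
    return np.array([brentq(f, (l - 0.5)*np.pi, l*np.pi)
                     for l in range(1, nroots + 1)])

roots = gamma_roots(1.0, 4)
# residuals of the mode equation vanish at the computed roots
assert np.all(np.abs(np.sin(roots) + roots*np.cos(roots)) < 1e-9)
```

For $\mu =1$ the equation reduces to $\tan \gamma =-\gamma $, whose first root $\gamma _{1}\approx 2.03$ lies in $(\pi /2,\pi )$ as expected.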
Using Eq. (\[psigamSp\]), for the contributions of the mode under consideration to the VEVs of the fermionic current we find:$$\begin{aligned}
\langle j^{0}(x)\rangle _{j=-\alpha } &=&\frac{e}{ar\phi _{0}}%
\sum_{l=1}^{\infty }\frac{1+\mu \left[ \gamma _{l}\sin (2\gamma _{l}r/a)-\mu
\cos (2\gamma _{l}r/a)\right] /(aE)^{2}}{1-\sin (2\gamma _{l})/(2\gamma _{l})%
}, \notag \\
\langle j^{2}(x)\rangle _{j=-\alpha } &=&-\frac{e}{ar^{2}\phi _{0}}%
\sum_{l=1}^{\infty }\frac{\gamma _{l}}{(aE)^{2}}\frac{\mu \sin (2\gamma
_{l}r/a)+\gamma _{l}\cos (2\gamma _{l}r/a)}{1-\sin (2\gamma _{l})/(2\gamma
_{l})}, \label{j02SpAp}\end{aligned}$$where $(aE)^{2}=\gamma _{l}^{2}+\mu ^{2}$ and $\mu =ma$. The part corresponding to the radial component vanishes. We assume the presence of a cutoff function. For the summation of the series in Eqs. (\[j02SpAp\]), we use the Abel-Plana-type formula$$\sum_{l=1}^{\infty }\frac{\pi f(\gamma _{l})}{1-\sin (2\gamma _{l})/(2\gamma
_{l})}=-\frac{\pi f(0)/2}{1/\mu +1}+\int_{0}^{\infty
}dz\,f(z)-i\int_{0}^{\infty }dz\frac{f(iz)-f(-iz)}{\frac{z+\mu }{z-\mu }%
e^{2z}+1}. \label{SumFormAp}$$Eq. (\[SumFormAp\]) is obtained from a more general summation formula given in [@Rome02] (see also [@Saha08Book]) by taking $b_{1}=0$ and $%
b_{2}=-1/\mu $. For the functions $f(z)$ corresponding to Eq. (\[j02SpAp\]) one has $f(0)=0$. The contribution of the last term in Eq. (\[SumFormAp\]) to $\langle j^{\nu }(x)\rangle _{j=-\alpha }$ is finite in the limit when the cutoff is removed. Noting that for both series in Eq. (\[j02SpAp\]) the function $f(z)$ is an even function, we conclude that the last term in (\[SumFormAp\]) contributes to neither the charge density nor the azimuthal current. For the remaining parts coming from the second term in the right-hand side of Eq. (\[SumFormAp\]) one has$$\begin{aligned}
\langle j^{0}(x)\rangle _{j=-\alpha } &=&\frac{e}{\pi r\phi _{0}}%
\int_{0}^{\infty }d\gamma \left[ 1+h(\gamma )\right] , \notag \\
\langle j^{2}(x)\rangle _{j=-\alpha } &=&-\frac{e}{\pi r^{2}\phi _{0}}%
\int_{0}^{\infty }d\gamma \left[ \cos (2\gamma r)+h(\gamma )\right] ,
\label{j02Ap2}\end{aligned}$$with $h(\gamma )=mE^{-2}\left[ \gamma \sin (2\gamma r)-m\cos (2\gamma r)%
\right] $ and $E^{2}=\gamma ^{2}+m^{2}$. These parts do not depend on the circle radius. Now, we can see that the integral with the function $h(\gamma
)$ is zero. The remaining term in the charge density is removed by the renormalization, and the remaining integral in the expression for the azimuthal current vanishes for $r>0$. Hence, we conclude that the mode with $j=-\alpha $ does not contribute to the renormalized VEV of the fermionic current.
[99]{} A. Vilenkin and E.P.S. Shellard, *Cosmic Strings and Other Topological Defects* (Cambridge University Press, Cambridge, England, 1994).
S. Sarangi and S.H.H. Tye, Phys. Lett. B **536**, 185 (2002); E.J. Copeland, R.C. Myers, and J. Polchinski, JHEP **06**, 013 (2004); G. Dvali and A. Vilenkin, JCAP **0403**, 010 (2004).
T.M. Helliwell and D.A. Konkowski, Phys. Rev. D **34**, 1918 (1986).
W.A. Hiscock, Phys. Lett. B **188**, 317 (1987).
B. Linet, Phys. Rev. D **35**, 536 (1987).
V.P. Frolov and E.M. Serebriany, Phys. Rev. D **35**, 3779 (1987).
J.S. Dowker, Phys. Rev. D **36**, 3095 (1987); J.S. Dowker, Phys. Rev. D **36**, 3742 (1987).
P.C.W. Davies and V. Sahni, Class. Quantum Grav. **5**, 1 (1988).
A.G. Smith, in *The Formation and Evolution of Cosmic Strings*, Proceedings of the Cambridge Workshop, Cambridge, England, 1989, edited by G.W. Gibbons, S.W. Hawking, and T. Vachaspati (Cambridge University Press, Cambridge, England, 1990).
G.E.A. Matsas, Phys. Rev. D **41**, 3846 (1990).
B. Allen and A.C. Ottewill, Phys. Rev. D **42**, 2669 (1990); B. Allen, J.G. McLaughlin, and A.C. Ottewill, Phys. Rev. D **45**, 4486 (1992); B. Allen, B.S. Kay, and A.C. Ottewill, Phys. Rev. D **53**, 6829 (1996).
T. Souradeep and V. Sahni, Phys. Rev. D **46**, 1616 (1992).
K. Shiraishi and S. Hirenzaki, Class. Quantum Grav. **9**, 2277 (1992).
V.B. Bezerra and E.R. Bezerra de Mello, Class. Quantum Grav. **11**, 457 (1994); E.R. Bezerra de Mello, Class. Quantum Grav. **11**, 1415 (1994).
G. Cognola, K. Kirsten, and L. Vanzo, Phys. Rev. D **49**, 1029 (1994).
M.E.X. Guimarães and B. Linet, Commun. Math. Phys. **165**, 297 (1994); B. Linet, J. Math. Phys. **36**, 3694 (1995).
E.S. Moreira Jnr, Nucl. Phys. B **451**, 365 (1995).
M. Bordag, K. Kirsten, and S. Dowker, Commun. Math. Phys. **182**, 371 (1996).
D. Iellici, Class. Quantum Grav. **14**, 3287 (1997).
N.R. Khusnutdinov and M. Bordag, Phys. Rev. D **59**, 064017 (1999).
J. Spinelly and E.R. Bezerra de Mello, Class. Quantum Grav. **20**, 873 (2003); J. Spinelly and E.R. Bezerra de Mello, J. High Energy Phys. **09**, 005 (2008).
V.B. Bezerra and N.R. Khusnutdinov, Class. Quantum Grav. **23**, 3449 (2006).
Yu.A. Sitenko and N.D. Vlasii, Class. Quantum Grav. **26**, 195009 (2009).
E.R. Bezerra de Mello, Class. Quantum Grav. **27**, 095017 (2010).
E.R. Bezerra de Mello and A.A. Saharian, Phys. Lett. B **642**, 129 (2006); E.R. Bezerra de Mello and A.A. Saharian, Phys. Rev. D **78**, 045021 (2008).
E.R. Bezerra de Mello and A.A. Saharian, J. High Energy Phys. **04**, 046 (2009); E.R. Bezerra de Mello and A.A. Saharian, J. High Energy Phys., in press, arXiv:1006.0224.
V.M. Mostepanenko, N.N. Trunov, *The Casimir Effect and Its Applications* (Clarendon, Oxford, 1997); K.A. Milton, *The Casimir Effect: Physical Manifestation of Zero-Point Energy* (World Scientific, Singapore, 2002); V.A. Parsegian, *Van der Waals Forces* (Cambridge University Press, Cambridge, 2005); M. Bordag, G.L. Klimchitskaya, U. Mohideen, and V.M. Mostepanenko, *Advances in the Casimir Effect* (Oxford University Press, Oxford, 2009).
G.L. Klimchitskaya, U. Mohideen, and V.M. Mostepanenko, Rev. Mod. Phys. **81**, 1827 (2009).
I. Brevik and T. Toverud, Class. Quantum Grav. **12**, 1229 (1995).
E.R. Bezerra de Mello, V.B. Bezerra, A.A. Saharian, and A.S. Tarloyan, Phys. Rev. D **74**, 025017 (2006).
E.R. Bezerra de Mello, V.B. Bezerra, and A.A. Saharian, Phys. Lett. B **645**, 245 (2007).
E.R. Bezerra de Mello, V.B. Bezerra, A.A. Saharian, and A.S. Tarloyan, Phys. Rev. D **78**, 105007 (2008).
S. Deser, R. Jackiw and S. Templeton, Ann. Phys. **140**, 372 (1982); A.J. Niemi and G.W. Semenoff, Phys. Rev. Lett. **51**, 2077 (1983); R. Jackiw, Phys. Rev. D **29**, 2375 (1984); A.N. Redlich, Phys. Rev. D **29**, 2366 (1984); M.B. Paranjape, Phys. Rev. Lett. **55**, 2390 (1985); D. Boyanovsky and R. Blankenbecler, Phys. Rev. D **31**, 3234 (1985); R. Blankenbecler and D. Boyanovsky, Phys. Rev. D **34**, 612 (1986).
T. Jaroszewicz, Phys. Rev. D **34**, 3128 (1986).
E.G. Flekkøy and J.M. Leinaas, Int. J. Mod. Phys. A **6**, 5327 (1991).
H. Li, D.A. Coker, and A.S. Goldhaber, Phys. Rev. D **47**, 694 (1993).
V.P. Gusynin, V.A. Miransky and L.A. Shovkovy, Phys. Rev. D **52**, 4718 (1995); R.R. Parwani, Phys. Lett. B **358**, 101 (1995).
Yu.A. Sitenko, Phys. At. Nucl. **60**, 2102 (1997); Yu.A. Sitenko, Phys. Rev. D **60**, 125017 (1999).
G.V. Dunne, *Topological Aspects of Low Dimensional Systems* (Springer, Berlin, 1999).
A.H. Castro Neto, F. Guinea, N.M.R. Peres, K.S. Novoselov, and A.K. Geim, Rev. Mod. Phys. **81**, 109 (2009).
S. Bellucci and A.A. Saharian, Phys. Rev. D **79**, 085019 (2009); S. Bellucci and A.A. Saharian, Phys. Rev. D **80**, 105003 (2009); S. Bellucci, A.A. Saharian, and V.M. Bardeghyan, arXiv:1002.1391.
Yu.A. Sitenko and N.D. Vlasii, Low Temp. Phys. **34**, 826 (2008).
S. Leseduarte and A. Romeo, Commun. Math. Phys. **193**, 317 (1998).
C.G. Beneventano, M. De Francia, K. Kirsten, and E.M. Santangelo, Phys. Rev. D **61**, 085019 (2000); M. De Francia and K. Kirsten, Phys. Rev. D **64**, 065021 (2001).
P. de Sousa Gerbert and R. Jackiw, Commun. Math. Phys. **124**, 229 (1989); P. de Sousa Gerbert, Phys. Rev. D **40**, 1346 (1989); Yu.A. Sitenko, Ann. Phys. **282**, 167 (2000).
A.P. Prudnikov, Yu.A. Brychkov, and O.I. Marichev, *Integrals and Series* (Gordon and Breach, New York, 1986), Vol. 2.
M.F. Lin and D.S. Chuu, Phys. Rev. B **57**, 6731 (1998); S. Latil, S. Roche, and A. Rubio, Phys. Rev. B **67**, 165420 (2003).
A.A. Saharian and E.R. Bezerra de Mello, J. Phys. A: Math. Gen. **37**, 3543 (2004).
A.A. Saharian, *The Generalized Abel-Plana Formula with Applications to Bessel Functions and Casimir Effect* (Yerevan State University Publishing House, Yerevan, 2008); Preprint ICTP/2007/082; arXiv:0708.1187.
P.E. Lammert and V.H. Crespi, Phys. Rev. Lett. **85**, 5190 (2000); A. Cortijo and M.A.H. Vozmediano, Nucl. Phys. B **763**, 293 (2007); Yu.A. Sitenko and N.D. Vlasii, Nucl. Phys. B **787**, 241 (2007); C. Furtado, F. Moraes, and A.M.M. Carvalho, Phys. Lett. A **372**, 5368 (2008); A. Jorio, G. Dresselhaus and M.S. Dresselhaus, *Carbon Nanotubes: Advanced Topics in the Synthesis, Structure, Properties and Applications* (Springer, Berlin, 2008).
A. Krishnan et al., Nature **388**, 451 (1997); S.N. Naess, A. Elgsaeter, G. Helgesen, and K.D. Knudsen, Sci. Technol. Adv. Mater. **10**, 065002 (2009).
*Handbook of Mathematical Functions*, edited by M. Abramowitz and I.A. Stegun (Dover, New York, 1972).
G.N. Watson, *A Treatise on the Theory of Bessel Functions* (Cambridge University Press, Cambridge, 1944).
A. Romeo and A.A. Saharian, J. Phys. A **35**, 1297 (2002).
[^1]: E-mail: emello@fisica.ufpb.br
[^2]: E-mail: valdir@fisica.ufpb.br
[^3]: E-mail: saharian@ysu.am
---
author:
- 'Min Hyung Cho[^1]'
- 'Jingfang Huang [^2]'
- Dangxing Chen
- 'Wei Cai[^3]'
title: '[[A Heterogeneous FMM for 2-D Layered Media Helmholtz Equation I: Two & Three Layers Cases]{}]{}[^4]'
---
[^1]: Department of Mathematical Sciences, University of Massachusetts Lowell, Lowell, MA 01854, ()
[^2]: Department of Mathematics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3250. (, )
[^3]: Corresponding author, Beijing Computational Science Research Center, Beijing, China, (); Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 ().
[^4]: Submitted to the editors DATE.
---
abstract: 'We present VIDE, the Void IDentification and Examination toolkit, an open-source Python/C++ code for finding cosmic voids in galaxy redshift surveys and $N$-body simulations, characterizing their properties, and providing a platform for more detailed analysis. At its core, VIDE uses a substantially enhanced version of <span style="font-variant:small-caps;">zobov</span> (Neyrinck 2008) to calculate a Voronoi tessellation for estimating the density field and to perform a watershed transform to construct voids. Additionally, VIDE provides significant functionality for both pre- and post-processing: for example, VIDE can work with volume- or magnitude-limited galaxy samples with arbitrary survey geometries, or dark matter particles or halo catalogs in a variety of common formats. It can also randomly subsample inputs and includes a Halo Occupation Distribution model for constructing mock galaxy populations. VIDE uses the watershed levels to place voids in a hierarchical tree, outputs a summary of void properties in plain ASCII, and provides a Python API to perform many analysis tasks, such as loading and manipulating void catalogs and particle members, filtering, plotting, computing clustering statistics, stacking, comparing catalogs, and fitting density profiles. While centered around <span style="font-variant:small-caps;">zobov</span>, the toolkit is designed to be as modular as possible and accommodate other void finders. VIDE has been in development for several years and has already been used to produce a wealth of results, which we summarize in this work to highlight the capabilities of the toolkit. VIDE is publicly available at [http://bitbucket.org/cosmicvoids/vide\_public]{} and .'
address:
- 'Sorbonne Universités, UPMC Univ Paris 06, UMR7095, Institut d’Astrophysique de Paris, F-75014, Paris, France'
- 'CNRS, UMR7095, Institut d’Astrophysique de Paris, F-75014, Paris, France'
- 'Center for Cosmology and AstroParticle Physics, Ohio State University, Columbus, OH 43210, USA'
- 'Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA'
- 'Department of Astronomy, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA'
- 'Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA'
- 'INAF - Osservatorio Astronomico di Trieste, Via Tiepolo 11, 1-34143, Trieste, Italy'
- 'INFN sez. Trieste, Via Valerio 2, 1-34127, Trieste, Italy'
- 'Department of Astronomy, Ohio State University, Columbus, OH 43210, USA'
- 'Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235, USA'
- 'Jeremiah Horrocks Institute, University of Central Lancashire, Preston, PR1 2HE, United Kingdom'
author:
- 'P. M. Sutter'
- Guilhem Lavaux
- Nico Hamaus
- Alice Pisani
- 'Benjamin D. Wandelt'
- Mike Warren
- 'Francisco Villaescusa-Navarro'
- Paul Zivick
- Qingqing Mao
- 'Benjamin B. Thompson'
bibliography:
- 'vide.bib'
title: 'VIDE: The Void IDentification and Examination Toolkit'
---
cosmology: large-scale structure of universe ,methods: data analysis
Introduction
============
Cosmic voids are emerging as a novel probe of both cosmology and astrophysics, as well as fascinating objects of study themselves. These large empty regions in the cosmic web, first discovered over thirty years ago [@Gregory1978; @Joeveer1978; @Kirshner1981], are now known to fill up nearly the entire volume of the Universe [@Hoyle2004; @Pan2011; @Sutter2012a]. These voids exhibit some intriguing properties. For example, while apparently just simple vacant spaces, they actually contain a complex, multi-level hierarchical dark matter substructure [@vandeWey1993; @Gottlober2003; @Aragon2012]. Indeed, the interiors of voids appear as miniature cosmic webs, albeit at a different mean density [@Goldberg2004]. However, these void substructures obey simple scaling relations that enable direct translations of void properties between different tracer types (e.g., galaxies and dark matter) [@Benson2003; @Ricci2014; @Sutter2013a]. Their internal growth is relatively simple: voids do not experience major merger events over their long lifetimes [@Sutter2014a] and their interiors are largely unaffected by larger-scale environments [@Dubinski1993; @Fillmore1984; @vandeWey2009].
As underdense regions, voids are the first objects in the large-scale structure to be dominated by dark energy. This fact coupled with their simple dynamics makes them a unique and potentially potent probe of cosmological parameters, either through their intrinsic properties [e.g., @Biswas2010; @Bos2012], exploitation of their statistical isotropy via the test [@LavauxGuilhem2011; @Sutter2012b; @Sutter2014b], or cross-correlation with the cosmic microwave background [@Thompson1987; @Granett2008; @Ilic2013; @Planck2013b; @Cai2014]. Additionally, fifth forces and modified gravity are unscreened in void environments, making them singular probes of exotic physics [e.g., @Li2012; @Clampitt2013; @Spolyar2013; @Carlesi2013a].
Voids offer a unique laboratory for studying the relationship between galaxies and dark matter unaffected by complicated baryonic physics. As noted above, there appears to be a self-similar relationship between voids in dark matter distributions and voids in galaxies [@Sutter2013a] via a universal density profile [@Hamaus2014 hereafter HSW]. Void-galaxy cross-correlation analyses also reveal a striking feature: the large-scale clustering power of compensated voids is identically zero, which may give rise to a static cosmological ruler [@Hamaus2013]. Observationally, measurements of the anti-lensing shear signal of background galaxies have revealed the internal dark matter substructure in voids [@Melchior2014; @Clampitt2014], and Ly-alpha absorption measurements have illuminated dark matter properties in void outskirts [@Tejos2012]. Finally, studying the formation of void galaxies reveals the secular evolution of dark matter halos [@Rieder2013] and their mass function [@Neyrinck2014].
Voids present a useful region for investigating astrophysical phenomena, as well. For example, the detection of magnetic fields within voids constrains the physics of the primordial Universe [@Taylor2011; @Beck2013]. Contrasting galaxies in low- and high-density environments probes the relationship between dark matter halo mass and galaxy evolution [@VandeWeygaertR.2011; @Kreckel2011; @Ceccarelli2012; @Hoyle2012].
Given the burgeoning interest in voids, there remains surprisingly little publicly-accessible void information. There are a few public catalogs of voids identified in galaxy redshift surveys, primarily the SDSS [e.g., @Pan2011; @Sutter2012a; @Nadathur2014; @Sutter2013c; @Sutter2014b], and there are fewer still catalogs of voids found in simulations and mock galaxy populations [@Sutter2013a]. And while there are many published methods for finding voids based on a variety of techniques, such as spherical underdensities [@Hoyle2004; @Padilla2005], watersheds [@Platen2007; @Neyrinck2008], and phase-space dynamics [@Lavaux2010; @vandeWey2010; @Sousbie2011; @Cautun2013; @Neyrinck2013], most codes remain private. In order to accommodate the expanding application of voids and to engender the development of communities and collaborations, it is essential to provide easy-to-use, flexible, and strongly supported void-finding codes.
In this paper we present VIDE[^1], for Void IDentification and Examination, a toolkit based on the publicly-available watershed code <span style="font-variant:small-caps;">zobov</span> [@Neyrinck2008] for finding voids but considerably enhanced and extended to handle a variety of simulations and observations. VIDE also provides an extensive interface for analyzing void catalogs. In Section \[sec:inputs\] we outline the input data options for void finding, followed by Section \[sec:finding\] where we describe our void finding technique and our extensions and modifications to <span style="font-variant:small-caps;">zobov</span>. Section \[sec:analysis\] details our Python-based analysis toolkit functionality, and Section \[sec:guide\] is a quick-start user’s guide. We summarize and provide an outlook for future uses and upgrades to VIDE in Section \[sec:conclusions\].
Input Data Options {#sec:inputs}
==================
Simulations
-----------
To identify voids in $N$-body dark matter populations, VIDE is able to read [Gadget]{} [@Gadget], [FLASH]{} [@Dubey2008], and [RAMSES]{} [@ramses] simulation outputs, files in the Self-Describing Format [@warren2013], and generic ASCII files listing positions and velocities. Void finding can be done on the dark matter particles themselves, or on randomly subsampled subsets with user-defined mean densities. Subsampling can be done either in a post-processing step or *in situ* during void finding.
VIDE can also find voids in halo populations. The user must provide an ASCII file and specify the columns containing the halo mass, position, and other properties. The user can use all identified halos or specify a minimum mass threshold for inclusion in the void finding process.
The user may construct a mock galaxy population from a halo catalog using a Halo Occupation Distribution (HOD) formalism [@Berlind2002]. HOD modeling assigns central and satellite galaxies to a dark matter halo of mass $M$ according to a parametrized distribution. VIDE implements the five-parameter model of @Zheng2007, where the mean number of central galaxies is given by $$\left\langle N_{\rm cen}(M)\right\rangle = \frac{1}{2} \left[
1 + \operatorname{erf}\left(\frac{\log M - \log M_{\rm min}}{\sigma_{\log M}}\right)
\right]$$ and the mean number of satellites is given by $$\left< N_{\rm sat}(M)\right> = \left\langle N_{\rm cen}(M) \right\rangle
\left( \frac{M-M_0}{M_1'}\right)^\alpha,$$ where $M_{\rm min}$, $\sigma_{\log M}$, $M_0$, $M_1'$, and $\alpha$ are free parameters that must be fitted to a given survey. The probability distribution of central galaxies is a nearest-integer distribution (i.e., all halos above a given mass threshold host a central galaxy), and satellites follow Poisson statistics. These satellites are placed around the central galaxy at random positions with a probability density given by the NFW [@NFW] profile for a halo of the given mass. The user can also specify an overall mean density in case the HOD model was generated from a simulation with different cosmological parameters than the one used for void finding, which causes a mismatch in the target galaxy density. While not a full fix (which would require a new HOD fit), this at least alleviates some of the mismatch.
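The two occupation functions above are easy to sketch in code. The following is an illustrative implementation of the @Zheng2007 means together with one way to draw a realization (Bernoulli centrals, Poisson satellites); it is a sketch of the model, not VIDE's internal code, and the parameter values at the end are placeholders rather than fitted values.

```python
import numpy as np
from scipy.special import erf

def mean_ncen(M, logMmin, sigma_logM):
    """Mean number of central galaxies in a halo of mass M (Zheng et al. 2007)."""
    return 0.5*(1.0 + erf((np.log10(M) - logMmin)/sigma_logM))

def mean_nsat(M, logMmin, sigma_logM, M0, M1p, alpha):
    """Mean number of satellites; zero below the cutoff mass M0."""
    diff = np.clip(np.asarray(M, dtype=float) - M0, 0.0, None)
    return mean_ncen(M, logMmin, sigma_logM)*(diff/M1p)**alpha

def sample_hod(M, rng, logMmin, sigma_logM, M0, M1p, alpha):
    """Draw one realization: Bernoulli centrals, Poisson satellites."""
    ncen = (rng.random(np.shape(M)) < mean_ncen(M, logMmin, sigma_logM)).astype(int)
    nsat = rng.poisson(mean_nsat(M, logMmin, sigma_logM, M0, M1p, alpha))
    return ncen, nsat

# placeholder parameters (illustrative only; real values must be fit to a survey)
params = dict(logMmin=12.0, sigma_logM=0.2, M0=1e12, M1p=1e13, alpha=1.0)
masses = np.logspace(11, 14, 7)
ncen, nsat = sample_hod(masses, np.random.default_rng(42), **params)
```

By construction, $\left\langle N_{\rm cen}\right\rangle =1/2$ at $\log M=\log M_{\rm min}$ and the satellite mean vanishes below $M_{0}$.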
We have included — but not fully integrated into the pipeline — a separate code for fitting HOD parameters to a given simulation. The implemented model has three parameters, and given two of those the third is fixed by demanding that the abundance of galaxies of a given population reproduces the observed value. We then explore the 2-dimensional space of the other two parameters trying to minimize the $\chi^2$ that results from comparing the two-point correlation function of the simulated galaxies with the observed data.
In all cases (particles, halos, or mock galaxies), void finding can be done on the real-space particle positions or after the particles have been placed on a lightcone (i.e., redshift space) assuming given cosmological parameters, where the $z$-axis of the simulation is taken to be the line of sight. Voids can also be found after particle positions have been perturbed using their peculiar velocities. The user may define independent slices and sub-boxes of the full simulation volume, and VIDE will handle all periodic effects and boundary cleaning (see Section \[sec:finding\]) automatically.
Observations
------------
In addition to providing an ASCII file listing galaxy right ascension, declination, and redshift, the user must provide a pixelization of the survey mask using <span style="font-variant:small-caps;">healpix</span> [@Gorski2005][^2]. The <span style="font-variant:small-caps;">healpix</span> description of the sphere provides equal-area pixels, and the <span style="font-variant:small-caps;">healpix</span> implementation itself provides built-in tools to easily determine which pixels lie on the boundary between the survey area and any masked region. This is essential for VIDE to constrain voids to the survey volume (see Section \[sec:finding\]). Figure \[fig:mask\] shows a pixelization of the SDSS DR9 mask and the location of the boundary pixels. To accurately capture the shape of the mask we require a resolution of at least $n_{\rm side}=512$.
VIDE also provides a utility for constructing a rudimentary mask from the galaxy positions themselves.
The default behavior is to assume that the galaxy population is volume-limited (i.e., uniform galaxy density as a function of redshift). However, the user may provide a selection function and identify voids in a full magnitude-limited survey. As mentioned in @Neyrinck2008, in this case the Voronoi densities will be weighted prior to void finding such that $\rho' = \rho/w(z)$, where $w(z)$ is the relative number density as a function of redshift (note that this functionality can also be used for arbitrary re-weighting, if desired). This re-weighting is only used in the watershed phase for purposes of constructing voids, and does not enter into later volume or density calculations. If re-weighting is chosen then the selection function will be used to calculate a global mean density.
Void Finding {#sec:finding}
============
The core of our void finding algorithm is <span style="font-variant:small-caps;">zobov</span> [@Neyrinck2008], which creates a Voronoi tessellation of the tracer particle population and uses the watershed transform to group Voronoi cells into zones and subsequently voids [@Platen2007]. The Voronoi tessellation provides a density field estimator based on the underlying particle positions. By implicitly performing a Delaunay triangulation (the dual of the Voronoi tessellation), <span style="font-variant:small-caps;">zobov</span> assumes constant density across the volume of each Voronoi cell, which effectively sets a local smoothing scale for the continuous field necessary to perform the watershed transform. There is no additional smoothing. For magnitude-limited surveys, where the mean galaxy density varies as a function of redshift, we weight the Voronoi cell volumes by the local value of the radial selection function.
To construct voids the algorithm first groups nearby Voronoi cells into [*zones*]{}, which are local catchment basins. Next, the watershed transform merges adjacent zones into voids by finding minimum-density barriers between them and joining zones together to form larger agglomerations. We impose a density-based threshold within <span style="font-variant:small-caps;">zobov</span> where adjacent zones are only added to a void if the density of the wall between them is less than $0.2$ times the mean particle density $\bar{n}$. For volume-limited surveys we calculate the mean particle density by dividing the total number of galaxies by the survey volume (computed from the <span style="font-variant:small-caps;">healpix</span> mask and redshift extents). For magnitude-limited surveys the mean particle density is estimated from the selection function. This density criterion prevents voids from expanding deeply into overdense structures and limits the depth of the void hierarchy [@Neyrinck2008].
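The merging rule — join two zones only if the wall between them lies below $0.2\,\bar{n}$ — can be illustrated with a toy union-find sketch. All names and the data layout here are invented for illustration; <span style="font-variant:small-caps;">zobov</span>'s implementation also tracks the full void hierarchy.

```python
def merge_zones(saddles, n_zones, nbar, threshold=0.2):
    """Toy watershed merge: 'saddles' is a list of (density, zone_a, zone_b)
    wall minima; zones are joined only when the wall density is below
    threshold * nbar. Returns a merged-group label per zone."""
    parent = list(range(n_zones))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for dens, a, b in sorted(saddles):     # shallowest walls first
        if dens < threshold * nbar:
            parent[find(a)] = find(b)
    return [find(i) for i in range(n_zones)]
```

Without the threshold every saddle would eventually be crossed and all zones would collapse into one super-void, which is exactly the behavior the $0.2\,\bar{n}$ cut prevents.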
However, this process does not place a restriction on the density of the initial zone, and thus in principle a void can have any minimum density. By default <span style="font-variant:small-caps;">vide</span> reports every identified basin as a void, but in the section below we describe some provided facilities for filtering the catalog based on various criteria, depending on the specific user application.
In this picture, a void is simply a depression in the density field: voids are aspherical aggregations of Voronoi cells that share a common low-density basin and are bounded by a common set of higher-density walls, as demonstrated by Figure \[fig:void\], which shows a typical 20 [$h^{-1}$Mpc]{} void identified in the SDSS DR7 galaxy survey [@Sutter2012a]. This also means that voids may have any *mean* density, since the watershed includes in the void definition all wall particles all the way up to the very highest-density separating ridgeline.
We may construct a nested hierarchy of voids [@LavauxGuilhem2011; @Bos2012] using the topologically-identified watershed basins and ridgelines. We begin by identifying the initial zones as the deepest voids, and as we progressively merge voids across ridgelines we establish super-voids. There is no unique definition of a void hierarchy, and we take the semantics of @LavauxGuilhem2011: a parent void contains all the zones of a sub-void plus at least one more. All voids have only one parent but potentially many children, and the children of a parent occupy distinct subvolumes separated by low-lying ridgelines. There are also childless “field” voids. Figure \[fig:hierarchy\] shows a cartoon of this void hierarchy construction. Without the application of the $0.2 \bar{n}$ density cut discussed above, <span style="font-variant:small-caps;">zobov</span> would identify a single super-void spanning the entire volume, and thus there would be a single hierarchical tree. However, with the cut applied there are multiple top-level voids.
For historical reasons [@Sutter2012b; @Sutter2012a], the default catalog removes voids with high central densities ($\rho (R < 0.25 R_{\rm eff}) > 0.2 \bar \rho$) and any child voids, but the user can trivially access all voids.
We have made several modifications and improvements to the original <span style="font-variant:small-caps;">zobov</span> algorithm, which itself is an inversion of the halo-finding algorithm <span style="font-variant:small-caps;">voboz</span> [@voboz]. First, we have made many speed and performance enhancements, and have re-written large portions in a templated C++ framework for greater modularity and flexibility. This also strengthens <span style="font-variant:small-caps;">zobov</span> with respect to numerical precision. Floating-point round-off can occasionally lead to disjoint Voronoi graphs, especially in high-density regions: one particle may be linked to another while its partner does not link back to it. We therefore enforce bijectivity in the Voronoi graph (so that the tessellation is self-consistent) by ensuring that all links are bidirectional.
We apply <span style="font-variant:small-caps;">zobov</span> to very large simulations. To run the analysis on such simulations while keeping memory consumption within reasonable bounds, it is necessary to split the volume on which the Delaunay tessellation is performed into several subvolumes. Additionally, many large simulations store the particle positions in single precision. The original <span style="font-variant:small-caps;">zobov</span> shifted each subvolume such that its geometrical center was always at the coordinate $(0,0,0)$. However, that involves computing differences with single-precision floats, increasing the numerical noise. In practice we found that shifting the subvolume to place a corner at $(0,0,0)$ incurs slightly fewer problems. However, this is insufficient to completely mitigate the issue that when subvolumes are merged the volumes of the tetrahedra are not exactly the same on both sides of the subvolume boundary. We therefore again enforce the connectivity of the mesh, but lose the Delaunay properties at each subvolume boundary. While this is not exact, it still constructs a fully connected mesh and allows the computation of the watershed transform.
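The corner-shifting trick can be demonstrated in a few lines. The function below is a hypothetical sketch, not <span style="font-variant:small-caps;">zobov</span> code: translating a subvolume so its minimum corner sits at the origin keeps coordinate magnitudes small, so single-precision rounding error on coordinate differences is reduced.

```python
import numpy as np

def shift_to_corner(pos):
    """Shift a subvolume so its minimum corner sits at (0,0,0) before
    casting to single precision for the tessellation. Small coordinate
    magnitudes mean smaller float32 rounding error."""
    corner = pos.min(axis=0)
    return (pos - corner).astype(np.float32), corner
```

The returned corner allows the shift to be undone when subvolumes are merged back together.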
Finally, we have improved the performance of the watershed transform in <span style="font-variant:small-caps;">zobov</span> using a priority queue algorithm to sort out the zones that need to be processed according to their core density and spilling density (the saddle point with the minimum density). The priority queue algorithm has a $\mathcal{O}(1)$ time complexity for getting the next element and replaces the $\mathcal{O}(N)$ full search algorithm from the original <span style="font-variant:small-caps;">zobov</span>. The cost of insertion is at most the number of zones that were not processed but in practice this is negligible. With these improvements we have identified voids in simulations with up to $1024^3$ particles in $\sim 10$ hours using 16 cores.
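The priority-queue ordering can be sketched with Python's standard-library `heapq`. This is a schematic of the idea, not <span style="font-variant:small-caps;">zobov</span>'s C++ implementation: with a binary heap, peeking at the next zone is $\mathcal{O}(1)$ and re-ordering after removal is $\mathcal{O}(\log N)$, still far cheaper than an $\mathcal{O}(N)$ full scan per step.

```python
import heapq

def process_zones(core_densities):
    """Return zone IDs in the order the watershed would process them:
    lowest core density first, via a min-heap keyed on density."""
    heap = [(d, zid) for zid, d in enumerate(core_densities)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, zid = heapq.heappop(heap)   # next zone in O(log N)
        order.append(zid)
    return order
```

In the real algorithm each pop would trigger the spilling/merging logic for that zone rather than a simple append.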
By default <span style="font-variant:small-caps;">vide</span> also uses the latest version of the [qhull]{} library[^3] [@qhull], where we take advantage of provided functions for constructing Voronoi graphs that are more stable in high-density regions.
Since <span style="font-variant:small-caps;">zobov</span> was originally developed in the context of simulations with periodic boundary conditions, care must be taken with observations and simulation sub-volumes. To prevent the growth of voids outside survey boundaries, we place a large number of mock particles along any identified edge (Figure \[fig:mask\]) and along the redshift caps. The user specifies the density of mock particles, which are placed randomly within the projected volume of the boundary pixels. To prevent spillover and misidentification of edge zones, studies indicate that the density of these mock particles should be at least the density of the sample studied, and preferably as high as computationally tolerable [@Sutter2012a; @Nadathur2014; @Sutter2013d]. We assign essentially infinite density to these mock particles, preventing the watershed procedure from merging zones external to the survey. Since their local volumes are arbitrary, we prevent these mock particles from participating in any void by disconnecting their adjacency links in the Voronoi graph.
This process leaves a population of voids near — but not directly adjacent to — the survey edge, which can induce a subtle bias in the alignment distribution, affecting cosmological probes such as the test [@Sutter2012b]. Also, while these *edge* voids are indeed underdensities, making them useful for certain applications, their true extents and shapes are unknown. <span style="font-variant:small-caps;">vide</span> thus provides two void catalogs: *all*, which includes every identified void, and *central*, where voids are guaranteed to sit well away from any survey boundaries or internal holes: the maximum distance from the void center to any member particle is less than the distance to the nearest boundary. Voids near any redshift caps (defined using the same criterion) are removed from all catalogs.
For subvolumes taken from a larger simulation box (see the discussion above for input data options), the edges of the subvolume are assumed to be periodic for purposes of the watershed, regardless of whether that edge is actually periodic or not. However, any void near a non-periodic edge is removed from all catalogs using the distance criterion described in the preceding paragraph, since these voids will have ill-defined extents. For all simulation-based catalogs there is no difference between the *all* and *central* catalogs.
We use the mean particle spacing to set a lower size limit for voids because of the effects of shot noise: <span style="font-variant:small-caps;">vide</span> does not include any void with effective radius smaller than $\bar{n}^{-1/3}$, where $\bar{n}$ is the mean number density of the sample. We define the effective radius as $$R_{\rm eff} \equiv \left( \frac{3}{4 \pi} V \right)^{1/3},$$ where $V$ is the total volume of all the Voronoi cells that make up the void.
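The effective-radius definition and the shot-noise size cut translate directly into code. The helpers below are a sketch under the stated definitions (the function names are not part of the <span style="font-variant:small-caps;">vide</span> API):

```python
import numpy as np

def effective_radius(cell_volumes):
    """R_eff = (3 V / 4 pi)^(1/3), with V the summed Voronoi volumes."""
    V = np.sum(cell_volumes)
    return (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)

def passes_size_cut(cell_volumes, nbar):
    """Reject voids smaller than the mean particle spacing nbar^(-1/3)."""
    return effective_radius(cell_volumes) >= nbar ** (-1.0 / 3.0)
```

A void whose total Voronoi volume equals that of a unit sphere has $R_{\rm eff}=1$, and survives the cut only if the mean particle spacing is below unity.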
Figure \[fig:density\] shows an example void population identified with <span style="font-variant:small-caps;">vide</span>, taken from the analysis of @Hamaus2013. In this figure we show a slice from an $N$-body simulation, a set of mock galaxies painted onto the simulation using the HOD formalism discussed above, and the distribution of voids identified in the mock galaxies.
<span style="font-variant:small-caps;">vide</span> provides some basic derived void information. The most important quantity is the *macrocenter*, or volume-weighted center of all the Voronoi cells in the void: $${\bf X}_v = \frac{1}{\sum_i V_i} \sum_i {\bf x}_i V_i,
\label{eq:macrocenter}$$ where ${\bf x}_i$ and $V_i$ are the positions and Voronoi volumes of each tracer particle $i$, respectively. <span style="font-variant:small-caps;">vide</span> also computes void shapes by taking void member particles and constructing the inertia tensor: $$\begin{aligned}
M_{xx} & = &\sum_{i=1}^{N_p} (y_i^2 + z_i^2) \\
M_{xy} & = & - \sum_{i=1}^{N_p} x_i y_i, \nonumber\end{aligned}$$ where $N_p$ is the number of particles in the void, and $x_i$, $y_i$, and $z_i$ are coordinates of the particle $i$ relative to the void macrocenter. The other components of the tensor are obtained by cyclic permutations. This definition of the inertia tensor is designed to give greater weight to the particles near the void edge, which is useful for applications such as the test. Other definitions, such as volume-weighted tensors, can be implemented trivially with the toolkit functionality discussed below. We use the inertia tensor to compute eigenvalues and eigenvectors and form the ellipticity: $$\epsilon = 1- \left( \frac{J_1}{J_3}\right)^{1/4},
\label{eq:ellip}$$ where $J_1$ and $J_3$ are the smallest and largest eigenvalues of the inertia tensor, respectively.
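The macrocenter and ellipticity definitions above can be sketched as follows. This is an illustrative implementation of Eqs. \[eq:macrocenter\] and \[eq:ellip\] (names are hypothetical, not the toolkit's API): the off-diagonal terms follow the $M_{xy}$ pattern and the diagonals the $M_{xx}$ pattern, with the remaining components by cyclic permutation.

```python
import numpy as np

def macrocenter(x, vol):
    """Volume-weighted center of the void's Voronoi cells."""
    return (x * vol[:, None]).sum(axis=0) / vol.sum()

def ellipticity(x, vol):
    """Inertia tensor of member particles about the macrocenter,
    then epsilon = 1 - (J1/J3)^(1/4) from its extreme eigenvalues."""
    r = x - macrocenter(x, vol)
    M = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            if i == j:   # e.g. M_xx = sum(y^2 + z^2)
                M[i, i] = np.sum(r[:, (i + 1) % 3] ** 2 + r[:, (i + 2) % 3] ** 2)
            else:        # e.g. M_xy = -sum(x * y)
                M[i, j] = -np.sum(r[:, i] * r[:, j])
    J = np.sort(np.linalg.eigvalsh(M))
    return 1.0 - (J[0] / J[-1]) ** 0.25
```

A spherically symmetric particle set gives $\epsilon = 0$; stretching it along one axis drives $\epsilon$ above zero.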
Post-Processing & Analysis {#sec:analysis}
==========================
<span style="font-variant:small-caps;">vide</span> provides a Python-based application programming interface (API) for loading and manipulating the void catalog and performing analysis and plotting. The utilities described below present simulation and observation void catalogs on an equal footing: density and volume normalizations, the presence of boundary particles, and other differences are handled internally by <span style="font-variant:small-caps;">vide</span> so that the user does not need to implement special cases, and simulations and observations may be directly compared to each other.
All the following analysis routines are compatible with releases of the Public Cosmic Void Catalog after version *2013.10.25*. Complete documentation is available at <http://bitbucket.org/cosmicvoids/vide_public/wiki/>.
Catalog Access
--------------
After loading a void catalog, the user has immediate access to all void properties (ID, macrocenter, radius, density contrast, RA, Dec, hierarchy level, ellipticity, etc.) as well as the positions ($x$, $y$, $z$, RA, Dec, redshift), velocities (if known), and local Voronoi volumes of all void member particles. By default these quantities are presented to the user as members of an object, but we provide a utility for selectively converting these quantities to NumPy arrays for more efficient processing. The user additionally has access to all particle or galaxy sample information, such as redshift extents, the mask, simulation extents and cosmological parameters, and other information. Upon request the user can also load all particles in the simulation or observation.
Since <span style="font-variant:small-caps;">vide</span> by default returns every void above the minimum size threshold set by the mean interparticle spacing of the sample, we provide some simple facilities for performing the most common catalog filtering. For example, the user may select voids based on size, position in the hierarchy, central density, minimum density, and density contrast.
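The typical cuts can be expressed as simple boolean selections over arrays of void properties. The helper below is a hypothetical array-based sketch — the actual <span style="font-variant:small-caps;">vide</span> facilities operate on a loaded catalog object — combining a minimum-radius cut with a central-density cap relative to the mean density $\bar{n}$:

```python
import numpy as np

def filter_voids(radius, central_density, nbar, r_min=0.0, max_central=0.2):
    """Indices of voids passing a minimum effective radius and a
    central-density cap of max_central * nbar. Illustrative only."""
    keep = (radius >= r_min) & (central_density <= max_central * nbar)
    return np.flatnonzero(keep)
```

Additional criteria (hierarchy level, minimum density, density contrast) compose the same way, as further boolean factors in `keep`.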
Plotting
--------
<span style="font-variant:small-caps;">vide</span> includes several built-in plotting routines. First, users may plot cumulative number functions of multiple catalogs on a logarithmic scale. Volume normalizations are handled automatically, and 1$\sigma$ Poisson uncertainties are shown as shaded regions, as demonstrated in Figure \[fig:numberfunc\]. Second, users may plot a slice from a single void and its surrounding environment. In these plots we bin the background particles onto a two-dimensional grid and plot the density on a logarithmic scale. We draw the void member galaxies as small semi-transparent disks with radii equal to the effective radii of their corresponding Voronoi cells. Figure \[fig:matching\] highlights these kinds of plots. The user may also plot an ellipticity distribution for any sample of voids. All plots are saved in png, pdf, and eps formats.
Catalog Comparison
------------------
The user can directly compare two void catalogs by using a built-in function for computing the overlap. This function attempts to find a one-to-one match between voids in one catalog and another and reports the relative properties (size, ellipticity, etc.) between them. It also records which voids cannot find a reliable match and how many potential matches a void may have.
The potential matches are found by considering all voids in the matched void catalog whose centers lie within the watershed volume of the original void. Then for each potential matched void the user can choose to use either the unique particle IDs or the amount of volume overlap to find matches. We take the potential matched void with the greatest amount of overlap (volume or number of particles) as the best match.
In the case of volume overlap, to simplify the measurement we place each particle at the center of a sphere whose volume is the same as its Voronoi cell. We approximate the Voronoi volumes as spheres to provide a stricter definition of overlap, since the Voronoi volume can be highly elongated and lead to unwanted matching. We measure the distance between particles and assume they overlap if their distances meet the criterion $$d \le \alpha ( R_1 + R_2 ),$$ where $d$ is the distance and $R_1$ and $R_2$ are the radii of the spheres assigned to particles 1 and 2, respectively. The user may select the value of $\alpha$, but we found $\alpha=0.25$ to strike the best balance between estimating overlap conservatively and accounting for our spherical approximation of each particle's Voronoi volume. If the particles meet the distance criterion, the volume is added to the total amount of overlap; the void with the greatest overlap is considered the best match.
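The per-particle overlap test reduces to a one-line criterion once each Voronoi volume is converted to an equivalent sphere radius. A minimal sketch (function name is illustrative):

```python
import numpy as np

def particles_overlap(p1, p2, v1, v2, alpha=0.25):
    """Sphere-proxy overlap: each particle gets a sphere with its Voronoi
    volume; the pair overlaps if d <= alpha * (R1 + R2)."""
    r1 = (3.0 * v1 / (4.0 * np.pi)) ** (1.0 / 3.0)
    r2 = (3.0 * v2 / (4.0 * np.pi)) ** (1.0 / 3.0)
    d = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))
    return d <= alpha * (r1 + r2)
```

With $\alpha=0.25$, two particles whose proxy spheres would just touch are still counted as disjoint, which is the conservative behavior described above.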
A more detailed discussion of the matching process can be found in @Sutter2013b. Figure \[fig:matching\] shows how a void identified in a real-space HOD mock galaxy population has been matched to a void identified in a corresponding redshift-space population.
Clustering Statistics
---------------------
<span style="font-variant:small-caps;">vide</span> allows the user to compute simple two-point clustering statistics, i.e., power spectra and correlation functions. To perform this, <span style="font-variant:small-caps;">vide</span> reads in particle positions and uses a cloud-in-cell mesh interpolation scheme [@Hockney1988] to construct a three-dimensional density field of fluctuations $$\delta(\mathbf{r}) = \frac{n(\mathbf{r})}{\bar{n}} - 1\;,$$ where $n(\mathbf{r})$ is the spatially varying number density of tracers with a mean of $\bar{n}$.
Then, Fourier modes of the density fluctuation with mesh wave vector $\mathbf{k}$ are computed using a standard DFT algorithm (computed using an FFT), $$\delta(\mathbf{k})=\frac{1}{N_c}\sum_\mathbf{r} \delta(\mathbf{r})\exp(-i\mathbf{k}\cdot\mathbf{r})\;,$$ and the angle-averaged power spectrum is estimated as $$P(k) = \frac{V}{N_k}\sum_{\Delta k}\frac{\left|\delta(\mathbf{k})\right|^2}{W^2_\mathrm{cic}(\mathbf{k})}\;.$$ Here $V$ denotes the simulation volume, $N_c$ the number of grid cells in the mesh, $N_k$ the number of Fourier modes in a $k$-shell of thickness $\Delta k$, and $$W_\mathrm{cic}(\mathbf{k})=\prod_{i=x,y,z}\frac{\sin^2(k_i)}{k_i^2} \;$$ is the cloud-in-cell window function to correct for artificial suppression of power originating from the mesh assignment scheme. An inverse Fourier transform of the power spectrum yields the correlation function, $$\xi(r)=\frac{1}{V}\sum_\mathbf{k} P(\mathbf{k})\exp(i\mathbf{k}\cdot\mathbf{r})\;.$$
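The pipeline above — mesh assignment, FFT, angle-averaged binning — can be sketched compactly. The version below is a simplification, not the <span style="font-variant:small-caps;">vide</span> routine: it uses nearest-grid-point rather than cloud-in-cell assignment and omits the window deconvolution, keeping only the structure of the estimator.

```python
import numpy as np

def power_spectrum(pos, box, ngrid=32, nbins=8):
    """Schematic angle-averaged P(k): NGP density assignment, FFT,
    and binning of |delta(k)|^2 in shells of |k|."""
    idx = np.floor(pos / box * ngrid).astype(int) % ngrid
    n = np.zeros((ngrid,) * 3)
    np.add.at(n, tuple(idx.T), 1.0)          # NGP mass assignment
    delta = n / n.mean() - 1.0
    dk = np.fft.rfftn(delta)
    p3d = np.abs(dk) ** 2 * box ** 3 / ngrid ** 6
    kf = 2.0 * np.pi / box                   # fundamental mode
    k1 = np.fft.fftfreq(ngrid, d=1.0 / ngrid) * kf
    kz = np.fft.rfftfreq(ngrid, d=1.0 / ngrid) * kf
    kmag = np.sqrt(k1[:, None, None] ** 2 + k1[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    edges = np.linspace(kf, kmag.max(), nbins + 1)
    which = np.digitize(kmag.ravel(), edges)
    pvals = p3d.ravel()
    pk = np.array([pvals[which == b].mean() if np.any(which == b) else 0.0
                   for b in range(1, nbins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk
```

Replacing NGP with CIC and dividing each mode by $W^2_{\mathrm{cic}}(\mathbf{k})$ recovers the estimator written above.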
As this procedure can be applied to any particle type, <span style="font-variant:small-caps;">vide</span> returns power spectra and correlation functions for both void centers and the tracer particles used to identify voids (e.g., dark matter particles or mock galaxies). In addition, it provides the cross-power spectrum and cross-correlation function between the void centers and the tracer particles. This routine creates plots for projected density fields (Figure \[fig:density\]) and power spectra and correlation functions (Figure \[fig:xcor1D\]).
Stacking & Profiles
-------------------
The user may construct three-dimensional stacks of voids, where void macrocenters are aligned and particle positions are expressed relative to the stack center. The stacked void may optionally contain only void member particles or all particles within a certain radius. Note that this stacking routine does not re-align observed voids to share a common line of sight. Given this stacked profile, <span style="font-variant:small-caps;">vide</span> provides routines for building a spherically averaged radial density profile and fitting the universal HSW void profile to it. All proper normalizations are handled internally. Figure \[fig:profile\] shows an example of <span style="font-variant:small-caps;">vide</span>-produced density profiles and best-fit HSW profile curves. These profiles are taken at fixed void size but from many different samples, such as high-density dark matter and mock galaxy populations.
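The spherically averaged stacked profile amounts to histogramming rescaled particle distances and dividing by shell volumes. A minimal sketch of the procedure (not the <span style="font-variant:small-caps;">vide</span> routine itself; names and normalization conventions are assumptions):

```python
import numpy as np

def stacked_profile(particles, centers, r_eff, nbins=10, r_max=2.0):
    """Stack voids: distances are scaled by each void's effective radius,
    histogrammed into shells, and divided by shell volumes.
    'particles' is a list of (N_i, 3) arrays, one per void."""
    edges = np.linspace(0.0, r_max, nbins + 1)
    counts = np.zeros(nbins)
    for pts, c, R in zip(particles, centers, r_eff):
        d = np.linalg.norm(pts - c, axis=1) / R
        counts += np.histogram(d, bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return 0.5 * (edges[1:] + edges[:-1]), counts / shell_vol
```

Rescaling by $R_{\rm eff}$ before stacking is what allows voids of a fixed size class but from different samples to be averaged coherently.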
<span style="font-variant:small-caps;">vide</span> also provides routines for reporting theoretical best-fit HSW profiles for a variety of populations and void sizes, taken from @Hamaus2014 and @Sutter2013a. The user can select the sample density, tracer type, and void size closest to their population of interest. These profiles are useful for many applications, such as ISW predictions and comparing theory to data [e.g., @Pisani2013].
Quick-Start Guide {#sec:guide}
=================
<span style="font-variant:small-caps;">vide</span> uses Python and C++, and so requires only a few prerequisites prior to installation: namely, CMake, Python 2.7, NumPy ($\ge$ 1.6.1), and Matplotlib ($\ge$ 1.1.rc). <span style="font-variant:small-caps;">vide</span> will, by default, automatically download and install all other required Python and C++ libraries (healpy, cython, GSL, Boost, etc.), unless the user indicates via a setup parameter that the libraries are already available on the system.
After installation, the user should move to the `pipeline/datasets` directory and create a *dataset* file. The dataset describes the observation or simulation parameters, listing of input files, the desired subsampling levels, and various other bookkeeping parameters. Inputs are prepared with [prepareInputs]{}, which organizes and (if necessary) converts inputs, creates void-finding pipeline scripts, and performs subsampling or HOD generation if requested by the user.
The user runs the command [generateCatalog]{} on each pipeline script. This command performs the actual void finding in three stages: 1) further processing of inputs (e.g., application of boundary particles) for compatibility with <span style="font-variant:small-caps;">zobov</span>, 2) void finding, and 3) filtering of voids near boundaries, construction of the void hierarchy, and generation of void property summary outputs.
After void finding, <span style="font-variant:small-caps;">vide</span> provides a Python library ([void\_python\_tools.voidUtil]{}) for loading, manipulating, and analyzing the resulting void catalogs, as presented in the previous section. The user simply loads a catalog by pointing to an output directory, and has immediate access to all void properties and member galaxies. Using this loaded catalog the user may exploit all the functionality described above.
Complete documentation is available at <http://bitbucket.org/cosmicvoids/vide_public/wiki/>.
Conclusions {#sec:conclusions}
===========
We have presented and discussed the capabilities of <span style="font-variant:small-caps;">vide</span>, a new Python/C++ toolkit for identifying cosmic voids in cosmological simulations and galaxy redshift surveys. <span style="font-variant:small-caps;">vide</span> performs void identification using a substantially modified and enhanced version of the watershed code <span style="font-variant:small-caps;">zobov</span>. The modifications mostly improve the speed and robustness of the original implementation, and add extensions to handle observational survey geometries within the watershed framework. Furthermore, <span style="font-variant:small-caps;">vide</span> is able to support a variety of mock and real datasets, and provides extensive and flexible tools for loading and analyzing void catalogs. We have highlighted these analysis tools (filtering, plotting, clustering statistics, stacking, profile fitting, etc.) using examples from previous and current research with <span style="font-variant:small-caps;">vide</span>.
The analysis toolkit enables a wide variety of both theoretical and observational void research. For example, the myriad basic and derived void properties available to the user, such as sky positions, shapes, and sizes, permit simple explorations of void properties and cross-correlation with other datasets. Extracting void member particles and their associated volumes can be used for examining galaxy properties in low-density environments. Cross-matching is useful for understanding the impacts of peculiar velocities or galaxy bias, as well as providing a platform for studying the effects of modified gravity or fifth forces on a void-by-void basis. Void power spectra, shape distributions, number functions, and density profiles, easily accessible via <span style="font-variant:small-caps;">vide</span>, are sensitive probes of cosmology. Users may also access previously-fit HSW density profiles, enabling theoretical predictions of the ISW or gravitational lensing signals. The ease of filtering the void catalog on various criteria allows the user to optimize the selection of voids to maximize the scientific return for their particular research aims.
While the current release version of <span style="font-variant:small-caps;">vide</span> is fully functional, we plan a number of further enhancements, such as integrating the HOD fitting routines directly into the pipeline (rather than the standalone version currently packaged), relaxing the <span style="font-variant:small-caps;">zobov</span> constraint of cubic boxes, and implementing angular selection functions. In the immediate future we will add more plotting functionality and include two-dimensional clustering statistic construction capabilities.
<span style="font-variant:small-caps;">vide</span> is community-oriented: the code is currently hosted on Bitbucket, which supports immediate distribution of updates and bug fixes to the user base, as well as providing a host for a wiki and bug tracker. Suggestions and fixes will be incorporated into the code, with numbered versions serving as milestones. Users may utilize the [git]{} functionality to create branches, make modifications, and request merging of their changes into the main release. We have designed <span style="font-variant:small-caps;">vide</span> to be as modular as possible: while currently based on <span style="font-variant:small-caps;">zobov</span>, any void finder that accepts and outputs similar data formats can be included within the toolkit.
There has been explosive growth in void interest and research in the past decade, as indicated by the number of published void-finding algorithms, studies of void properties, and investigations of cosmological probes. The release of <span style="font-variant:small-caps;">vide</span> is a direct response to the growing demand for simple, fast, scalable, and robust tools for finding voids and exploiting their properties for scientific gain. By designing <span style="font-variant:small-caps;">vide</span> to be extensible and modular, the platform can easily grow to meet the needs of current and future void communities.
The code and documentation are currently hosted at , with links to numbered versions at .
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors are heavily indebted to Mark Neyrinck, who wisely and generously made <span style="font-variant:small-caps;">zobov</span> publicly available. To recognize that contribution, the authors ask that <span style="font-variant:small-caps;">zobov</span> be cited alongside <span style="font-variant:small-caps;">vide</span>.
PMS would like to thank Jeremy Tinker for providing the HOD code used in [VIDE]{}.
The authors acknowledge support from NSF Grant NSF AST 09-08693 ARRA. BDW acknowledges funding from an ANR Chaire d’Excellence (ANR-10-CEXC-004-01), the UPMC Chaire Internationale in Theoretical Cosmology, and NSF grants AST-0908902 and AST-0708849. This work, carried out within the ILP LABEX (reference ANR-10-LABX-63), was supported by French state funds managed by the ANR within the Investissements d’Avenir programme (reference ANR-11-IDEX-0004-02).
[^1]: The French word for “empty”.
[^2]: http://healpix.jpl.nasa.gov
[^3]:
---
abstract: 'We study the time evolution of a high-momentum gluon or quark propagating through an infinite, thermalized, partonic medium utilizing a Boltzmann equation approach. We calculate the collisional energy loss of the parton, study its temperature and flavor dependence as well as the the momentum broadening incurred through multiple interactions. Our transport calculations agree well with analytic calculations of collisional energy-loss where available, but offer the unique opportunity to address the medium response as well in a consistent fashion.'
address:
- ' $^{1}$Department of Physics, Andong National University, Andong, South Korea'
- '$^{2}$Department of Physics, Duke University, Durham, NC 27705, USA'
author:
- 'Ghi R. Shin$^{1,2}$, Steffen A. Bass$^{2}$ and Berndt Müller$^{2}$'
bibliography:
- '/Users/bass/Publications/SABrefs.bib'
title: 'Transport Theoretical Description of Collisional Energy Loss in Infinite Quark-Gluon Matter'
---
Introduction
============
The currently prevailing view on the structure of the matter produced in nuclear collisions at the Relativistic Heavy-Ion Collider (RHIC) is anchored by two experimental observations [@Arsene:2004fa; @Adcox:2004mh; @Back:2004je; @Adams:2005dq; @Muller:2006ee]: (1) The emission of hadrons with a transverse momentum $p_T$ of several GeV/c or more is strongly suppressed (jet-quenching), implying the presence of matter with a very large color opacity, and (2) The anisotropic (“elliptic”) flow in non-central collisions is near the ideal hydrodynamic limit, requiring an early onset of the period during which the expansion is governed by fluid dynamics (earlier than 1 fm/c after the initial impact) as well as nearly ideal fluid properties with a viscosity-to-entropy density ratio $\eta/s \ll 1$. The matter created at RHIC has thus been called the strongly interacting Quark-Gluon Plasma (sQGP) [@Gyulassy:2004zy].
The origin of the jet-quenching phenomenon can be understood as follows [@Majumder:2010qh]: during the early pre-equilibrium stage of the relativistic heavy-ion collision, the parton scatterings that form the deconfined quark-gluon matter often involve large momentum transfers, which lead to the formation of two back-to-back hard partons. These traverse the dense medium, lose energy, and finally fragment into hadrons which are observed by the experiments. Within the framework of perturbative QCD, the process contributing the largest energy loss for a fast parton is gluon radiation induced by collisions with the quasi-thermal medium. Elastic collisions add to the energy loss and are thought to be the dominant process for heavy quarks traversing the deconfined medium.
Even though the phenomenon is being referred to as “jet-quenching”, the overwhelming majority of computations of this effect have focused on the leading particle of the jet and do not take the evolution of the radiated quanta into account. A variety of schemes for quantitative calculations of the radiative energy loss have been developed [@Zakharov:1996fv; @Baier:1996kr; @Gyulassy:1999zd; @Wiedemann:2000ez; @Guo:2000nz; @Arnold:2001ba]. However, quantitative comparisons of theoretical model predictions incorporating a realistic hydrodynamic medium evolution with data from relativistic heavy ion collisions at RHIC [@Nonaka:2006yn] have revealed significant remaining ambiguities in the value of the extracted transport coefficients of the sQGP [@Bass:2008rv]. Different leading particle energy loss schemes may thus imply different values for these transport coefficients [@Majumder:2010qh]. A systematic comparison of the assumptions underlying the various leading particle energy loss schemes and of their numerical implementations is currently under way [@Horowitz:2009eb; @Majumder:2010qh].
Recently, with the advent of sophisticated experimental techniques for the reconstruction of full jets emitted from an ultra-relativistic heavy-ion collision [@Bruna:2009em; @Lai:2009zq], attention has shifted from leading particle energy-loss to the evolution of medium-modified jets. The study of the evolution of the entire jet in the medium is expected to lead to a better understanding of the dynamics of energy-deposition into the medium and of the subsequent medium response, e.g. the possible formation of Mach-cones etc. [@CasalderreySolana:2004qm; @Neufeld:2009ep; @Qin:2009uh]. The current state of the art for medium-modified jets consists of Monte-Carlo generators [@Lokhtin:2005px; @Zapp:2009ud; @Armesto:2009fj; @Auvinen:2009qm], which calculate the medium-modified jet but do not take the medium response into account. The consistent treatment of the jet in medium as well as the medium response requires the application of transport theory, e.g. via a Boltzmann-equation-based calculation, such as is done in the Parton Cascade Model (PCM) [@Geiger:1991nj]. Parton Cascades have already been applied to the time-evolution of ultra-relativistic heavy-ion collisions. However, for the most part the PCM calculations have focused on reaction dynamics [@molnar:2000jh; @bass:2002fh], thermalization [@Xu:2004mz], electromagnetic probes [@bass:2002pm; @renk:2005yg] and bulk properties of the medium [@molnar:2004yh], as well as to a far lesser extent on leading particle energy-loss [@Fochler:2008ts; @Fochler:2010wn]. We would also like to note the recent progress in combining a Boltzmann equation based particle transport approach with pQCD cross sections for the scattering of high momentum particles with soft particle interactions mediated by a Yang-Mills field [@Schenke:2008gg].
It is our long-term goal to advance the application of the PCM to the description of medium-modified jets and of the medium's response to a jet propagating through it and depositing energy in it. The achievement of this goal requires the validation of the PCM against analytic test cases which can be reliably calculated for a simplified medium, e.g. a pure gluon plasma or a quark-gluon plasma in thermal and chemical equilibrium. In the present manuscript we take a first step in this direction. We calculate elastic energy loss in an infinite, homogeneous medium at fixed temperature within the PCM approach and compare our results to analytic calculations of the same quantity. In addition, we calculate the rate of momentum broadening of a hard parton propagating through the medium and compare the results of our analysis to analytic expressions for the transport coefficient $\hat q$.
Quark Gluon Plasma and Parton Cascade Simulations
=================================================
The medium in our calculations is an ideal Quark-Gluon Plasma, i.e., a gas of $u, d$ and $s$ quarks and anti-quarks as well as gluons at fixed temperature $T$ in full thermal and chemical equilibrium. In addition, we also conduct studies for a one-component gluon plasma in thermal and chemical equilibrium. For our transport calculation we define a box with periodic boundary conditions (to simulate infinite matter) and sample thermal quark and gluon distribution functions to generate an ensemble of particles at a given temperature and zero chemical potential. We then insert a hard probe, i.e. a high momentum parton, into the box and track its evolution through the medium. The medium particles may interact with the probe as well as with each other according to a Boltzmann Transport equation:
$$p^\mu \frac{\partial}{\partial x^\mu} F_i(x,\vec p) = {\cal C}_i[F]
\label{eq03}$$
where the collision term ${\cal C}_i$ is a nonlinear functional of the phase-space distribution function. Although the collision term, in principle, includes factors encoding the Bose-Einstein or Fermi-Dirac statistics of the partons, we neglect those effects here.
The collision integrals have the form: $$\label{ceq1}
{\cal C}_i[F] = \frac{(2 \pi)^4}{2 S_i E_i} \cdot
\int \prod\limits_j {\rm d}\Gamma_j \, | {\cal M} |^2
\, \delta^4(P_{\rm in} - P_{\rm out}) \,
D(F_k(x, \vec p))$$ with $$D(F_k(x,\vec p)) \,=\,
\prod\limits_{\rm out} F_k(x,\vec p) \, - \,
\prod\limits_{\rm in} F_k(x,\vec p) \quad$$ and $$\prod\limits_j {\rm d}\Gamma_j = \prod\limits_{{j \ne i} \atop {\rm in,out}}
\frac{{\rm d}^3 p_j}{(2\pi)^3\,(2p^0_j)}
\quad.$$ $S_i$ is a statistical factor defined as $S_i \,=\, \prod\limits_{j \ne i} K_a^{\rm in}!\, K_a^{\rm out}!$, with $K_a^{\rm in,out}$ being the number of identical partons of species $a$ in the initial or final state of the process, excluding the $i$th parton.
The matrix elements $| {\cal M} |^2$ account for the following processes: $$\label{processes}
\begin{array}{lll}
\,g g \to g g \quad&\quad q q \to q q \quad&\quad q g \to q g \\
\,q q' \to q q' \quad& \quad q \bar q \to q \bar q \quad& \quad \bar q g \to \bar q g\\
(g g \to q \bar q) \quad& \,\,\,\,(q \bar q \to g g) \quad& \,\,\,\,(q \bar q \to q' \bar q') \\
\end{array}$$ with $q$ and $q'$ denoting different quark flavors. The flavor-changing processes in parentheses are optional and can be disabled to study the effect of jet flavor conversion. The gluon radiation processes, e.g. $g g \rightarrow g g g$, are not included in this study, but will be addressed in a forthcoming publication. The amplitudes for the above processes have been calculated in refs. [@Cutler:1977qm; @Combridge:1977dm] for massless quarks. The corresponding scattering cross sections are expressed in terms of spin- and colour-averaged amplitudes $|{\cal M}|^2$: $$\label{dsigmadt}
\left( \frac{{\rm d}\hat \sigma}
{{\rm d} Q^2}\right)_{ab\to cd} \,=\, \frac{1}{16 \pi \hat s^2}
\,\langle |{\cal M}|^2 \rangle$$ For the transport calculation we also need the total cross section as a function of $\hat s$ which can be obtained from (\[dsigmadt\]): $$\label{sigmatot}
\hat \sigma_{ab}(\hat s) \,=\,
\sum\limits_{c,d} \, \int\limits_{(p_T^{\rm min})^2}^{\hat s}
\left( \frac{{\rm d}\hat \sigma }{{\rm d} Q^2}
\right)_{ab\to cd} {\rm d} Q^2 \quad .$$
Since our medium is in full thermal and chemical equilibrium, we can use the effective thermal mass of a gluon and a quark in the system to regularize the cross sections [@biro:1993qt]: $$\begin{aligned}
\mu_D^2 &=& \pi \alpha_s d_p \int {{d^3 p}\over{(2\pi)^3}}
{{C_2}\over{|\vec p|}} f_p(\vec p ; T),\end{aligned}$$ where $d_p$ is the degeneracy factor of parton species $p$ and $C_2$ is $N_c$ for gluons and $(N_c^2-1)/(2N_c)$ for quarks. Inserting the thermal distribution yields a Debye mass of $\mu_D = g T$ for a thermal gluon system at temperature $T$ and of $\mu_D = \sqrt{ (2 N_c + N_f)/6}\, g T$ for quarks and gluons in a quark-gluon plasma. For example, the dominant elastic cross sections thus are: $$\begin{aligned}
\label{diffxs}
{{d\sigma^{gg\rightarrow gg}}\over{dq_\perp^2 }} &=&
{2\pi\alpha_s^2} {9 \over 4} {1 \over {(q_\perp^2 + \mu_D^2 )^2}},
\label{diff_cs_gg_gg} \\
{{d\sigma^{gq\rightarrow gq}}\over{dq_\perp^2 }} &=&
{2\pi\alpha_s^2} {1 \over {(q_\perp^2 + \mu_D^2 )^2}},
\label{diff_cs_gq_gq} \\
{{d\sigma^{qq\rightarrow qq}}\over{dq_\perp^2 }} &=&
{2\pi\alpha_s^2} {4 \over 9} {1 \over {(q_\perp^2 + \mu_D^2 )^2}},
\label{diff_cs_qq_qq}\end{aligned}$$ where only the leading-order Feynman diagrams have been included.
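As a numerical illustration of these formulas (our own sketch, assuming $\alpha_s=0.3$ as adopted below, $N_c=3$, $N_f=3$; the probe values of $T$ and $q_\perp^2$ are arbitrary), one can verify the Debye masses and the fixed $9/4 : 1 : 4/9$ Casimir ratios of the three channels:

```python
import math

def debye_mass2(T, alpha_s=0.3, Nc=3, Nf=0):
    """Squared Debye mass: g^2 T^2 for a gluon plasma (Nf = 0),
    (2*Nc + Nf)/6 * g^2 T^2 for a quark-gluon plasma."""
    g2 = 4.0 * math.pi * alpha_s          # g^2 = 4 pi alpha_s
    return (2.0 * Nc + Nf) / 6.0 * g2 * T**2

def dsigma(q2, mu2, color_factor, alpha_s=0.3):
    """Common screened form 2*pi*alpha_s^2 * C / (q2 + mu2)^2,
    with C = 9/4 (gg), 1 (gq), 4/9 (qq)."""
    return 2.0 * math.pi * alpha_s**2 * color_factor / (q2 + mu2) ** 2

mu2_gp = debye_mass2(0.4)            # pure-glue Debye mass squared, GeV^2
mu2_qgp = debye_mass2(0.4, Nf=3)     # three-flavor QGP: larger screening mass

# The channels differ only by the Casimir ratio 9/4 at fixed q2:
r1 = dsigma(1.0, mu2_gp, 9.0 / 4.0) / dsigma(1.0, mu2_gp, 1.0)
r2 = dsigma(1.0, mu2_gp, 1.0) / dsigma(1.0, mu2_gp, 4.0 / 9.0)
```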
For our studies we use two distinct implementations of the Parton Cascade Model, the Andong parton cascade code [@shin:2002fg] and the VNI/BMS code [@bass:2002fh]. Both codes have been modified to contain the same cross sections, so that we can verify the outcome of our calculations through cross-comparisons between the two PCM implementations. For the sake of simplicity we keep the coupling constant fixed at a value of $\alpha_s = 0.3$.
To obtain the temporal evolution of a high-momentum parton propagating through a gluon or a quark-gluon plasma, a high-energy gluon or quark of initial energy $E_0$ is injected into the thermal (quark-)gluon-plasma box at the center of its xy-plane with ${\vec p}=(0,0,p_z=E_0)$. Following each individual scattering event involving our hard probe, we record the energies and momenta of the outgoing partons. The parton with the larger momentum in the final state is considered to be our continuing probe particle, due to the dominance of small-angle scattering in the implemented cross sections. A simple estimate of the number of interactions the probe will undergo per unit time or length can be obtained via its mean free path in the medium: $$\begin{aligned}
\lambda_s &=& {{1}\over {\sigma_T \rho}},\end{aligned}$$ where $\sigma_T$ is the total thermal cross section and $\rho$ the density of the medium. A gluon’s mean free path in a gluon plasma is $1.1$ fm at $T=300$ MeV and $0.76$ fm at $T=400$ MeV, and in the quark-gluon plasma $0.83$ fm and $0.59$ fm, respectively. The quark mean free paths are about $9/4$ times longer than those of gluons at the same temperature.
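These numbers can be reproduced at the back-of-the-envelope level (our own sketch, not the full calculation): approximating $\sigma_T$ by the $q^2_{\rm max}\to\infty$ limit of the screened $gg$ cross section and using the ideal Boltzmann-gas gluon density $\rho_g = 16\,T^3/\pi^2$ yields the correct $\sim 1$ fm scale, though not the exact values quoted above, which require the full $\hat s$-dependent cross sections:

```python
import math

HBARC = 0.19733  # GeV*fm

def gluon_density(T):
    """Ideal Boltzmann-gas gluon density rho_g = 16 T^3 / pi^2, in fm^-3 (T in GeV)."""
    return 16.0 * (T / HBARC) ** 3 / math.pi**2

def sigma_gg_screened(T, alpha_s=0.3):
    """Total gg cross section in the q_max^2 -> infinity limit of the screened
    form: sigma = (9/2) pi alpha_s^2 / mu_D^2, converted to fm^2."""
    mu2 = 4.0 * math.pi * alpha_s * T**2      # mu_D^2 = g^2 T^2, in GeV^2
    return (9.0 / 2.0) * math.pi * alpha_s**2 / mu2 * HBARC**2

def mean_free_path(T, alpha_s=0.3):
    """lambda_s = 1 / (sigma_T * rho), in fm."""
    return 1.0 / (sigma_gg_screened(T, alpha_s) * gluon_density(T))

lam_300 = mean_free_path(0.300)   # ~1.2 fm: same scale as the 1.1 fm quoted above
lam_400 = mean_free_path(0.400)
```

In this crude approximation $\sigma_T\propto T^{-2}$ and $\rho\propto T^3$, so $\lambda_s\propto 1/T$.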
Results and Discussion
======================
![The Monte Carlo sampling of the transverse momentum transfer at each scattering. Only the jet is allowed to collide with medium particles.[]{data-label="fig1"}](fig1.eps){width=".95\textwidth"}
In our work, we shall focus solely on the hard parton propagating through the medium and analyze its energy loss and momentum broadening as a function of time and distance traveled. We would like to point out that our calculation includes the full information on the medium response as well; however, a detailed analysis of its characteristics is outside the scope of the present work and will be discussed in a forthcoming publication.
Let us start by comparing the distribution of transverse momentum transfers experienced by gluons with $E_0= 100$ GeV and $E_0=500$ GeV, respectively, while propagating through a gluon plasma at $T=300$ MeV to the analytic expression given by the differential cross section (\[diffxs\]).
Figure \[fig1\] shows that we find excellent agreement between our transport calculation and the analytic expression for $E_0= 500$ GeV. At the lower incident probe energy, $E_0=100$ GeV, we note a suppression in the high-momentum-transfer tail of the distribution, which we attribute to the effects of phase space and energy conservation: the gluon loses a substantial amount of energy while traversing a medium of 50 fm depth and thus for most of the time does not carry sufficient momentum to scatter with high momentum transfers, whereas the analytic formula assumes throughout that the CM energy of the colliding particles is much larger than the Debye mass.
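The Monte-Carlo sampling underlying this comparison can be sketched with a simple inverse-CDF draw from the screened distribution $d\sigma/dq_\perp^2 \propto (q_\perp^2+\mu_D^2)^{-2}$ (our own illustration; the values of $\mu_D^2$ and $q^2_{\rm max}$ below are arbitrary placeholders, not those of Fig. \[fig1\]):

```python
import random

def sample_q2(mu2, qmax2, rng):
    """Inverse-CDF draw of q_perp^2 from dsigma/dq2 ~ 1/(q2 + mu2)^2 on [0, qmax2]."""
    # CDF: F(q2) = (1/mu2 - 1/(q2+mu2)) / (1/mu2 - 1/(qmax2+mu2)); invert for q2.
    norm = 1.0 / mu2 - 1.0 / (qmax2 + mu2)
    F = rng.random()
    return 1.0 / (1.0 / mu2 - F * norm) - mu2

rng = random.Random(12345)       # fixed seed for reproducibility
mu2, qmax2 = 0.6, 25.0           # GeV^2, illustrative placeholder values
samples = [sample_q2(mu2, qmax2, rng) for _ in range(100000)]
```

A histogram of `samples` then plays the role of the symbols in Fig. \[fig1\], to be compared against the analytic curve.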
![Left: the gluon jet energy as a function of traveling time in the medium at $T=400$ MeV; the traveling distance is proportional to the time. GP denotes a gluon plasma and QGP a quark-gluon plasma; the simulation does not include the $gg \rightarrow ggg$ process. Right: the quark jet energy as a function of time; the penetration length is proportional to the time. Plasma temperature $T=400$ MeV.[]{data-label="fig2"}](fig2_1.eps "fig:"){width="49.00000%"}![Left: the gluon jet energy as a function of traveling time in the medium at $T=400$ MeV; the traveling distance is proportional to the time. GP denotes a gluon plasma and QGP a quark-gluon plasma; the simulation does not include the $gg \rightarrow ggg$ process. Right: the quark jet energy as a function of time; the penetration length is proportional to the time. Plasma temperature $T=400$ MeV.[]{data-label="fig2"}](fig2_2.eps "fig:"){width="49.00000%"}
We now focus on the elastic energy loss of a high-momentum parton in the medium. The left frame of Fig. \[fig2\] shows the energy as a function of time for a gluon with initial energy of 100 GeV (or 50 GeV, respectively), propagating through a quark-gluon plasma (QGP) or gluon plasma (GP) at a temperature of $T=400$ MeV. The right frame repeats the calculation for a light quark instead of a gluon. The calculation, which includes only elastic processes (i.e. the flavor-exchange reaction channels have been disabled in order to unambiguously study the flavor dependence), clearly shows the anticipated linear decrease of the jet energy with time. One should note that analytic calculations usually study the jet energy as a function of distance traveled; in our case we substitute time for distance in order to have a quantity that is not affected by the periodic boundary conditions of the system in our calculations. We have verified the linear relationship between distance traveled and elapsed time with a slope near unity.
The two frames clearly show the difference between a GP and a QGP in terms of energy-loss at the same temperature. This difference is due to the significantly higher parton density in a QGP compared with a GP for the same temperature, resulting in a larger number of scattering partners for the hard probe to interact with. The difference is less pronounced for the quark probe than the gluon probe, due to the difference in their interaction cross sections. We also observe that the linear decrease of energy as a function of time tapers off in the long time limit as the probe energy approaches the thermal regime. While at the probe energies studied here (between 50 GeV and 400 GeV) this occurs at times not relevant in the context of an ultra-relativistic heavy-ion collision, for smaller probe energies frequently seen at RHIC, this may be a significant effect.
![Left: gluon energy as a function of time for different initial values of the energy in a gluon plasma. Right: the same for a fixed initial energy and various medium temperatures. The solid lines represent an analytical calculation (see text for details).[]{data-label="fig3"}](dEdt_fit.eps "fig:"){width="48.00000%"}![Left: gluon energy as a function of time for different initial values of the energy in a gluon plasma. Right: the same for a fixed initial energy and various medium temperatures. The solid lines represent an analytical calculation (see text for details).[]{data-label="fig3"}](dEdt_fit2.eps "fig:"){width="48.00000%"}
Fig. \[fig3\] shows the energy and temperature dependence of the elastic energy loss of a hard gluon in a gluon plasma. The calculations (symbols) are compared to an analytical calculation [@bjorken:1982tu; @thoma:1992kq]: $$- {{dE}\over{dt}} = \int {{d^3 k}\over{(2\pi)^3}} F_g(\vec k;T)
\int dq_\perp^2 (1-\cos\theta) \nu {{d\sigma}\over{dq_\perp^2}}
\label{e_loss_1}$$ where $\nu = E-E'$ is the energy difference before and after the collision. Utilizing the characteristics of our medium and the cross sections in our calculation, this equation can be discretized to a form which lends itself to a comparison with our calculation: $$E_p(z) = E_p(0) - z \frac{\alpha_s C_2 \mu_D^2}{2} \ln\left[
\frac{\sqrt{E_p(z) T}}{\mu_D} \right]
\label{dedx}$$ Using an iterative procedure we can calculate $E_p(z)$ for the initial gluon energies and temperatures used in Fig. \[fig3\] and compare them to our calculation. The agreement between the analytical calculation and the PCM is remarkable and serves as a validation of the PCM framework. The validation of the PCM calculation in a well-controlled infinite-matter setting in full equilibrium at fixed temperature is of significant importance, since the PCM can easily be used to study realistic dynamical systems far off equilibrium, e.g. an ultra-relativistic heavy-ion collision, which can only be poorly described in the framework of (semi-)analytic calculations.
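The iterative procedure mentioned above can be implemented as a simple fixed-point iteration (our own sketch; we take $C_2 = N_c = 3$ for a gluon probe and insert $\hbar c$ to convert the path length from fm to GeV$^{-1}$):

```python
import math

HBARC = 0.19733  # GeV*fm

def energy_at_z(E0, z_fm, T, alpha_s=0.3, C2=3.0, tol=1e-10):
    """Fixed-point solution of the implicit relation
    E(z) = E0 - z * (alpha_s*C2*mu_D^2/2) * ln(sqrt(E(z)*T)/mu_D).
    E0 and T in GeV, z_fm in fm; C2 = N_c = 3 for a gluon probe."""
    mu_D2 = 4.0 * math.pi * alpha_s * T**2
    mu_D = math.sqrt(mu_D2)
    z = z_fm / HBARC                      # path length converted to GeV^-1
    E = E0
    for _ in range(10000):
        E_new = E0 - z * 0.5 * alpha_s * C2 * mu_D2 * math.log(math.sqrt(E * T) / mu_D)
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    raise RuntimeError("fixed-point iteration did not converge")

E_5fm = energy_at_z(100.0, 5.0, 0.4)   # elastic loss of a 100 GeV gluon over 5 fm
```

The iteration converges quickly because the map is strongly contracting for probe energies well above the thermal scale.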
![Left: Energy as a function of distance for a gluon propagating through a GP and a QGP at the same temperature and at equivalent entropy-densities. A scaling of the energy-loss with the entropy-density is observed. Right: distribution of probe energies after traversing the medium for 5, 10, 50, 70 and 100 fm at a temperature of 300 MeV.[]{data-label="fig4"}](dEdz_flavor3.eps "fig:"){width=".48\textwidth"}![Left: Energy as a function of distance for a gluon propagating through a GP and a QGP at the same temperature and at equivalent entropy-densities. A scaling of the energy-loss with the entropy-density is observed. Right: distribution of probe energies after traversing the medium for 5, 10, 50, 70 and 100 fm at a temperature of 300 MeV.[]{data-label="fig4"}](fig4.eps "fig:"){width="49.00000%"}
In Fig. \[fig2\] we observe a difference in the energy loss a parton suffers when propagating through a GP or a QGP. We understand this difference to be due to the different overall particle densities associated with a GP and a QGP at the same temperature. In order to validate this point, we initialized a GP and a QGP at an identical entropy density of $s = 87.5$ fm$^{-3}$, corresponding to a temperature of $T=457$ MeV for a GP and $T=318$ MeV for a QGP. The results of this calculation (also in comparison to a GP and a QGP at a temperature of $T=400$ MeV) are shown in Fig. \[fig4\]: for the same entropy density, both the GP and the QGP inflict a nearly identical amount of elastic energy loss on the hard probe. This scaling suggests that the entropy density, rather than the temperature, may be a robust quantity for the characterization of a thermal QCD medium by energy-loss measurements. We note that it is by no means clear whether the deconfined medium in the early stages of a heavy-ion reaction more closely resembles a GP (or a liberated “glasma” [@Lappi:2006fp]) or a QGP in full chemical equilibrium [@biro:1993qt]. Most likely the chemical composition of the deconfined medium created in an ultra-relativistic heavy-ion collision changes significantly as a function of time, whereas its entropy density (after the initial equilibration) does not vary as much.
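The quoted temperatures can be cross-checked under the assumption (ours, for this consistency sketch) of the standard entropy density of a massless ideal gas, $s = \tfrac{2\pi^2}{45}\left(g_B + \tfrac{7}{8}\, g_F\right) T^3$, which reproduces the quoted values to within a few MeV:

```python
import math

HBARC = 0.19733  # GeV*fm

def temperature_for_entropy(s_fm3, g_boson, g_fermion=0.0):
    """Invert s = (2 pi^2 / 45) * (g_B + 7/8 g_F) * T^3 for T, returned in MeV."""
    g_eff = g_boson + 0.875 * g_fermion
    T_fm = (45.0 * s_fm3 / (2.0 * math.pi**2 * g_eff)) ** (1.0 / 3.0)  # fm^-1
    return T_fm * HBARC * 1000.0

T_gp  = temperature_for_entropy(87.5, g_boson=16)                # ~457 MeV
T_qgp = temperature_for_entropy(87.5, g_boson=16, g_fermion=36)  # ~318 MeV
```

Here 16 counts the gluon degrees of freedom and 36 the $u,d,s$ quarks and anti-quarks (2 spin $\times$ 3 color $\times$ 6).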
The right frame of Fig. \[fig4\] shows the distribution of probe energies for a gluon with initial energy of 50 GeV passing through a gluon plasma at temperature $T=300$ MeV after 5, 10, 50, 70 and 100 fm, respectively. The gluon distribution at $t=0$ is a $\delta$-function at $E=E_0$. The figure shows several interesting features. We find that elastic energy loss acts as a diffusion process in momentum space, with the width of the distribution being a function of the traveling distance. Likewise the peak of the distribution shifts as a function of traveling distance. Even after traveling significant distances, about an order of magnitude larger than possible in a relativistic heavy-ion collision, the gluon has not thermalized with the medium, in which a medium particle has an average energy of about 0.9 GeV at $T=300$ MeV.
Another important quantity for the characterization of the hot and dense QCD medium is the transport coefficient $\hat q$, which is defined as [@majumder:2007zh]: $$\begin{aligned}
\hat{q}_R &=& \rho(T) \int dq_\perp^2 q^2 {{d\sigma}\over{dq_\perp^2}}
\label{q_hat}\end{aligned}$$ with the squared momentum transfer $q^2 = -t$. Generally, $\hat q$ can be interpreted as the amount of squared transverse momentum per unit length a probe accumulates as it propagates through the QCD medium. This interpretation lends itself to a definition of $\hat q$ suitable for extraction from microscopic transport calculations: $$\hat q \,=\, \frac{1}{l_z} \sum \limits_{i=1}^{N_{coll}} \left(\Delta p_{\perp,i}\right)^2$$ One should note that $\hat q$ in general should depend on the distance $l_z$ the probe has traveled through the medium, since the probe loses energy in the process, and thus the average momentum transfers in individual interactions of the probe with the medium may vary as a function of that distance.
![Sum of squared transverse momentum transfers per unit length as a function of distance traveled for a 50 GeV gluon in a gluon plasma at different temperatures. This quantity corresponds to the transport coefficient $\hat q$.[]{data-label="fig5"}](qhat.eps){width="90.00000%"}
Figure \[fig5\] shows $\hat q$ for a gluon plasma as a function of distance traveled through the medium for several temperatures. The initial values of $\hat q$ depend strongly on the temperature, ranging from approximately 1.5 GeV$^2$/fm at $T=300$ MeV up to about 9 GeV$^2$/fm at $T=600$ MeV. For $T=400$ MeV we find good agreement with recent calculations by [@Schenke:2008gg], using a Boltzmann transport for hard binary collisions combined with soft interactions mediated by a collective Yang-Mills field, and by [@Fochler:2010wn], using the BAMPS parton cascade model in its binary-scattering mode. At leading-log order and in the weak coupling limit, using the same matrix element as in our PCM calculation, the following analytic expression has been derived for $\hat q$ [@Arnold:2009ik]: $$\hat q (\Lambda) \, \approx \, \alpha_s\, T\, m_D^2 \ln\left(\frac{T^2}{m_D^2}\right) + 4 \pi \, \alpha_s^2 \, {\cal{N}}
\ln\left(\frac{\Lambda^2}{T^2}\right)
\label{qhat}$$ if the cut-off $\Lambda \geq T$. Comparing expression (\[qhat\]) to expression (\[dedx\]) and following the derivation of (\[qhat\]) we find that the cut-off $\Lambda^2 \sim E_p T$. Replacing $\Lambda^2$ by $a E_p(z) T$ (with $a$ being a proportionality constant to be determined) in expression (\[qhat\]) provides a good fit to our results for a choice of $a=0.1$.
Conclusions
===========
We have studied the elastic energy loss of a high-energy parton in an infinite, homogeneous, thermal medium within the PCM approach and have compared our results to analytic calculations of the same quantity. In addition, we have calculated the rate of momentum broadening of a hard parton propagating through the medium and have compared the results of our analysis to an analytic expression for the transport coefficient $\hat q$. We find good agreement between the PCM calculations and the analytic expressions (within the approximations used in both cases), giving us significant confidence that our transport approach provides a reliable description of a gas of quarks and gluons at temperatures above $\approx 2 T_C$ in the weak coupling limit. We expect that the results presented in this manuscript can be used as a benchmark by other parton-based microscopic transport calculations. The validation of the PCM against the analytic test cases presented in this manuscript now allows us to advance the application of the PCM to the description of medium-modified jets in relativistic heavy-ion collisions and the response of the medium to a hard parton propagating through it.
Acknowledgements
================
Ghi R. Shin was supported by a grant from Andong National University (2008) and thanks the members of the Physics Department at Duke University for their warm hospitality during his sabbatical visit. S.A.B. and B.M. were supported by the DOE under grant DE-FG02-05ER41367. The calculations were performed using resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy’s Office of Science.
References {#refences .unnumbered}
========
---
abstract: 'This paper deals with a Boltzmann-type kinetic model describing the interplay between vehicle dynamics and safety aspects in vehicular traffic. Sticking to the idea that the macroscopic characteristics of traffic flow, including the distribution of the driving risk along a road, are ultimately generated by one-to-one interactions among drivers, the model links the personal (i.e., individual) risk to the changes of speeds of single vehicles and implements a probabilistic description of such microscopic interactions in a Boltzmann-type collisional operator. By means of suitable statistical moments of the kinetic distribution function, it is finally possible to recover macroscopic relationships between the average risk and the road congestion, which show an interesting and reasonable correlation with the well-known free and congested phases of the flow of vehicles.'
address:
- 'Department of Engineering, Information Sciences, and Mathematics, University of L’Aquila, Via Vetoio (Coppito 1), 67100 Coppito AQ, Italy'
- 'Department of Mathematical Sciences “G. L. Lagrange”, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy'
author:
- Paolo Freguglia
- Andrea Tosin
bibliography:
- 'FpTa-safety\_traffic.bib'
title: 'Proposal of a risk model for vehicular traffic: A Boltzmann-type kinetic approach'
---
Introduction
============
Road safety is a major issue in modern societies, especially in view of the constantly increasing motorisation levels across several EU and non-EU countries. Although recent studies suggest that this fact is actually correlated with a general decreasing trend of fatality rates, see e.g., [@DaCoTA2012; @yannis2011TRB], the problem of assessing quantitatively the risk in vehicular traffic, and of envisaging suitable countermeasures, remains of paramount importance.
So far, road safety has been studied mainly by means of statistical models aimed at fitting the probability distribution of the fatality rates over time [@oppe1989AAP] or at forecasting road accidents using time series [@abdel-aty2000AAP; @miaou1993AAP]. Efforts have also been made towards the construction of safety indicators, which should allow one to classify the safety performances of different roads and to compare, on such a basis, different countries [@hermans2009AAP; @hermans2008AAP]. However, there is in general no agreement on which procedure, among several possible ones, is the most suited to construct a reliable indicator and, as a matter of fact, the position of a given country in the ranking turns out to be very sensitive to the indicator used. Despite this, a synthetic analysis is ultimately necessary: a mere comparison of the crash data of different countries may be misleading, therefore a more abstract and comprehensive concept of *risk* has to be formulated [@shen2012AAP].
A recent report on road safety in New Zealand [@KiwiRAP2012] introduces the following definitions of two types of risk:
Collective risk
: is a measure of the total number of fatal and serious injury crashes per kilometre over a section of road, cf. [@KiwiRAP2012 p. 13];
Personal risk
: is a measure of the danger to each individual using the state highway being assessed, cf. [@KiwiRAP2012 p. 14].
While substantially qualitative and empirical, these definitions raise nevertheless an important conceptual point, namely the fact that the risk is intrinsically *multiscale*. Each driver (*microscopic* scale) bears a certain personal level of danger, namely of potential risk, which, combined with the levels of danger of all other drivers, forms an emergent risk for the indistinct ensemble of road users (*macroscopic* scale). Hence the large-scale tangible manifestations of the road risk originate from small-scale, often unobservable, causes. Such an argument is further supported by some psychological theories of risk perception, among which probably the most popular one in the context of vehicular traffic is the so-called *risk homeostasis theory*. According to this theory, each driver possesses a certain target level of personal risk, which s/he feels comfortable with; s/he then continually compares their perceived risk with such a target level, adjusting their behaviour so as to reduce the gap between the two [@wilde1998IP]. Actually, the risk homeostasis theory is not widely accepted, some studies rejecting it on the basis of experimental evidence, see e.g., [@evans1986RA]. The main criticism is, in essence, that the aforesaid risk regulatory mechanism of the drivers (acting similarly to the thermal homeostatic system in warm-blooded animals, whence the name of the theory) is too elementary compared to the much richer variety of possible responses, to such an extent that some paradoxical consequences are produced. For instance, the number of traffic accidents per unit time would tend to be constant independently of possible safety countermeasures, because so tends to be the personal risk per unit time. Whether one accepts this theory or not, there is a common agreement on the fact that the background of all observable manifestations of the road risk is the individual behaviour of the drivers.
In this respect, conceiving a mathematical model able to explore the link between small and large scale effects acquires both a theoretical and a practical interest. In fact, if on the one hand data collection is a useful practice in order to grasp the essential trends of the considered phenomenon, on the other hand the interpretation of the data themselves, with possibly the goal of making simulations and predictions, cannot rely simply on empirical observation.
The mathematical literature offers nowadays a large variety of traffic models at all observation and representation scales, from the microscopic and kinetic to the macroscopic one, see e.g., [@piccoli2009ENCYCLOPEDIA] and references therein for a critical survey. Nevertheless, there is a substantial lack of models dedicated to the joint simulation of traffic flow and safety issues. In [@moutari2013IMAJAM] the authors propose a model, which is investigated analytically in [@herty2011ZAMM] and then further improved in [@moutari2014CMS], for the simulation of car accidents. The model is a macroscopic one based on the coupling of two second order traffic models, which are instantaneously defined on two disjoint adjacent portions of the road and which feature different traffic pressure laws accounting for more and less careful drivers. Car collisions are understood as the intersection of the trajectories of two vehicles driven by either type of driver. In particular, analytical conditions are provided, under which a collision occurs depending on the initial space and speed distributions of the vehicles.
In this paper, instead of modelling physical collisions among cars, we are more interested in recovering the point of view based on the concept of risk discussed at the beginning. Sticking to the idea that observable traffic trends are ultimately determined by the individual behaviour of drivers, we adopt a Boltzmann-type kinetic approach focusing on binary interactions among drivers, which are responsible for both speed changes (through instantaneous acceleration, braking, overtaking) and, consequently, also for changes in the individual levels of risk. In practice, as usual in the kinetic theory approach, we consider the time-evolution of the statistical distribution of the microscopic states of the vehicles, taking into account that such states include also the *personal risk* of the drivers. Then, by extracting suitable mean quantities at equilibrium from the kinetic distribution function, we obtain some information about the macroscopic traffic trends, including the *average risk* and the *probability of accident* as functions of the road congestion. To some extent, these can be regarded as measures of a *potential collective risk* useful to both road users and traffic governance authorities.
In more detail, the paper is organised as follows. In Section \[sec:model\] we present the Boltzmann-type kinetic model and specialise it to the case of a *quantised* space of microscopic states, given that the vehicle speed and the personal risk can be conveniently understood as discrete variables organised in levels. In Section \[sec:interactions\] we detail the modelling of the microscopic interactions among the vehicles, regarding them as stochastic jump processes on the discrete state space. Indeed, the aforementioned variety of human responses suggests that a probabilistic approach is more appropriate at this stage. In Section \[sec:simulations\] we perform a computational analysis of the model, which leads us to define the *risk diagram* of traffic parallelly to the more celebrated fundamental and speed diagrams. By means of such a diagram we display the average risk as a function of the vehicle density and we take inspiration for proposing the definition of a *safety criterion* which discriminates between *safety* and *risk regimes* depending on the local traffic congestion. Interestingly enough, such regimes turn out to be correlated with the well-known phase transition between free and congested flow regimes also reproduced by our model. In Section \[sec:conclusions\] we draw some conclusions and briefly sketch research perspectives regarding the application of ideas similar to those developed in this paper to other systems of interacting particles prone to safety issues, for instance crowds. Finally, in Appendix \[app:basictheo\] we develop a basic well-posedness and asymptotic theory of our kinetic model in measure spaces (Wasserstein spaces), so as to ground the contents of the paper on solid mathematical bases.
Boltzmann-type kinetic model with stochastic interactions {#sec:model}
=========================================================
In this section we introduce a model based on Boltzmann-type kinetic equations, in which short-range interactions among drivers are modelled as stochastic transitions of microscopic state. This allows us to introduce the randomness of the human behaviour in the microscopic dynamics ruling the individual response to local traffic and safety conditions.
In particular, we consider the following (dimensionless) microscopic states of the drivers: the *speed* $v\in [0,\,1]\subset{\mathbb{R}}$ and the *personal risk* $u\in [0,\,1]\subset{\mathbb{R}}$, with $u=0$, $u=1$ standing for the lowest and highest risk, respectively. The kinetic (statistical) representation of the system is given by the *(one particle) distribution function* over the microscopic states, say $f=f(t,\,v,\,u)$, $t\geq 0$ being the time variable, such that $f(t,\,v,\,u)\,dv\,du$ is the fraction of vehicles which at time $t$ travel at a speed comprised between $v$ and $v+dv$ with a personal risk between $u$ and $u+du$. Alternatively, if the distribution function $f$ is thought of as normalised with respect to the total number of vehicles, $f(t,\,v,\,u)\,dv\,du$ can be understood as the probability that a representative vehicle of the system possesses a microscopic state in $[v,\,v+dv]\times [u,\,u+du]$ at time $t$.
\[rem:u\] The numerical values of $u$ introduced above are purely conventional: they are used to mathematise the concept of personal risk but neither refer nor imply actual physical ranges. Hence they serve mostly *interpretative* than strictly quantitative purposes: for instance, the definition and identification of the macroscopic risk and safety regimes of traffic, cf. Section \[sec:riskdiag\]. The quantitative information on these regimes will be linked to a more standard and well-defined physical quantity, such as the vehicle density.
The above statistical representation does not include the space coordinate among the variables which define the microscopic state of the vehicles. This is because we are considering a simplified setting, in which the distribution of the vehicles along the road is supposed to be *homogeneous*. While being a rough physical approximation, this assumption nonetheless allows us to focus on the interaction dynamics among vehicles, which feature mainly speed variations, here further linked to variations of the personal risk. Hence the two microscopic variables introduced above are actually the most relevant ones for constructing a minimal mathematical model which describes traffic dynamics as a result of concurrent mechanical and behavioural effects.
Use of the distribution function for computing observable quantities
--------------------------------------------------------------------
If the distribution function $f$ is known, several statistics over the microscopic states of the vehicles can be computed. Such statistics provide macroscopic observable quantities related to both the traffic conditions and the risk along the road.
We recall in particular some of them, which will be useful in the sequel:
- The *density* of vehicles at time $t$, denoted $\rho=\rho(t)$, which is defined as the zeroth order moment of $f$ with respect to both $v$ and $u$: $$\rho(t):=\int_0^1\int_0^1 f(t,\,v,\,u)\,dv\,du.$$ Throughout the paper we will assume $0\leq\rho\leq 1$, where $\rho=1$ represents the maximum (dimensionless) density of vehicles that can be locally accommodated on the road.
- The *average flux* of vehicles at time $t$, denoted $q=q(t)$, which is defined as the first order moment of the speed distribution, the latter being the marginal of $f$ with respect to $u$. Hence: $$q(t)=\int_0^1 v\left(\int_0^1 f(t,\,v,\,u)\,du\right)\,dv.
\label{eq:ave_flux}$$
- The *mean speed* of vehicles at time $t$, denoted $V=V(t)$, which is defined from the usual relationship between $\rho$ and $q$, i.e., $q=\rho V$, whence $$V(t)=\frac{q(t)}{\rho(t)}.
\label{eq:mean_speed}$$
- The *statistical distribution of the risk*, say $\varphi=\varphi(t,\,u)$, namely the marginal of $f$ with respect to $v$: $$\begin{aligned}
\begin{aligned}[t]
\varphi(t,\,u):=\int_0^1f(t,\,v,\,u)\,dv,
\end{aligned}
\label{eq:risk_distr}\end{aligned}$$ which is such that $\varphi(t,\,u)\,du$ is the number of vehicles which, at time $t$, bear a personal risk between $u$ and $u+du$ (regardless of their speed). Using $\varphi$ we obtain the *average risk*, denoted $U(t)$, along the road at time $t$ as: $$\begin{aligned}
\begin{aligned}[t]
U(t):=\frac{1}{\rho(t)}\int_0^1u\varphi(t,\,u)\,du=\frac{1}{\rho(t)}\int_0^1 u\left(\int_0^1 f(t,\,v,\,u)\,dv\right)\,du.
\end{aligned}
\label{eq:ave_risk}\end{aligned}$$ Notice that $\int_0^1\varphi(t,\,u)\,du=\rho(t)$, which explains the coefficient $\frac{1}{\rho(t)}$ in \[eq:ave\_risk\].
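Once $f$ is sampled on a grid, the macroscopic observables above can be approximated by simple quadrature. The following Python sketch illustrates the formulas; the grid sizes and the Gaussian-shaped $f$ are arbitrary placeholders, not part of the model:

```python
import numpy as np

# Illustrative grid on [0,1]^2 and an arbitrary smooth distribution f(v,u);
# the Gaussian-like shape is a placeholder, not model output.
nv, nu = 64, 64
v = np.linspace(0.0, 1.0, nv)
u = np.linspace(0.0, 1.0, nu)
dv, du = v[1] - v[0], u[1] - u[0]
V_, U_ = np.meshgrid(v, u, indexing="ij")
f = np.exp(-20 * (V_ - 0.6) ** 2 - 20 * (U_ - 0.3) ** 2)

# Rectangle-rule quadrature is enough for a sketch
rho = f.sum() * dv * du                # density (zeroth moment)
q = (V_ * f).sum() * dv * du          # average flux (first v-moment)
V_mean = q / rho                      # mean speed, from q = rho * V
phi = f.sum(axis=0) * dv              # risk distribution phi(t, u)
U_mean = (u * phi).sum() * du / rho   # average risk

# Marginal consistency: integrating phi over u returns the density
assert np.isclose(phi.sum() * du, rho)
```

Any consistent quadrature rule preserves the marginal identity $\int_0^1\varphi\,du=\rho$ exactly, which is a useful sanity check in code.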
Evolution equation for the distribution function
------------------------------------------------
A mathematical model consists in an evolution equation for the distribution function $f$, derived consistently with the principles of the kinetic theory of vehicular traffic.
In our spatially homogeneous setting, the time variation of the number of vehicles with speed $v$ and personal risk $u$ is only due to short-range interactions, which cause acceleration and braking. Since the latter depend ultimately on people’s driving style, they cannot be modelled by appealing straightforwardly to standard mechanical principles. Hence, in the following, speed and risk transitions will be regarded as correlated stochastic processes. This way both the subjectivity of the human behaviour and the interplay between mechanical and behavioural effects will be taken into account.
In formulas we write $$\partial_tf=Q^{+}(f,\,f)-fQ^{-}(f),$$ where:
- $Q^{+}(f,\,f)$ is a bilinear *gain operator*, which counts the average number of interactions per unit time originating new vehicles with post-interaction state $(v,\,u)$;
- $Q^{-}(f)$ is a linear *loss operator*, which counts the average number of interactions per unit time causing vehicles with pre-interaction state $(v,\,u)$ to change either the speed or the personal risk.
Let us introduce the following compact notations: ${\mathbf{w}}:=(v,\,u)$, $I=[0,\,1]\subset{\mathbb{R}}$. Then the expression of the gain operator is as follows (see e.g., [@tosin2009AML]): $$Q^{+}(f,\,f)(t,\,{\mathbf{w}})=\iint_{I^4}\eta({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast){\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)f(t,\,{\mathbf{w}}_\ast)f(t,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast
\label{eq:gain}$$ where ${\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast$ are the pre-interaction states of the two vehicles which interact and:
- $\eta({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)>0$ is the *interaction rate*, i.e., the frequency of interaction between a vehicle with microscopic state ${\mathbf{w}}_\ast$ and one with microscopic state ${\mathbf{w}}^\ast$;
- ${\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)\geq 0$ is the *transition probability distribution*. More precisely, ${\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)\,d{\mathbf{w}}$ is the probability that the vehicle with microscopic state ${\mathbf{w}}_\ast$ switches to a microscopic state contained in the elementary volume $d{\mathbf{w}}$ of $I^2$ centred in ${\mathbf{w}}$ because of an interaction with the vehicle with microscopic state ${\mathbf{w}}^\ast$. Conditioning by the density $\rho$ indicates that, as we will see later (cf. Section \[sec:interactions\]), binary interactions are influenced by the local macroscopic state of the traffic.
For fixed pre-interaction states the following property holds: $$\int_{I^2}{\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)\,d{\mathbf{w}}=1 \qquad \forall\,{\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast\in I^2,\ \forall\,\rho\in [0,\,1].
\label{eq:tabgames}$$
Likewise, the expression of the loss operator is as follows (see again [@tosin2009AML]): $$Q^{-}(f)(t,\,{\mathbf{w}})=\int_{I^2}\eta({\mathbf{w}},\,{\mathbf{w}}^\ast)f(t,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}^\ast.$$ On the whole, the loss term $fQ^{-}(f)$ can be derived from the gain term by assuming that the first vehicle already holds the state ${\mathbf{w}}$ and counting on average all interactions which, in the unit time, can make it switch to any other state. One has: $$f(t,\,{\mathbf{w}})Q^{-}(f)(t,\,{\mathbf{w}})=\iint_{I^4}\eta({\mathbf{w}},\,{\mathbf{w}}^\ast){\mathcal{P}}({\mathbf{w}}\to{\mathbf{w}}'\,\vert\,{\mathbf{w}}^\ast,\,\rho)f(t,\,{\mathbf{w}})f(t,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}'\,d{\mathbf{w}}^\ast,$$ then property \[eq:tabgames\] gives the above expression for $Q^{-}(f)$.
Putting together the terms introduced so far, and assuming, for the sake of simplicity, that $\eta({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)=1$ for all ${\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast\in I^2$, we finally obtain the following integro-differential equation for $f$: $$\partial_tf=\iint_{I^4}{\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)f(t,\,{\mathbf{w}}_\ast)f(t,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast-\rho f.
\label{eq:kinetic_model}$$ Notice that, with constant interaction rates, the loss term is directly proportional to the vehicle density. Moreover, owing to property \[eq:tabgames\], it follows that $$\int_{I^2}\Bigl(Q^{+}(f,\,f)(t,\,{\mathbf{w}})-\rho f(t,\,{\mathbf{w}})\Bigr)\,d{\mathbf{w}}=0,$$ therefore integrating \[eq:kinetic\_model\] with respect to ${\mathbf{w}}$ shows that the density is actually constant in time (conservation of mass).
For the study of basic qualitative properties of \[eq:kinetic\_model\], the reader may refer to Appendix \[app:basictheo\], where we tackle the well-posedness of the associated Cauchy problem in the framework of *measure-valued differential equations* in Wasserstein spaces. Such a theoretical setting, which is more abstract than the one usually considered in the literature for similar equations, see e.g., [@bellomo2015NHM Appendix A] and [@pucci2014DCDSS], is here motivated by the specialisation of the model that we are going to discuss in the next section.
Discrete microscopic states {#sec:discrete}
---------------------------
For practical reasons, it may be convenient to think of the microscopic states $v,\,u$ as *quantised* (i.e., distributed over a set of *discrete*, rather than continuous, values). This is particularly meaningful for the personal risk $u$, which is a non-mechanical quantity naturally meant in levels, but may be reasonable also for the speed $v$, see e.g., [@delitala2007M3AS; @fermo2013SIAP], considering that the cruise speed of a vehicle tends to be mostly piecewise constant in time, with rapid transitions from one speed level to another.
In the state space $I^2=[0,\,1]\times [0,\,1]\subset{\mathbb{R}}^2$ we consider therefore a lattice of microscopic states $\{{\mathbf{w}}_{ij}\}$, with ${\mathbf{w}}_{ij}=(v_i,\,u_j)$ and, say, $i=1,\,\dots,\,n$, $j=1,\,\dots,\,m$. For instance, if the lattice is uniformly spaced we have $$v_i=\frac{i-1}{n-1}, \qquad u_j=\frac{j-1}{m-1},$$ with in particular $v_1=u_1=0$, $v_n=u_m=1$, and $v_i<v_{i+1}$ for all $i=1,\,\dots,\,n-1$, $u_j<u_{j+1}$ for all $j=1,\,\dots,\,m-1$.
Proceeding at first in a formal fashion, over such a lattice we postulate the following form of the kinetic distribution function: $$f(t,\,{\mathbf{w}})=\sum_{i=1}^{n}\sum_{j=1}^{m}f_{ij}(t)\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}}),
\label{eq:discr_f}$$ where $\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}})=\delta_{v_i}(v)\otimes\delta_{u_j}(u)$ is the two-dimensional Dirac delta function, while $f_{ij}(t)$ is the fraction of vehicles which, at time $t$, travel at speed $v_i$ with personal risk $u_j$ (or, depending on the interpretation given to $f$, it is the probability that a representative vehicle of the system possesses the microscopic state ${\mathbf{w}}_{ij}=(v_i,\,u_j)$ at time $t$). In order to specialise \[eq:kinetic\_model\] to the kinetic distribution function \[eq:discr\_f\], we rewrite it in weak form by multiplying by a test function $\phi\in C(I^2)$ and integrating over $I^2$: $$\begin{aligned}
\frac{d}{dt} & \int_{I^2}\phi({\mathbf{w}})f(t,\,{\mathbf{w}})\,d{\mathbf{w}}\\
&= \iint_{I^4}\left(\int_{I^2}\phi({\mathbf{w}}){\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)\,d{\mathbf{w}}\right)
f(t,\,{\mathbf{w}}_\ast)f(t,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast \\
&\phantom{=} -\rho\int_{I^2}\phi({\mathbf{w}})f(t,\,{\mathbf{w}})\,d{\mathbf{w}};\end{aligned}$$ next we read $f(t,\,{\mathbf{w}})\,d{\mathbf{w}}$ as an integration measure, not necessarily regular with respect to the Lebesgue measure, and from \[eq:discr\_f\] we get: $$\begin{aligned}
\sum_{i=1}^{n} & \sum_{j=1}^{m}f'_{ij}(t)\phi({\mathbf{w}}_{ij}) \\
&= \sum_{i_\ast,i^\ast=1}^{n}\sum_{j_\ast,j^\ast=1}^{m}
\left(\int_{I^2}\phi({\mathbf{w}}){\mathcal{P}}({\mathbf{w}}_{i_\ast j_\ast}\to{\mathbf{w}}\,\vert\,{\mathbf{w}}_{i^\ast j^\ast},\,\rho)\,d{\mathbf{w}}\right)
f_{i_\ast j_\ast}(t)f_{i^\ast j^\ast}(t) \\
&\phantom{=} -\sum_{i=1}^{n}\sum_{j=1}^{m}\rho f_{ij}(t)\phi({\mathbf{w}}_{ij}).\end{aligned}$$ In view of the quantisation of the state space, the transition probability distribution must have a structure comparable to \[eq:discr\_f\], i.e., it must be a discrete probability distribution over the post-interaction state ${\mathbf{w}}$. Hence we postulate: $${\mathcal{P}}({\mathbf{w}}_{i_\ast j_\ast}\to{\mathbf{w}}\,\vert\,{\mathbf{w}}_{i^\ast j^\ast},\,\rho)
=\sum_{i=1}^{n}\sum_{j=1}^{m}{\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}}),$$ where ${\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)\in [0,\,1]$ is the probability that the vehicle with microscopic state ${\mathbf{w}}_{i_\ast j_\ast}$ jumps to the microscopic state ${\mathbf{w}}_{ij}$ because of an interaction with the vehicle with microscopic state ${\mathbf{w}}_{i^\ast j^\ast}$, given the local traffic congestion $\rho$. Plugging this into the equation above yields $$\begin{aligned}
\sum_{i=1}^{n} & \sum_{j=1}^{m}f'_{ij}(t)\phi({\mathbf{w}}_{ij}) \\
&= \sum_{i=1}^{n}\sum_{j=1}^{m}\left(\sum_{i_\ast,i^\ast=1}^{n}
\sum_{j_\ast,j^\ast=1}^{m}{\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)f_{i_\ast j_\ast}(t)f_{i^\ast j^\ast}(t)
-\rho f_{ij}(t)\right)\phi({\mathbf{w}}_{ij}),\end{aligned}$$ whence finally, owing to the arbitrariness of $\phi$, we obtain $$f'_{ij}=\sum_{i_\ast,i^\ast=1}^{n}\sum_{j_\ast,j^\ast=1}^{m}{\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)f_{i_\ast j_\ast}f_{i^\ast j^\ast}-\rho f_{ij}.
\label{eq:discr_kinetic_model}$$
\[rem:prop\_discr\_model\] This discrete-state kinetic-type equation has been studied in the literature, see e.g., [@colasuonno2013CM; @delitala2007M3AS]. In particular, it has been proved to admit smooth solutions $t\mapsto f_{ij}(t)$, which are unique and nonnegative for prescribed nonnegative initial data $f_{ij}(0)$ and, in addition, preserve the total mass $\sum_{i=1}^{n}\sum_{j=1}^{m}f_{ij}(t)$ in time.
The arguments above can be made rigorous by appealing to the theory for \[eq:kinetic\_model\] developed in Appendix \[app:basictheo\]. In particular, we can state the following result:
\[theo:cont-discr\_link\] Let the transition probability distribution have the form $${\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)
=\sum_{i=1}^{n}\sum_{j=1}^{m}{\mathcal{P}}^{ij}({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast,\,\rho)\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}}),$$ where the mapping $({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast,\,\rho)\mapsto{\mathcal{P}}^{ij}({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast,\,\rho)$ is Lipschitz continuous for all $i,\,j$, i.e., there exists a constant ${\operatorname{Lip}}({\mathcal{P}}^{ij})>0$ such that $$\begin{gathered}
\abs{{\mathcal{P}}^{ij}({\mathbf{w}}_{\ast 2},\,{\mathbf{w}}^\ast_2,\,\rho_2)-{\mathcal{P}}^{ij}({\mathbf{w}}_{\ast 1},\,{\mathbf{w}}^\ast_1,\,\rho_1)} \\
\leq{\operatorname{Lip}}({\mathcal{P}}^{ij})\Bigl(\abs{{\mathbf{w}}_{\ast 2}-{\mathbf{w}}_{\ast 1}}+\abs{{\mathbf{w}}^\ast_2-{\mathbf{w}}^\ast_1}+\abs{\rho_2-\rho_1}\Bigr)\end{gathered}$$ for all ${\mathbf{w}}_{\ast 1},\,{\mathbf{w}}_{\ast 2},\,{\mathbf{w}}^\ast_1,\,{\mathbf{w}}^\ast_2\in I^2$, $\rho_1,\,\rho_2\in [0,\,1]$.
Let moreover $$f_0({\mathbf{w}})=\sum_{i=1}^{n}\sum_{j=1}^{m}f^0_{ij}\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}})$$ be a prescribed kinetic distribution function at time $t=0$ over the lattice of microscopic states $\{{\mathbf{w}}_{ij}\}\subset I^2$, such that $$f^0_{ij}\geq 0\ \forall\,i,\,j, \qquad \sum_{i=1}^{n}\sum_{j=1}^{m}f^0_{ij}=\rho\in [0,\,1].$$
Set ${\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho):={\mathcal{P}}^{ij}({\mathbf{w}}_{i_\ast j_\ast},\,{\mathbf{w}}_{i^\ast j^\ast},\,\rho)$. Then the corresponding unique solution to \[eq:kinetic\_model\] is \[eq:discr\_f\] with coefficients $f_{ij}(t)$ given by \[eq:discr\_kinetic\_model\] along with the initial conditions $f_{ij}(0)=f^0_{ij}$. In addition, it depends continuously on the initial datum as stated by Theorem \[theo:cont\_dep\].
The given transition probability distribution satisfies Assumption \[ass:Lip\_P\], in fact $$\begin{aligned}
& {W_1\left({\mathcal{P}}({\mathbf{w}}_{\ast 1}\to\cdot\,\vert\,{\mathbf{w}}^\ast_1,\,\rho_1),\,{\mathcal{P}}({\mathbf{w}}_{\ast 2}\to\cdot\,\vert\,{\mathbf{w}}^\ast_2,\,\rho_2)\right)} \\
&\qquad =\sup_{\varphi\in C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)}\sum_{i=1}^{n}\sum_{j=1}^{m}\varphi({\mathbf{w}}_{ij})
\left({\mathcal{P}}^{ij}({\mathbf{w}}_{\ast 2},\,{\mathbf{w}}^\ast_2,\,\rho_2)-{\mathcal{P}}^{ij}({\mathbf{w}}_{\ast 1},\,{\mathbf{w}}^\ast_1,\,\rho_1)\right) \\
&\qquad \leq\left(\sum_{i=1}^{n}\sum_{j=1}^{m}{\operatorname{Lip}}({\mathcal{P}}^{ij})\right)
\Bigl(\abs{{\mathbf{w}}_{\ast 2}-{\mathbf{w}}_{\ast 1}}+\abs{{\mathbf{w}}^\ast_2-{\mathbf{w}}^\ast_1}+\abs{\rho_2-\rho_1}\Bigr).\end{aligned}$$ Furthermore, $f_0\in{\mathcal{M}_+^{\rho}(I^2)}$. Then, owing to Theorem \[theo:exist\_uniqueness\], we can assert that the Cauchy problem associated with \[eq:kinetic\_model\] admits a unique mild solution[^1].
The calculations preceding this theorem show that if the $f_{ij}(t)$’s satisfy \[eq:discr\_kinetic\_model\] then \[eq:discr\_f\] is indeed such a solution, considering also that it is nonnegative (cf. Remark \[rem:prop\_discr\_model\]) and matches the initial condition $f_0$.
Theorem \[theo:cont-discr\_link\] requires the mapping $({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast,\,\rho)\mapsto{\mathcal{P}}^{ij}({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast,\,\rho)$ to be Lipschitz continuous in $I^2\times I^2\times [0,\,1]$, but the solution depends ultimately only on the values ${\mathcal{P}}^{ij}({\mathbf{w}}_{i_\ast j_\ast},\,{\mathbf{w}}_{i^\ast j^\ast},\,\rho)$, cf. \[eq:discr\_kinetic\_model\]. Therefore, when constructing specific models, we can confine ourselves to specifying the values ${\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)$, taking for granted that they can be variously extended to points $({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)\ne ({\mathbf{w}}_{i_\ast j_\ast},\,{\mathbf{w}}_{i^\ast j^\ast})$ in a Lipschitz continuous way.
Modelling microscopic interactions {#sec:interactions}
==================================
From now on, we will systematically refer to the discrete-state setting ruled by . In order to describe the interactions among the vehicles, it is necessary to model the transition probabilities ${\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)$ associated with the jump processes over the lattice of discrete microscopic states.
As a first step, we propose the following factorisation: $$\begin{aligned}
{\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho) &=
{\operatorname{Prob}}(u_{j_\ast}\to u_j\,\vert\,v_{i_\ast},\,v_{i^\ast},\,\rho)\cdot{\operatorname{Prob}}(v_{i_\ast}\to v_i\,\vert\,v_{i^\ast},\,\rho) \\
& =: ({\mathcal{P}}')_{i_\ast j_\ast,i^\ast}^{j}(\rho)\cdot({\mathcal{P}}'')_{i_\ast i^\ast}^{i}(\rho),\end{aligned}$$ which implies that changes in the personal risk (first factor on the right-hand side) depend on the current speeds of the interacting pair, while the driving style (i.e., the way in which the speed changes, second factor on the right-hand side) is not directly influenced by the current personal risk. In a sense, we are interpreting the change of personal risk as a function of the driving conditions, albeit linked to the *subjectivity* of the drivers and hence described in probabilistic terms. By subjectivity we mean the fact that different drivers may not respond in the same way to the same conditions. More advanced models may account for a joint influence of speed and risk levels on binary interactions, but for the purposes of the present paper the approximation above appears to be satisfactory.
As a second step, we detail the transition probabilities $({\mathcal{P}}')_{i_\ast j_\ast,i^\ast}^{j}(\rho)$, $({\mathcal{P}}'')_{i_\ast i^\ast}^{i}(\rho)$ just introduced. It is worth stressing that they will be mainly inspired by a prototypical analysis of the driving styles. In particular, they will be parameterised by the vehicle density $\rho\in [0,\,1]$ so as to feed back the global traffic conditions to the local interaction rules. Other external *objective* factors which may affect the flow of vehicles and the personal risk, such as e.g., weather or road conditions (number of lanes, number of directions of travel, type of wearing course), will be summarised by a parameter $\alpha\in [0,\,1]$, whose low, resp. high, values stand for poor, resp. good, conditions.
The interpretation of $\alpha$ is conceptually analogous to that of $u$ discussed in Remark \[rem:u\]: its numerical values do not refer to actual physical (measured) ranges but serve to convey, in mathematical terms, the influence of external conditions on binary interactions.
Risk transitions {#sec:risk_trans}
----------------
In modelling the risk transition probability $$({\mathcal{P}}')_{i_\ast j_\ast,i^\ast}^{j}(\rho)={\operatorname{Prob}}(u_{j_\ast}\to u_j\,\vert\,v_{i_\ast},v_{i^\ast},\,\rho)$$ we consider two cases, depending on whether the vehicle with state $(v_{i_\ast},\,u_{j_\ast})$ interacts with a faster or a slower leading vehicle with speed $v_{i^\ast}$.
- If $v_{i_\ast}\leq v_{i^\ast}$ we set $$({\mathcal{P}}')_{i_\ast j_\ast,i^\ast}^{j}(\rho)=\alpha\rho\delta_{j,\max\{1,\,j_\ast-1\}}+(1-\alpha\rho)\delta_{j,j_\ast},$$ the symbol $\delta$ denoting here the Kronecker delta. In practice, we assume that the interaction with a faster leading vehicle can reduce the personal risk with probability $\alpha\rho$, which rises with high traffic congestion and good environmental conditions. The rationale is that the headway from a faster leading vehicle increases, which reduces the risk of collision especially when vehicles are packed (high $\rho$) or when speeds are presumably high (good environmental conditions, i.e., high $\alpha$). Alternatively, after the interaction the personal risk remains the same with the complementary probability.
- If $v_{i_\ast}>v_{i^\ast}$ we set $$({\mathcal{P}}')_{i_\ast j_\ast,i^\ast}^{j}(\rho)=\delta_{j,\min\{j_\ast+1,\,m\}},$$ i.e., we assume that the interaction with a slower leading vehicle can only increase the personal risk because the headway is reduced or overtaking is induced (see below).
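The two rules above translate into row-stochastic matrices over the risk classes. A minimal Python sketch (0-based indices instead of the paper's 1-based ones; `m`, `alpha`, `rho` are illustrative values):

```python
import numpy as np

def risk_transition(m, alpha, rho):
    """Risk transition matrices: Pdown for an interaction with a faster
    (or equally fast) leader, Pup for a slower leader. Entry [js, j] is
    the probability of jumping from risk class js to class j."""
    Pdown = np.zeros((m, m))
    Pup = np.zeros((m, m))
    for js in range(m):
        # Faster leader: risk decreases one class w.p. alpha*rho,
        # otherwise it stays the same (classes clamped at 0).
        Pdown[js, max(0, js - 1)] += alpha * rho
        Pdown[js, js] += 1 - alpha * rho
        # Slower leader: risk increases one class deterministically
        # (clamped at the top class m-1).
        Pup[js, min(js + 1, m - 1)] = 1.0
    return Pdown, Pup

Pd, Pu = risk_transition(m=4, alpha=0.8, rho=0.5)
assert np.allclose(Pd.sum(axis=1), 1.0) and np.allclose(Pu.sum(axis=1), 1.0)
```

The row-sum check mirrors property \[eq:tabgames\]: each pre-interaction state must map to a probability distribution over post-interaction states.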
Speed transitions {#sec:speed_trans}
-----------------
In modelling the speed transition probability $$({\mathcal{P}}'')_{i_\ast i^\ast}^{i}(\rho)={\operatorname{Prob}}(v_{i_\ast}\to v_i\,\vert\,v_{i^\ast},\,\rho)$$ we refer to [@puppo2016CMS], where the following three cases are considered:
- If $v_{i_\ast}<v_{i^\ast}$ then $$({\mathcal{P}}'')_{i_\ast i^\ast}^{i}(\rho)=\alpha(1-\rho)\delta_{i,i_\ast+1}+(1-\alpha(1-\rho))\delta_{i,i_\ast},$$ i.e., the vehicle with speed $v_{i_\ast}$ emulates the leading one with speed $v_{i^\ast}$ by accelerating to the next speed with probability $\alpha(1-\rho)$. This probability increases if environmental conditions are good and traffic is not too congested. Otherwise, the speed remains unchanged with the complementary probability.
- If $v_{i_\ast}>v_{i^\ast}$ then $$({\mathcal{P}}'')_{i_\ast i^\ast}^{i}(\rho)=\alpha(1-\rho)\delta_{i,i_\ast}+(1-\alpha(1-\rho))\delta_{i,i^\ast},$$ i.e., the vehicle with speed $v_{i_\ast}$ maintains its speed with probability $\alpha(1-\rho)$. The rationale is that if environmental conditions are good enough or traffic is sufficiently uncongested then it can overtake the slower leading vehicle with speed $v_{i^\ast}$. Otherwise, it is forced to slow down to the speed $v_{i^\ast}$ and to queue up, which happens with the complementary probability.
- If $v_{i_\ast}=v_{i^\ast}$ then $$\begin{aligned}
({\mathcal{P}}'')_{i_\ast i^\ast}^{i}(\rho) &= (1-\alpha)\rho\delta_{i,\max\{1,\,i_\ast-1\}}+\alpha(1-\rho)\delta_{i,\min\{i_\ast+1,\,n\}} \\
& \phantom{=} +(1-\alpha-(1-2\alpha)\rho)\delta_{i,i_\ast}.\end{aligned}$$ In this case there are three possible outcomes of the interaction: if environmental conditions are poor and traffic is congested the vehicle with speed $v_{i_\ast}$ slows down with probability $(1-\alpha)\rho$; if, instead, environmental conditions are good and traffic is light then it accelerates to the next speed (because e.g., it overtakes the leading vehicle) with probability $\alpha(1-\rho)$; finally, it can also keep its current speed with a probability which complements the sum of the previous two.
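The three speed rules can likewise be assembled into a tensor indexed by the pre-interaction class, the leader's class, and the post-interaction class. A sketch (0-based indices; `n`, `alpha`, `rho` are illustrative values):

```python
import numpy as np

def speed_transition(n, alpha, rho):
    """Speed transition tensor T[is_, ia, i]: probability that a vehicle
    in speed class is_ moves to class i after meeting a leader in class
    ia, following the three interaction rules above."""
    T = np.zeros((n, n, n))
    p = alpha * (1 - rho)
    for is_ in range(n):
        for ia in range(n):
            if is_ < ia:        # faster leader: accelerate w.p. p
                T[is_, ia, min(is_ + 1, n - 1)] += p
                T[is_, ia, is_] += 1 - p
            elif is_ > ia:      # slower leader: overtake or queue up
                T[is_, ia, is_] += p
                T[is_, ia, ia] += 1 - p
            else:               # equal speeds: brake / accelerate / keep
                T[is_, ia, max(0, is_ - 1)] += (1 - alpha) * rho
                T[is_, ia, min(is_ + 1, n - 1)] += p
                T[is_, ia, is_] += 1 - alpha - (1 - 2 * alpha) * rho
    return T

T = speed_transition(n=6, alpha=0.8, rho=0.4)
assert np.allclose(T.sum(axis=2), 1.0)
```

As with the risk transitions, each row of `T` sums to one, and the three probabilities of the equal-speed case sum to one identically in $\alpha$ and $\rho$.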
Case studies {#sec:simulations}
============
Fundamental diagrams of traffic {#sec:funddiag}
-------------------------------
Model \[eq:discr\_kinetic\_model\] can be used to investigate the long-term macroscopic dynamics resulting from the small-scale interactions among vehicles discussed in the previous section. Such dynamics are summarised by the well-known *fundamental* and *speed diagrams* of traffic, see e.g., [@li2011TRR], which express the average flux and mean speed of the vehicles at equilibrium, respectively, as functions of the vehicle density along the road. This information, typically obtained from experimental measurements [@bonzani2003MCM; @kerner2004BOOK], is here studied at a theoretical level in order to discuss qualitatively the impact of the driving style on the macroscopically observable traffic trends.
In Appendix \[app:basictheo\] we give sufficient conditions for the existence, uniqueness, and global attractiveness of equilibria $f_\infty\in{\mathcal{M}_+^{\rho}(I^2)}$ of \[eq:kinetic\_model\], cf. Theorems \[theo:equilibria\], \[theo:attractiveness\]. Here we claim, in particular, that if the transition probability distribution ${\mathcal{P}}$ has the special form discussed in Theorem \[theo:cont-discr\_link\] then $f_\infty$ is actually a discrete-state distribution function.
\[theo:discr\_equilibria\] Fix $\rho\in [0,\,1]$ and let ${\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)$ be like in Theorem \[theo:cont-discr\_link\]. Assume moreover that ${\operatorname{Lip}}({\mathcal{P}})<\frac{1}{2}$ (cf. Assumption \[ass:Lip\_P\]). Then the unique equilibrium distribution function $f_\infty\in{\mathcal{M}_+^{\rho}(I^2)}$, which is also globally attractive, has the form $f_\infty({\mathbf{w}})=\sum_{i=1}^{n}\sum_{j=1}^{m}f^\infty_{ij}\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}})$ with the coefficients $f^\infty_{ij}$ satisfying $$f^\infty_{ij}=\frac{1}{\rho}\sum_{i_\ast,i^\ast=1}^{n}\sum_{j_\ast,j^\ast=1}^{m}{\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)
f^\infty_{i_\ast j_\ast}f^\infty_{i^\ast j^\ast},
\quad
\begin{array}{l}
i=1,\,\dots,\,n \\
j=1,\,\dots,\,m.
\end{array}
\label{eq:discr_equilibria}$$
We consider directly the case $\rho>0$, for $\rho=0$ implies uniquely $f_\infty\equiv 0$.
Since ${\operatorname{Lip}}({\mathcal{P}})<\frac{1}{2}$, we know from Theorems \[theo:equilibria\], \[theo:attractiveness\] that \[eq:kinetic\_model\] admits a unique and globally attractive equilibrium distribution $f_\infty\in{\mathcal{M}_+^{\rho}(I^2)}$, which is found as the fixed point of the mapping $f\mapsto\frac{1}{\rho}Q^{+}(f,\,f)$. In particular, defining the subset $$D:=\left\{f\in{\mathcal{M}_+^{\rho}(I^2)}\,:\,f({\mathbf{w}})=\sum_{i=1}^{n}\sum_{j=1}^{m}f_{ij}\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}}),\
f_{ij}\geq 0,\ \sum_{i=1}^{n}\sum_{j=1}^{m}f_{ij}=\rho\right\},$$ it is easy to see that if ${\mathcal{P}}$ has the form indicated in Theorem \[theo:cont-discr\_link\] then the operator $\frac{1}{\rho}Q^{+}$ maps $D$ into itself. In fact, for $f\in D$ we get $$\frac{1}{\rho}Q^{+}(f,\,f)=\sum_{i=1}^{n}\sum_{j=1}^{m}\left(\frac{1}{\rho}\sum_{i_\ast,i^\ast=1}^{n}\sum_{j_\ast,j^\ast=1}^{m}
{\mathcal{P}}_{i_\ast j_\ast,i^\ast j^\ast}^{ij}(\rho)f_{i_\ast j_\ast}f_{i^\ast j^\ast}\right)\delta_{{\mathbf{w}}_{ij}}({\mathbf{w}}).$$ From here we also deduce formally \[eq:discr\_equilibria\]. Therefore, in order to get the thesis, it is sufficient to prove that $D$ is closed in ${\mathcal{M}_+^{\rho}(I^2)}$. In fact this will imply that $(D,\,W_1)$ is a complete metric space, and the Banach contraction principle will then locate the fixed point of $\frac{1}{\rho}Q^{+}$ in $D$.
Let $(f^k)\subseteq D$ be a convergent sequence in ${\mathcal{M}_+^{\rho}(I^2)}$ with respect to the $W_1$ metric. It is then Cauchy, hence given $\epsilon>0$ we find $N_\epsilon\in\mathbb{N}$ such that if $h,\,k>N_\epsilon$ then ${W_1\left(f^h,\,f^k\right)}<\epsilon$. This condition means $$\abs{\int_{I^2}\varphi({\mathbf{w}})\left(f^k({\mathbf{w}})-f^h({\mathbf{w}})\right)\,d{\mathbf{w}}}=
\abs{\sum_{i=1}^{n}\sum_{j=1}^{m}\varphi({\mathbf{w}}_{ij})\left(f^k_{ij}-f^h_{ij}\right)}<\epsilon$$ for every $\varphi\in C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)$. In particular, taking a function $\varphi$ which vanishes at every ${\mathbf{w}}_{ij}$ but one, say ${\mathbf{w}}_{\bar{\imath}\bar{\jmath}}$, we discover $\abs{\varphi({\mathbf{w}}_{\bar{\imath}\bar{\jmath}})}\abs{f^h_{\bar{\imath}\bar{\jmath}}-f^k_{\bar{\imath}\bar{\jmath}}}<\epsilon$ for all $h,\,k>N_\epsilon$. Thus we deduce that $(f^k_{ij})_k$ is a Cauchy sequence in ${\mathbb{R}}$, hence for all $i,\,j$ there exists $f_{ij}\in{\mathbb{R}}$ such that $f^k_{ij}\to f_{ij}$ ($k\to\infty$). Clearly $f_{ij}\geq 0$ because the $f^k_{ij}$’s are all non-negative by assumption; moreover, $\sum_{i=1}^{n}\sum_{j=1}^{m}f_{ij}=\lim_{k\to\infty}\sum_{i=1}^{n}\sum_{j=1}^{m}f^k_{ij}=\rho$. Therefore $f:=\sum_{i=1}^{n}\sum_{j=1}^{m}f_{ij}\delta_{{\mathbf{w}}_{ij}}\in D$.
We now claim that $f^k\to f$ in the $W_1$ metric: $$\begin{aligned}
{W_1\left(f^k,\,f\right)} &= \sup_{\varphi\in C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)}\int_{I^2}\varphi({\mathbf{w}})\left(f({\mathbf{w}})-f^k({\mathbf{w}})\right)\,d{\mathbf{w}}\\
&= \sup_{\varphi\in C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)}\sum_{i=1}^{n}\sum_{j=1}^{m}\varphi({\mathbf{w}}_{ij})\left(f_{ij}-f^k_{ij}\right) \\
&\leq \sum_{i=1}^{n}\sum_{j=1}^{m}\abs{f_{ij}-f^k_{ij}}\xrightarrow{k\to\infty}0.\end{aligned}$$ This implies that $D$ is closed and the proof is complete.
Under the assumptions of Theorem \[theo:discr\_equilibria\], \[eq:discr\_equilibria\] defines a mapping $[0,\,1]\ni\rho\mapsto\{f^\infty_{ij}(\rho)\}$, i.e., for every $\rho\in [0,\,1]$ there exist unique coefficients $f^\infty_{ij}$ solving \[eq:discr\_equilibria\] such that $\{f^\infty_{ij}\}$ is the equilibrium of system \[eq:discr\_kinetic\_model\] with, moreover, $\sum_{i=1}^{n}\sum_{j=1}^{m}f^\infty_{ij}=\rho$.
Owing to the argument above, for each $\rho$ it is possible to compute the corresponding average flux and mean speed at equilibrium by means of formulas \[eq:ave\_flux\], \[eq:mean\_speed\], evaluated at the kinetic distribution $f_\infty$. This generates the two mappings $$\rho\mapsto q(\rho):=\sum_{i=1}^{n}v_i\sum_{j=1}^{m}f^\infty_{ij}(\rho),
\qquad \rho\mapsto V(\rho):=\frac{q(\rho)}{\rho},
\label{eq:funddiag}$$ which are the theoretical definitions of the fundamental and speed diagrams, respectively, of traffic. Furthermore, it is possible to estimate the dispersion of the microscopic speeds at equilibrium by computing the standard deviation of the speed: $$\sigma_V(\rho):=\sqrt{\frac{1}{\rho}\sum_{i=1}^{n}{\left(v_i-V(\rho)\right)}^2\sum_{j=1}^{m}f^\infty_{ij}(\rho)},$$ that of the flux being $\rho\sigma_V(\rho)$, which gives a measure of the homogeneity of the driving styles of the drivers.
Figure \[fig:funddiag\] shows the diagrams \[eq:funddiag\], with the corresponding standard deviations, for different values of the constant $\alpha$ parameterising the transition probabilities, cf. Sections \[sec:risk\_trans\], \[sec:speed\_trans\]. Each pair $(\rho,\,q(\rho))$, $(\rho,\,V(\rho))$ has been computed by integrating \[eq:discr\_kinetic\_model\] numerically up to a final time large enough for the equilibrium to be reached.
![Average flux (left column) and mean speed (right column) vs. traffic density for different levels of the quality of the environment. Red dashed lines are the respective standard deviations. $n=6$ uniformly spaced speed classes have been used.[]{data-label="fig:funddiag"}](funddiag){width="90.00000%"}
For $\alpha=1$ (best environmental conditions) the diagrams are the same as those studied analytically in [@fermo2014DCDSS]. In particular, they show a clear separation between the so-called *free* and *congested phases* of traffic: the former, taking place at low density ($\rho<0.5$), is characterised by the fact that vehicles travel at the maximum speed with zero standard deviation; the latter, taking place instead at high density ($\rho>0.5$), is characterised by a certain dispersion of the microscopic speeds with respect to their mean value. In [@fermo2014DCDSS] the critical value $\rho=0.5$ has been associated with a supercritical bifurcation of the equilibria, thereby providing a precise mathematical characterisation of the phase transition.
For $\alpha<1$, analogously detailed analytical results are not yet available and, to our knowledge, Theorems \[theo:equilibria\], \[theo:attractiveness\] in Appendix \[app:basictheo\] are the first results giving at least sufficient conditions for the qualitative characterisation of equilibrium solutions to \[eq:kinetic\_model\] in the general case. According to the graphs in Figure \[fig:funddiag\], as $\alpha$ decreases the model predicts progressively lower critical values of the density threshold triggering the phase transition. Moreover, consistently with experimental observations, cf. e.g., [@kerner2004BOOK], some scattering of the diagrams appears also at low density, along with a *capacity drop* visible in the average flux (i.e., the maximum flux in the congested phase is lower than the maximum flux in the free phase), which separates the free and congested phases as described in [@zhang2005TRB].
Compared to typical experimental data, the most realistic diagrams seem to be those obtained for $\alpha=0.8$, which denotes suboptimal though not excessively poor environmental conditions. It is worth stressing that the realism of the theoretical diagrams is relevant not only for supporting the derivation of the macroscopic kinematic features of the equilibrium flow of vehicles from microscopic, far-from-equilibrium interaction rules. It also constitutes a reliable basis for interpreting, in a similar multiscale perspective, the link with risk and safety issues, for which synthetic and informative empirical data comparable to the fundamental and speed diagrams are, to the authors' knowledge, not currently available for direct comparison.
Risk diagrams of traffic {#sec:riskdiag}
------------------------
Starting from the statistical distribution of the risk given in , we propose the following definition for the probability of accident along the road:
\[def:p\_acc\] Let $\bar{u}\in (0,\,1)$ be a risk threshold above which the personal risk for a representative vehicle is considered too high. We define the instantaneous *probability of accident* $P=P(t)$ (associated with $\bar{u}$) as the normalised number of vehicles whose personal risk is, at time $t$, greater than or equal to $\bar{u}$: $$P(t):=\frac{1}{\rho}\int_{\bar{u}}^1\varphi(t,\,u)\,du=\frac{1}{\rho}\int_{\bar{u}}^1\int_0^1 f(t,\,v,\,u)\,dv\,du.$$
Asymptotically, using the discrete-state equilibrium distribution function $f_\infty=f_\infty(\rho)$ found in Theorem \[theo:discr\_equilibria\], we obtain the mapping $$\rho\mapsto P(\rho):=\frac{1}{\rho}\sum_{j\,:\,u_j\geq\bar{u}}\sum_{i=1}^{n}f^\infty_{ij}(\rho),
\label{eq:p_acc}$$ which shows that the probability of accident is, in the long run, a function of the traffic density. We call this mapping the *accident probability diagram*.
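As a numerical sketch, the double sum defining $P(\rho)$ can be evaluated directly once a discrete equilibrium distribution is available. In the snippet below (Python/NumPy, not part of the paper), `f_inf` is a normalised random stand-in for the actual equilibria of Theorem \[theo:discr\_equilibria\], and the class numbers and threshold are the illustrative values used later in the text.

```python
import numpy as np

# Direct evaluation of the accident probability P(rho) from a discrete
# equilibrium distribution f_inf[i, j] over n speed and m risk classes.
# f_inf here is a normalised random stand-in: the actual equilibria come
# from the kinetic model and are not reproduced in this sketch.
n, m, rho = 6, 3, 0.4
u = np.linspace(0.0, 1.0, m)            # uniformly spaced risk classes u_j
rng = np.random.default_rng(0)
f_inf = rng.random((n, m))
f_inf *= rho / f_inf.sum()              # total mass equals rho

u_bar = 0.7                             # risk threshold
P = f_inf[:, u >= u_bar].sum() / rho    # normalised mass with u_j >= u_bar
```

By construction $0\leq P\leq 1$, since the selected mass is at most the total mass $\rho$.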
Definition \[def:p\_acc\] and, in particular, depend on the threshold $\bar{u}$, which needs to be estimated in order for the model to serve quantitative purposes. If, for a given road, the empirical probability of accident is known (for instance, from time series on the frequency of accidents, see e.g., [@abdel-aty2000AAP; @miaou1993AAP; @oppe1989AAP]), then it is possible to find $\bar{u}$ by solving an inverse problem which makes the theoretical probability match the experimental one. This way, the road under consideration can be assigned the *risk threshold* $\bar{u}$.
The question then arises of how to use the information provided by the risk threshold $\bar{u}$ for the assessment of safety standards. In fact, the personal risk $u$, albeit a primitive variable of the model, is not a quantity that can actually be measured for each vehicle: a macroscopic synthesis is necessary. Quoting from [@KiwiRAP2012]:
> Personal risk is most of interest to the public, as it shows the risk to road users, as individuals.
To this purpose, we need to further post-process the statistical information brought by the kinetic model. Taking inspiration from the fundamental and speed diagrams of traffic discussed in Section \[sec:funddiag\], a conceivable approach is to link the personal risk, conveniently understood in an average sense, to the macroscopic traffic density along the road. For this we define:
\[def:riskdiag\] The *risk diagram* of traffic is the mapping (cf. ) $$\rho\mapsto U(\rho):=\frac{1}{\rho}\sum_{j=1}^{m}u_j\sum_{i=1}^{n}f^\infty_{ij}(\rho),
\label{eq:riskdiag}$$ $f_\infty=f_\infty(\rho)$ being the equilibrium kinetic distribution function of Theorem \[theo:discr\_equilibria\]. The related standard deviation is $$\sigma_U(\rho):=\sqrt{\frac{1}{\rho}\sum_{j=1}^{m}{\left(u_j-U(\rho)\right)}^2\sum_{i=1}^{n}f^\infty_{ij}(\rho)}.$$
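The two statistics of Definition \[def:riskdiag\] amount to the mean and standard deviation of the marginal risk distribution extracted from $f_\infty$. A minimal sketch (Python/NumPy, with a random stand-in for the model's actual equilibria):

```python
import numpy as np

# Average risk U(rho) and standard deviation sigma_U(rho) from a discrete
# equilibrium distribution f_inf[i, j] (random stand-in, as the model's
# actual equilibria are not reproduced here).
n, m, rho = 6, 3, 0.4
u = np.linspace(0.0, 1.0, m)            # risk classes u_j
rng = np.random.default_rng(1)
f_inf = rng.random((n, m)); f_inf *= rho / f_inf.sum()

p_u = f_inf.sum(axis=0) / rho           # marginal risk distribution (sums to 1)
U = (u * p_u).sum()                     # average risk
sigma_U = np.sqrt(((u - U) ** 2 * p_u).sum())
```

Summing over the speed index first makes explicit that $U(\rho)$ and $\sigma_U(\rho)$ depend on $f_\infty$ only through its risk marginal.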
Using the tools provided by Definition \[def:riskdiag\], we can finally fix a *safety criterion* which discriminates between safety and risk regimes of traffic depending on the traffic loads:
\[def:safety\_crit\] Let $\bar{u}\in (0,\,1)$ be the risk threshold fixed by Definition \[def:p\_acc\]. The *safety regime* of traffic along a given road corresponds to the traffic loads $\rho\in [0,\,1]$ such that $$U(\rho)+\sigma_U(\rho)<\bar{u}.$$ The complementary regime, i.e., the one for which $U(\rho)+\sigma_U(\rho)\geq\bar{u}$, is the *risk regime*.
An alternative, less precautionary, criterion might identify the safety regime with the traffic loads such that $U(\rho)<\bar{u}$ and the risk regime with those such that $U(\rho)\geq\bar{u}$.
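As an illustration, the safety criterion can be applied pointwise to sampled curves $\rho\mapsto U(\rho)$, $\sigma_U(\rho)$. The curves below are hypothetical stand-ins (smooth bumps peaked near a phase-transition density), chosen purely to show the mechanics; the actual curves follow from the model's equilibria. Python/NumPy assumed.

```python
import numpy as np

# Pointwise application of the safety criterion U(rho) + sigma_U(rho) < u_bar.
# The curves below are hypothetical stand-ins peaked near a phase-transition
# density; the actual ones follow from the model's equilibria.
rho = np.linspace(0.01, 1.0, 100)
U = 0.8 * np.exp(-10.0 * (rho - 0.3) ** 2)        # illustrative average risk
sigma_U = 0.1 * np.exp(-5.0 * (rho - 0.3) ** 2)   # illustrative std deviation
u_bar = 0.7

safe = U + sigma_U < u_bar     # boolean mask of safety-regime traffic loads
# The complementary (risk) regime is a density window around the peak.
```

The boundaries of the resulting density window play the role of $\rho_1$, $\rho_2$ in the discussion of Figure \[fig:postproc\]b below.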
![Average risk (left column), with related standard deviation (red dashed lines), and probability of accident (right column) vs. traffic density for different levels of the quality of the environment. $n=6$ and $m=3$ uniformly spaced speed and risk classes, respectively, have been used[]{data-label="fig:riskdiag"}](riskdiag){width="90.00000%"}
Figure \[fig:riskdiag\] shows the risk diagram and the probability of accident at equilibrium obtained numerically for various environmental conditions as described in Section \[sec:funddiag\], using the heuristic risk threshold $\bar{u}=0.7$.
For $\alpha=1$ the average risk and the probability of accident are deterministically zero for all values of the traffic density in the free phase ($\rho<0.5$). This is a consequence of the fact that, as shown by the corresponding diagrams in Figure \[fig:funddiag\], at low density in optimal environmental conditions vehicles virtually do not interact, all of them travelling undisturbed at the maximum speed. In practice, vehicles behave as if the road were empty and consequently the model predicts maximal safety with no possibility of collisions. In the congested phase ($\rho>0.5$), instead, the average risk and the probability of accident rise suddenly to a positive value, following the emergence of the scattering of the microscopic speeds, see again the corresponding panels in Figure \[fig:funddiag\]. Then they decrease monotonically to zero as $\rho$ approaches $1$, since in a full traffic jam vehicles do not move. Notice that the maximum of both the average risk and the probability of accident occurs at the critical density value $\rho=0.5$.
For $\alpha<1$, the main features of the diagrams $\rho\mapsto U(\rho)$ and $\rho\mapsto P(\rho)$ described in the ideal prototypical case above remain unchanged. In particular, a comparison with Figure \[fig:funddiag\] shows that the maximum of both diagrams is still reached at the density value triggering the transition from free to congested traffic, see also Figure \[fig:postproc\]a. This agrees well with intuition, as it identifies the phase transition as the most risky situation for drivers. It is worth remarking that such a macroscopically observable fact has not been postulated in the construction of the model but has emerged from more elementary microscopic interaction rules. For $\alpha=0.6,\,0.8$, the average risk and the probability of accident take, in the free phase of traffic, a realistic nonzero value, which first increases before the phase transition and then decreases to zero in the congested phase. Notice that the standard deviation of the risk is higher in the free phase than in the congested one, which again matches intuition, considering that at low density the movement of single vehicles is less constrained by the global flow. For $\alpha=0.5$, environmental conditions are so poor that $U(\rho),\,P(\rho)\to 1$ when $\rho\to 0^+$, namely at the maximum mean speed (cf. the corresponding panels of Figure \[fig:funddiag\]).
![**a.** Comparison of the curves , , in the case $\alpha=0.8$. **b.** Determination of safety and risk regimes of traffic for $\alpha=0.8$, with corresponding probabilities of accident.[]{data-label="fig:postproc"}](postproc){width="\textwidth"}
According to [@KiwiRAP2012]:
> Personal risk is typically higher in more difficult terrains, where traffic volumes and road standards are often lower.
By looking at the graphs in the first column of Figure \[fig:riskdiag\], we see that the results of the model match this experimental observation qualitatively well: for low $\rho$ and decreasing $\alpha$, the average personal risk indeed tends to increase.
Again, the most realistic (namely suboptimal, though not excessively poor) scenario appears to be the one described by $\alpha=0.8$. In this case, cf. Figure \[fig:postproc\]b, the safety criterion of Definition \[def:safety\_crit\], i.e., $$U(\rho)+\sigma_U(\rho)<0.7,$$ identifies two safety regimes for traffic loads $\rho\in [0,\,\rho_1)$ and $\rho\in (\rho_2,\,1]$, with $\rho_1\approx 0.275$ and $\rho_2\approx 0.51$, respectively. The first one corresponds to a probability of accident $P(\rho)\lesssim 27\%$, the second one to $P(\rho)\lesssim 38\%$. Not surprisingly, the maximum admissible probability of accident in free flow ($\rho<\rho_1$) is lower than that in congested flow ($\rho>\rho_2$), meaning that the safety criterion of Definition \[def:safety\_crit\] turns out to be more restrictive in the first case than in the second. This can be understood by noting that in free flow speeds are higher and the movement of vehicles is less constrained by the global flow, which imposes tighter safety standards.
Conclusions and perspectives {#sec:conclusions}
============================
In this paper we have proposed a Boltzmann-type kinetic model which describes the influence of the driving style on the personal driving risk in terms of microscopic binary interactions among the vehicles. In particular, speed transitions due to encounters with other vehicles, and the related changes of personal risk, are described in probability, thereby accounting for the interpersonal variability of the human behaviour, hence ultimately for the *subjective* component of the risk. Moreover, they are parameterised by the local density of vehicles along the road and by the environmental conditions (for instance, type of road, weather conditions), so as to include in the mathematical description also the *objective* component of the risk.
By studying the equilibrium solutions of the model, we have defined two macroscopic quantities of interest for the global assessment of the risk conditions, namely the *risk diagram* and the *accident probability diagram*. The former gives the average risk along the road and the latter the probability of accident both as functions of the density of vehicles, namely of the level of traffic congestion. These diagrams compare well with the celebrated fundamental and speed diagrams of traffic, also obtainable from the equilibrium solutions of our kinetic model, in that they predict the maximum risk across the phase transition from free to congested flow, when several perturbative phenomena are known to occur in the macroscopic hydrodynamic behaviour of traffic (such as e.g., capacity drop [@zhang2005TRB], scattering of speed and flux and appearance of a third phase of “synchronised flow” [@kerner2004BOOK]). Moreover, within the free and congested regimes they are in good agreement with the experimental findings of accident data collection campaigns: for instance, they predict that the personal risk rises in light traffic and poor environmental conditions, coherently with what is stated e.g., in [@KiwiRAP2012].
By using the aforesaid diagrams we have proposed the definition of a *safety criterion* which, upon assigning to a given road a risk threshold based on the knowledge of real data on accidents, identifies *safety* and *risk regimes* depending on the volume of traffic. Once again, it turns out that the risk regime consists of a range of vehicle densities encompassing the critical one at which the phase transition occurs. This type of information is perhaps more directly useful to the public than to traffic controlling authorities, because it shows the average risk that a representative road user is subject to. Nevertheless, by identifying traffic loads which may pose safety threats, it also indicates which densities should preferably be avoided along the road and when risk-reducing measures should be activated.
This work should be considered as a very first attempt to formalise, by a mathematical model, the risk dynamics in vehicular traffic from the point of view of *simulation* and *prediction* rather than of mere statistical description. Several improvements and developments are of course possible, which can take advantage of the existing literature on kinetic models of vehicular traffic: for instance, one may address the spatially inhomogeneous problem [@delitala2007M3AS; @fermo2013SIAP] to track “risk waves” along the road; or the problem on networks [@fermo2015M3AS] to study the propagation of the risk on a set of interconnected roads; or even the impact of different types of vehicles, which form a “traffic mixture” [@puppo2016CMS], on the risk distribution. On the other hand, the ideas presented in this paper may constitute the basis for modelling risk and safety aspects of other systems of interacting agents particularly affected by such issues. This is the case of, e.g., human crowds, for which a quite wide, though relatively recent, literature already exists (see [@cristiani2014BOOK Chapter 4] for a survey) and in some cases [@agnelli2015M3AS] uses a kinetic formalism close to the one which inspired the present work.
Basic theory of the kinetic model in Wasserstein spaces {#app:basictheo}
=======================================================
Equation , complemented with a suitable initial condition, produces the following Cauchy problem: $$\begin{cases}
\partial_t f=Q^{+}(f,\,f)-\rho f, & t>0,\,{\mathbf{w}}\in I^2 \\[1mm]
f(0,\,{\mathbf{w}})=f_0({\mathbf{w}}), & {\mathbf{w}}\in I^2
\end{cases}
\label{eq:cauchy}$$ with the compatibility condition $\int_{I^2}f_0({\mathbf{w}})\,d{\mathbf{w}}=\rho$. Recall that $I^2=[0,\,1]^2\subset{\mathbb{R}}^2$ is the space of the microscopic states. The problem can be rewritten in mild form by multiplying both sides of the equation by $e^{\rho t}$ and integrating in time: $$\begin{aligned}
\begin{aligned}[t]
f(t,\,{\mathbf{w}}) &= e^{-\rho t}f_0({\mathbf{w}}) \\
&\phantom{=} +\int_0^t e^{\rho(s-t)}\iint_{I^4}{\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)f(s,\,{\mathbf{w}}_\ast)f(s,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast\,ds,
\end{aligned}
\label{eq:mild}\end{aligned}$$ where we have used that, in view of , $\rho=\int_{I^2}f(t,\,{\mathbf{w}})\,d{\mathbf{w}}$ is constant in $t$.
In order to allow for measure-valued kinetic distribution functions, as it happens in the model discussed from Section \[sec:discrete\] onwards, an appropriate space in which to study is $X:=C([0,\,T];\,{\mathcal{M}_+^{\rho}(I^2)})$, where $T>0$ is a final time and ${\mathcal{M}_+^{\rho}(I^2)}$ is the space of positive measures on $I^2$ having mass $\rho\geq 0$. An element $f\in X$ is then a continuous mapping $t\mapsto f(t)$, where, for all $t\in [0,\,T]$, $f(t)$ is a positive measure with $\int_{I^2}f(t,\,{\mathbf{w}})\,d{\mathbf{w}}=\rho$.
$X$ is a complete metric space with the distance $\sup\limits_{t\in [0,\,T]}{W_1\left(f(t),\,g(t)\right)}$, where $${W_1\left(f(t),\,g(t)\right)}=\sup_{\varphi\in{\operatorname{Lip}}_1(I^2)}\int_{I^2}\varphi({\mathbf{w}})(g(t,\,{\mathbf{w}})-f(t,\,{\mathbf{w}}))\,d{\mathbf{w}}.
\label{eq:wass}$$ is the *1-Wasserstein distance* between $f(t),\,g(t)\in{\mathcal{M}_+^{\rho}(I^2)}$. In particular, $${\operatorname{Lip}}_1(I^2)=\{\varphi\in C(I^2)\,:\,{\operatorname{Lip}}(\varphi)\leq 1\},$$ ${\operatorname{Lip}}(\varphi)$ denoting the Lipschitz constant of $\varphi$.
The definition of $W_1$ follows from the Kantorovich-Rubinstein duality formula, see e.g., [@ambrosio2008BOOK Chapter 7]. However, since in ${\mathcal{M}_+^{\rho}(I^2)}$ all measures carry the same mass and, furthermore, the domain $I^2$ is bounded with $\operatorname{diam}(I^2)\leq 2$, the supremum at the right-hand side is actually the same as that computed over the smaller set $C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)$, where $$C_{b,1}(I^2)=\{\varphi\in C(I^2)\,:\,\norm{\varphi}_\infty\leq 1\},$$ see [@ulikowska2013PhDTHESIS Chapter 1]. Hence we also have: $${W_1\left(f(t),\,g(t)\right)}=\sup_{\varphi\in C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)}\int_{I^2}\varphi({\mathbf{w}})(g(t,\,{\mathbf{w}})-f(t,\,{\mathbf{w}}))\,d{\mathbf{w}}.
\label{eq:wass_Cb1}$$ In the following we will use both and interchangeably.
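For discrete measures with finite support, $W_1$ can also be computed from the primal optimal-transport formulation, which is equivalent to the dual forms above by Kantorovich-Rubinstein duality. The sketch below (Python with SciPy assumed, not part of the paper) solves the primal linear program for two measures of equal mass on points of $I^2$.

```python
import numpy as np
from scipy.optimize import linprog

# 1-Wasserstein distance between two discrete measures of equal mass on I^2,
# via the primal optimal-transport linear program (a sketch; the paper works
# with the dual Kantorovich-Rubinstein form).
def w1(points_f, mass_f, points_g, mass_g):
    nf, ng = len(mass_f), len(mass_g)
    # Cost matrix: Euclidean distances between support points.
    C = np.linalg.norm(points_f[:, None, :] - points_g[None, :, :], axis=2)
    A_eq = []
    for i in range(nf):                       # row sums of the plan = mass_f
        row = np.zeros((nf, ng)); row[i, :] = 1.0
        A_eq.append(row.ravel())
    for j in range(ng):                       # column sums of the plan = mass_g
        col = np.zeros((nf, ng)); col[:, j] = 1.0
        A_eq.append(col.ravel())
    b_eq = np.concatenate([mass_f, mass_g])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

pts = np.array([[0.0, 0.0], [1.0, 1.0]])
# Moving mass 0.5 from (0, 0) to (1, 1) costs 0.5 * sqrt(2).
d = w1(pts, np.array([1.0, 0.0]), pts, np.array([0.5, 0.5]))
```

One of the equality constraints is redundant (both marginals carry the same mass), which the solver handles without modification.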
\[rem:wass\_lambda\] Let $\lambda>0$ and let $\varphi\in C(I^2)$ with ${\operatorname{Lip}}(\varphi)\leq\lambda$. Then $$\begin{aligned}
\int_{I^2}\varphi({\mathbf{w}})(g(t,\,{\mathbf{w}})-f(t,\,{\mathbf{w}}))\,d{\mathbf{w}}&= \lambda\int_{I^2}\frac{\varphi({\mathbf{w}})}{\lambda}(g(t,\,{\mathbf{w}})-f(t,\,{\mathbf{w}}))\,d{\mathbf{w}}\\
&\leq \lambda{W_1\left(f(t),\,g(t)\right)},\end{aligned}$$ considering that $\frac{\varphi}{\lambda}\in{\operatorname{Lip}}_1(I^2)$.
We will occasionally use this property in the proofs of the forthcoming theorems. Notice that, if $\lambda<1$, this estimate is sharper than the one obtained by directly using the fact that $\varphi\in{\operatorname{Lip}}_1(I^2)$ (i.e., the one without $\lambda$ at the right-hand side).
To establish the next results, we will always assume that the transition probability distribution ${\mathcal{P}}$ satisfies the following Lipschitz continuity property:
\[ass:Lip\_P\] Let ${\mathcal{P}}({\mathbf{w}}_\ast\to\cdot\,\vert\,{\mathbf{w}}^\ast,\,\rho)\in{\mathscr{P}}(I^2)$ for all ${\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast\in I^2$, $\rho\in [0,\,1]$, where ${\mathscr{P}}(I^2)$ is the space of probability measures on $I^2$. We assume that there exists ${\operatorname{Lip}}({\mathcal{P}})>0$, *which may depend on* $\rho$ (although we do not write such a dependence explicitly), such that $${W_1\left({\mathcal{P}}({\mathbf{w}}_{\ast 1}\to\cdot\,\vert\,{\mathbf{w}}^\ast_1,\,\rho),\,{\mathcal{P}}({\mathbf{w}}_{\ast 2}\to\cdot\,\vert\,{\mathbf{w}}^\ast_2,\,\rho)\right)}
\leq{\operatorname{Lip}}({\mathcal{P}})\left(\abs{{\mathbf{w}}_{\ast 2}-{\mathbf{w}}_{\ast 1}}+\abs{{\mathbf{w}}^\ast_2-{\mathbf{w}}^\ast_1}\right)$$ for all ${\mathbf{w}}_{\ast 1},\,{\mathbf{w}}_{\ast 2},\,{\mathbf{w}}^\ast_1,\,{\mathbf{w}}^\ast_2\in I^2$ and all $\rho\in [0,\,1]$.
Existence and uniqueness of the solution
----------------------------------------
Taking advantage of the mild formulation of the problem, we apply the Banach fixed-point theorem in $X$ to prove:
\[theo:exist\_uniqueness\] Fix $\rho\in [0,\,1]$ and let $f_0\in{\mathcal{M}_+^{\rho}(I^2)}$. There exists a unique $f\in C([0,\,+\infty);\,{\mathcal{M}_+^{\rho}(I^2)})$ which solves .
We assume $\rho>0$ for, if $\rho=0$, the unique solution to is clearly $f\equiv 0$ and we are done. We fix $T>0$ and we introduce the operator ${\mathscr{S}}$ defined on $X$ as $$\begin{aligned}
{\mathscr{S}}(f)(t,\,{\mathbf{w}}) &:= e^{-\rho t}f_0({\mathbf{w}}) \\
&\phantom{:=} +\int_0^t e^{\rho(s-t)}\iint_{I^4}{\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)f(s,\,{\mathbf{w}}_\ast)f(s,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast\,ds.\end{aligned}$$ Then we restate as $f={\mathscr{S}}(f)$, meaning that solutions to are fixed points of ${\mathscr{S}}$ on $X$. Now we claim that:
- ${\mathscr{S}}(X)\subseteq X$.\
Let $f\in X$. The non-negativity of $f_0$ (by assumption) and that of ${\mathcal{P}}$ (by construction) give immediately that ${\mathscr{S}}(f)(t)$ is a positive measure for all $t\in [0,\,T]$. Moreover, a simple calculation using property shows that the mass of ${\mathscr{S}}(f)(t)$ is $$\int_{I^2}{\mathscr{S}}(f)(t,\,{\mathbf{w}})\,d{\mathbf{w}}=\rho e^{-\rho t}+\rho^2\int_0^t e^{\rho(s-t)}\,ds=\rho.$$ Therefore we conclude that ${\mathscr{S}}(f)(t)\in {\mathcal{M}_+^{\rho}(I^2)}$ for all $t\in [0,\,T]$.
To check the continuity of the mapping $t\mapsto{\mathscr{S}}(f)(t)$ we define $${\mathscr{I}}(\varphi)(s):=\iint_{I^4}\left(\int_{I^2}\varphi({\mathbf{w}}){\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)\,d{\mathbf{w}}\right)
f(s,\,{\mathbf{w}}_\ast)f(s,\,{\mathbf{w}}^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast,$$ for $\varphi\in C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)$, then we take $t_1,\,t_2\in [0,\,T]$ with, say, $t_1\leq t_2$ and we compute: $$\begin{aligned}
{W_1\left({\mathscr{S}}(f)(t_1),\,{\mathscr{S}}(f)(t_2)\right)} &= \sup_{\varphi\in C_{b,1}(I^2)\cap{\operatorname{Lip}}_1(I^2)}\Biggl[\left(e^{-\rho t_2}-e^{-\rho t_1}\right)\int_{I^2}\varphi({\mathbf{w}})f_0({\mathbf{w}})\,d{\mathbf{w}}\\
&\phantom{=} +\int_0^{t_2}e^{\rho(s-t_2)}{\mathscr{I}}[\varphi](s)\,ds-\int_0^{t_1}e^{\rho(s-t_1)}{\mathscr{I}}[\varphi](s)\,ds\Biggr] \\
&\leq \rho\abs{e^{-\rho t_2}-e^{-\rho t_1}}\left(1+\rho\int_0^{t_1}e^{\rho s}\,ds\right)+\rho^2e^{-\rho t_2}\int_{t_1}^{t_2}e^{\rho s}\,ds \\
&\leq 3\rho^2\abs{t_2-t_1},\end{aligned}$$ where we have used the Lipschitz continuity of the exponential function and the fact that $\abs{{\mathscr{I}}(\varphi)(s)}\leq\rho^2$. Finally, this says that ${\mathscr{S}}(f)\in X$.
- If $T>0$ is sufficiently small then ${\mathscr{S}}$ is a contraction on $X$.\
Let $f,\,g\in X$, $\varphi\in{\operatorname{Lip}}_1(I^2)$, and define $$\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast):=\int_{I^2}\varphi({\mathbf{w}}){\mathcal{P}}({\mathbf{w}}_\ast\to{\mathbf{w}}\,\vert\,{\mathbf{w}}^\ast,\,\rho)\,d{\mathbf{w}}.
\label{eq:psi}$$ Notice preliminarily that, owing to Assumption \[ass:Lip\_P\], $$\abs{\psi({\mathbf{w}}_{\ast 2},\,{\mathbf{w}}^\ast_2)-\psi({\mathbf{w}}_{\ast 1},\,{\mathbf{w}}^\ast_1)}\leq{\operatorname{Lip}}({\mathcal{P}})\left(\abs{{\mathbf{w}}_{\ast 2}-{\mathbf{w}}_{\ast 1}}+\abs{{\mathbf{w}}^\ast_2-{\mathbf{w}}^\ast_1}\right).$$ Then: $$\begin{aligned}
\int_{I^2} & \varphi({\mathbf{w}})({\mathscr{S}}(g)(t,\,{\mathbf{w}})-{\mathscr{S}}(f)(t,\,{\mathbf{w}}))\,d{\mathbf{w}}\\
&= \int_0^t e^{\rho(s-t)}\iint_{I^4}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)(g_\ast g^\ast-f_\ast f^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast\,ds
\intertext{($f_\ast$, $f^\ast$ being shorthand for $f(s,\,{\mathbf{w}}_\ast)$, $f(s,\,{\mathbf{w}}^\ast)$ and analogously $g_\ast$, $g^\ast$)}
&= \int_0^t e^{\rho(s-t)}\int_{I^2}\left(\int_{I^2}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)g_\ast\,d{\mathbf{w}}_\ast\right)(g^\ast-f^\ast)\,d{\mathbf{w}}^\ast\,ds \\
&\phantom{=} +\int_0^t e^{\rho(s-t)}\int_{I^2}\left(\int_{I^2}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)f^\ast\,d{\mathbf{w}}^\ast\right)(g_\ast-f_\ast)\,d{\mathbf{w}}_\ast\,ds.\end{aligned}$$ Notice that $$\abs{\int_{I^2}\Bigl(\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast_2)-\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast_1)\Bigr)g_\ast\,d{\mathbf{w}}_\ast}\leq
\rho{\operatorname{Lip}}({\mathcal{P}})\abs{{\mathbf{w}}^\ast_2-{\mathbf{w}}^\ast_1}$$ and that the same holds also for the mapping ${\mathbf{w}}_\ast\mapsto\int_{I^2}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)f^\ast\,d{\mathbf{w}}^\ast$. Hence we continue the previous calculation by appealing to Remark \[rem:wass\_lambda\] (at the right-hand side) and to the arbitrariness of $\varphi$ to discover: $$\begin{aligned}
{W_1\left({\mathscr{S}}(f)(t),\,{\mathscr{S}}(g)(t)\right)} &\leq 2\rho{\operatorname{Lip}}{({\mathcal{P}})}\int_0^t e^{\rho(s-t)}{W_1\left(f(s),\,g(s)\right)}\,ds \\
&\leq 2{\operatorname{Lip}}{({\mathcal{P}})}\left(1-e^{-\rho t}\right)\sup_{t\in [0,\,T]}{W_1\left(f(t),\,g(t)\right)},\end{aligned}$$ whence finally $$\sup_{t\in [0,\,T]}{W_1\left({\mathscr{S}}(f)(t),\,{\mathscr{S}}(g)(t)\right)}\leq 2{\operatorname{Lip}}{({\mathcal{P}})}\left(1-e^{-\rho T}\right)\sup_{t\in [0,\,T]}{W_1\left(f(t),\,g(t)\right)}.$$ From this inequality we see that:
1. if ${\operatorname{Lip}}{({\mathcal{P}})}>\frac{1}{2}$ then it suffices to take $T<\frac{1}{\rho}\log{\frac{2{\operatorname{Lip}}{({\mathcal{P}})}}{2{\operatorname{Lip}}{({\mathcal{P}})}-1}}$ to obtain that ${\mathscr{S}}$ is a contraction on $X$;
2. if ${\operatorname{Lip}}{({\mathcal{P}})}\leq\frac{1}{2}$ then ${\mathscr{S}}$ is a contraction on $X$ for every $T>0$.
Owing to the properties above, the Banach fixed-point theorem implies the existence of a unique fixed point $f\in X$ of ${\mathscr{S}}$ which solves . If ${\operatorname{Lip}}{({\mathcal{P}})}\leq\frac{1}{2}$ this solution is global in time, whereas if ${\operatorname{Lip}}{({\mathcal{P}})}>\frac{1}{2}$ it is only local. However, a simple continuation argument, based on taking $f(T)\in{\mathcal{M}_+^{\rho}(I^2)}$ as a new initial condition at $t=T$ and repeating the procedure above, shows that we can extend it uniquely to the interval $[T,\,2T]$. Proceeding in this way on all subsequent intervals of the form $[kT,\,(k+1)T]$, $k=2,\,3,\,\dots$, we obtain also in this case a global-in-time solution.
Continuous dependence
---------------------
By comparing two solutions to carrying the same mass $\rho$ we can establish:
\[theo:cont\_dep\] Fix $\rho\in [0,\,1]$ and two initial data $f_{01},\,f_{02}\in{\mathcal{M}_+^{\rho}(I^2)}$. Let $f_1,\,f_2\in C([0,\,+\infty);\,{\mathcal{M}_+^{\rho}(I^2)})$ be the corresponding solutions to . Then: $${W_1\left(f_1(t),\,f_2(t)\right)}\leq e^{-\rho(1-2{\operatorname{Lip}}({\mathcal{P}}))t}{W_1\left(f_{01},\,f_{02}\right)} \quad \forall\,t\geq 0.$$
[^2] Let $\varphi\in{\operatorname{Lip}}_1(I^2)$. Using we compute: $$\begin{gathered}
\int_{I^2}\varphi({\mathbf{w}})(f_2(t,\,{\mathbf{w}})-f_1(t,\,{\mathbf{w}}))\,d{\mathbf{w}}\leq e^{-\rho t}{W_1\left(f_{01},\,f_{02}\right)} \\
+\int_0^t e^{\rho(s-t)}\iint_{I^4}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)\left(f_{2\ast}f_2^\ast-f_{1\ast}f_1^\ast\right)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast,\end{gathered}$$ where $\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)$ is the function defined in the proof of Theorem \[theo:exist\_uniqueness\]. By means of analogous calculations we discover $$\begin{aligned}
{W_1\left(f_1(t),\,f_2(t)\right)} &\leq e^{-\rho t}{W_1\left(f_{01},\,f_{02}\right)} \\
&\phantom{\leq} +2\rho{\operatorname{Lip}}({\mathcal{P}})\int_0^t e^{\rho(s-t)}{W_1\left(f_1(s),\,f_2(s)\right)}\,ds,\end{aligned}$$ whence the thesis follows by applying Gronwall’s inequality.
From Theorem \[theo:cont\_dep\] we infer that if ${\operatorname{Lip}}({\mathcal{P}})<\frac{1}{2}$ then $$\lim_{t\to+\infty}{W_1\left(f_1(t),\,f_2(t)\right)}=0.$$ That is, all solutions to approach one another asymptotically. This fact anticipates the result about the equilibria of proved in Theorem \[theo:attractiveness\] below.
Asymptotic analysis {#app:equilibria}
-------------------
In this section we study the asymptotic trends of , in particular we give sufficient conditions for the existence, uniqueness, and attractiveness of equilibria. It is worth stressing that the equilibria of the kinetic model underlie the computation of the fundamental and risk diagrams of traffic discussed in Section \[sec:simulations\].
Besides the methods presented here, we refer the reader to [@herty2010KRM] and references therein for other ways to study the trend towards equilibrium of space homogeneous kinetic traffic models and for the identification of exact or approximated steady states.
### Existence and uniqueness of equilibria
Equilibria of are time-independent distribution functions $f_\infty=f_\infty({\mathbf{w}})\in{\mathcal{M}_+^{\rho}(I^2)}$ such that $$Q^+(f_\infty,\,f_\infty)-\rho f_\infty=0.$$ If $\rho=0$ then it is clear that the unique equilibrium is the trivial distribution function $f_\infty\equiv 0$. Assuming instead $\rho>0$, from the previous equation we see that equilibria satisfy $$f_\infty=\frac{1}{\rho}Q^+(f_\infty,\,f_\infty),
\label{eq:equilibria}$$ i.e., they are fixed points of the mapping $f\mapsto\frac{1}{\rho}Q^+(f,\,f)$. The next theorem gives a sufficient condition for their existence and uniqueness, relying on the Banach contraction principle in ${\mathcal{M}_+^{\rho}(I^2)}$.
\[theo:equilibria\] Let ${\operatorname{Lip}}({\mathcal{P}})<\frac{1}{2}$. For all $\rho\in [0,\,1]$, admits a unique equilibrium distribution function $f_\infty\in{\mathcal{M}_+^{\rho}(I^2)}$.
Throughout the proof we will assume $\rho>0$.
The operator $\frac{1}{\rho}Q^+$ maps ${\mathcal{M}_+^{\rho}(I^2)}$ into itself, in fact, given $f\in{\mathcal{M}_+^{\rho}(I^2)}$, it is clear that $\frac{1}{\rho}Q^+(f,\,f)$ is a positive measure and moreover $$\int_{I^2}\frac{1}{\rho}Q^+(f,\,f)({\mathbf{w}})\,d{\mathbf{w}}=\frac{1}{\rho}\iint_{I^4}f({\mathbf{w}}_\ast)f({\mathbf{w}}^\ast)\,d{\mathbf{w}}_\ast\,d{\mathbf{w}}^\ast=\rho.$$
Moreover we claim that, under the assumptions of the theorem, it is a contraction on ${\mathcal{M}_+^{\rho}(I^2)}$. Indeed, let $f,\,g\in{\mathcal{M}_+^{\rho}(I^2)}$ and $\varphi\in{\operatorname{Lip}}_1(I^2)$, then: $$\begin{aligned}
\frac{1}{\rho}\int_{I^2}\varphi({\mathbf{w}}) & \left(Q^+(g,\,g)({\mathbf{w}})-Q^+(f,\,f)({\mathbf{w}})\right)\,d{\mathbf{w}}\\
&= \frac{1}{\rho}\int_{I^2}\left(\int_{I^2}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)g({\mathbf{w}}_\ast)\,d{\mathbf{w}}_\ast\right)(g({\mathbf{w}}^\ast)-f({\mathbf{w}}^\ast))\,d{\mathbf{w}}^\ast \\
&\phantom{=} +\frac{1}{\rho}\int_{I^2}\left(\int_{I^2}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)f({\mathbf{w}}^\ast)\,d{\mathbf{w}}^\ast\right)(g({\mathbf{w}}_\ast)-f({\mathbf{w}}_\ast))\,d{\mathbf{w}}_\ast,\end{aligned}$$ where $\psi$ is the function . From the proof of Theorem \[theo:exist\_uniqueness\], we know that both mappings ${\mathbf{w}}^\ast\mapsto\int_{I^2}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)g({\mathbf{w}}_\ast)\,d{\mathbf{w}}_\ast$ and ${\mathbf{w}}_\ast\mapsto\int_{I^2}\psi({\mathbf{w}}_\ast,\,{\mathbf{w}}^\ast)f({\mathbf{w}}^\ast)\,d{\mathbf{w}}^\ast$ are Lipschitz continuous with Lipschitz constant bounded by $\rho{\operatorname{Lip}}({\mathcal{P}})$, hence from the previous expression we deduce $$\frac{1}{\rho}\int_{I^2}\varphi({\mathbf{w}})\left(Q^+(g,\,g)({\mathbf{w}})-Q^+(f,\,f)({\mathbf{w}})\right)\,d{\mathbf{w}}\leq 2{\operatorname{Lip}}({\mathcal{P}}){W_1\left(f,\,g\right)}.$$ Taking the supremum over $\varphi$ at the left-hand side yields finally $${W_1\left(\frac{1}{\rho}Q^+(f,\,f),\,\frac{1}{\rho}Q^+(g,\,g)\right)}\leq 2{\operatorname{Lip}}({\mathcal{P}}){W_1\left(f,\,g\right)},$$ which, in view of the hypothesis ${\operatorname{Lip}}({\mathcal{P}})<\frac{1}{2}$, implies that $\frac{1}{\rho}Q^+$ is a contraction. Banach fixed-point theorem gives then the thesis.
### Attractiveness of equilibria
Under the same assumption as in Theorem \[theo:equilibria\], the equilibrium distribution function $f_\infty$ is globally attractive. This means that all solutions to converge to $f_\infty$ asymptotically in time. The precise statement of the result is as follows:
\[theo:attractiveness\] Let ${\operatorname{Lip}}({\mathcal{P}})<\frac{1}{2}$. Any solution $f\in C([0,\,+\infty);\,{\mathcal{M}_+^{\rho}(I^2)})$ of converges to $f_\infty$ in the $W_1$ metric when $t\to+\infty$.
Set $f_0({\mathbf{w}})=f(0,\,{\mathbf{w}})\in{\mathcal{M}_+^{\rho}(I^2)}$. Since $f_\infty$ is the solution to when the initial datum is $f_\infty$ itself, Theorem \[theo:cont\_dep\] implies $${W_1\left(f(t),\,f_\infty\right)}\leq e^{-\rho(1-2{\operatorname{Lip}}({\mathcal{P}}))t}{W_1\left(f_0,\,f_\infty\right)}$$ and the thesis follows.
[^1]: For the definition of mild solution to we refer the reader to in Appendix \[app:basictheo\].
[^2]: Throughout the proof, we will adopt the shorthand notations $f_\ast=f(t,\,{\mathbf{w}}_\ast)$, $f^\ast=f(t,\,{\mathbf{w}}^\ast)$.
---
abstract: 'We report on a method of light–shift engineering where an auxiliary laser is used to tune the atomic transition frequency. The technique is used to selectively load a specific region of an optical lattice. The results are explained by calculating the differential light–shift of each hyperfine state. We conclude that the remarkable spatial selectivity of light–shift engineering using an auxiliary laser provides a powerful technique to prepare ultra-cold trapped atoms for experiments on quantum gases and quantum information processing.'
author:
- 'P. F. Griffin, K. J. Weatherill, S. G. MacLeod, R. M. Potvliege, and C. S. Adams'
title: 'Spatially selective loading of an optical lattice by light–shift engineering using an auxiliary laser field'
---
Optical dipole traps and optical lattices are finding an ever increasing range of applications in experiments on Bose-Einstein condensation (BEC) [@barrett; @Weber; @Ybbec; @weiss], optical clocks [@katori], single–atom manipulation [@schl02; @mcke03], and quantum information processing (QIP) [@bloch; @meschede]. In many applications, one is interested not only in the light–shift of the ground state, which determines the trap depth, but also the relative shift of a particular excited state. One has some control over this differential shift as the ground and excited states have different resonances, and the laser can be tuned to a ‘magic’ wavelength where the ground and excited state polarizabilities are the same [@katori; @mcke03; @sterr]. However, using a ‘magic’ wavelength is not always appropriate either because one is no longer free to select the laser wavelength to minimize spontaneous emission, or because of the unavailability of suitable light sources.
In this paper, we report on a different method of light–shift engineering where a second laser field is used to control the excited state polarizability. One such method for controlling ground–state hyperfine polarizabilities is discussed in Ref. [@kaplan]. Light–shift engineering using an independent laser could have significant advantages over the use of a ‘magic’ wavelength for some applications, as it can be applied to specific regions of a trap. For example, we show how the technique can be used to selectively load a well defined region of an optical lattice. Specifically, we consider the case of loading $^{85}$Rb atoms into a deep CO$_2$ laser lattice. A quasi-electrostatic lattice based on a CO$_2$ laser (wavelength 10.6 $\mu$m) is particularly attractive for trapping cold atoms or molecules as it combines low light scattering [@thomas] with a lattice constant sufficiently large to allow single-site addressability [@sche00]. We show that by focussing an additional laser (a Nd:YAG laser with wavelength 1.064 $\mu$m) on a specific region of the lattice we can selectively load only into sites in this region. This spatially selective loading is most effective when the cooling light is blue–detuned relative to the unperturbed atomic resonance. We explain the effect by calculating the differential light-shifts between the ground and excited states in the presence of two light fields. We show that only in the light–shift engineered region where the CO$_2$ and Nd:YAG laser beams overlap is the differential shift negative, allowing efficient laser cooling. In addition, we show that for red-detuned cooling light, light–shift engineering produces a significant enhancement in the number of atoms loaded into a deep optical trap.
The experimental set-up employed to demonstrate controlled loading by light–shift engineering is shown in Fig. 1. An octagonal vacuum chamber, fitted with home-made Zinc Selenide (ZnSe) UHV viewports [@cox] to accommodate the CO$_2$ laser beams, provides a background pressure of 1.2$\times 10^{-10}$ Torr. A focussed Nd:YAG laser beam, with variable power, locally heats an alkali metal dispenser to provide a controllable source of thermal $^{85}$Rb atoms [@grif05]. A CO$_2$ laser beam, (propagating along the $z$–axis in Fig. 1) with power 45 W, is focussed to form a waist ($1/{\rm e}^2$ radius) of 70 $\mu$m at the center of the chamber. The beam is collimated and retro–reflected to form a 1D optical lattice. The intensity of the CO$_2$ laser is controlled using an acousto-optic modulator (AOM). The intensity at the center of the lattice, $I_0=2.3\times10^{6}$ Wcm$^{-2}$, gives a ground state light–shift $U_0 = -\textstyle{1\over
2}\alpha_0I_0/(\epsilon_0c) =h(-36~{\rm MHz})$, where $\alpha_0=335~a_0^3$ is the ground–state polarizability at 10.6 $\mu$m in atomic units. A Nd:YAG laser beam (propagating at $+45^\circ$ to the $y$ axis in the $xy$ plane) with power 7.8 W is focussed by a $f=150$ mm lens to overlap with the CO$_{2}$ lattice in the trapping region. The Nd:YAG laser has a circular focus with a beam waist of $30~\mu$m in the overlap region. This gives an intensity of $I_0=5.5\times10^5$ Wcm$^{-2}$ leading to a ground state light–shift $U_0 =h(-18.6~{\rm MHz})$ using our calculated value of $\alpha_0=722~a_0^3$ at 1.064 $\mu$m. The CO$_2$ and Nd:YAG laser beams are linearly polarized along the $x$ and $z$ axes, respectively.
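The quoted shifts follow directly from $U_0=-\alpha_0 I_0/(2\epsilon_0 c)$. As a quick numerical check (not part of the original work), the short Python sketch below converts the atomic-unit polarizabilities to SI and reproduces the $-36$ MHz and $-18.6$ MHz values from the intensities given in the text:

```python
import math

# Physical constants (SI)
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C    = 2.99792458e8       # speed of light, m/s
H    = 6.62607015e-34     # Planck constant, J s
A0   = 5.29177210903e-11  # Bohr radius, m

def light_shift_mhz(alpha_au, intensity_w_cm2):
    """Ground-state light-shift U0/h in MHz, for a polarizability in
    atomic units (a_0^3) and an intensity in W/cm^2."""
    alpha_si = alpha_au * 4 * math.pi * EPS0 * A0**3   # C m^2 / V
    intensity = intensity_w_cm2 * 1e4                  # W/m^2
    u0 = -0.5 * alpha_si * intensity / (EPS0 * C)      # J
    return u0 / H / 1e6                                # MHz

# CO2 lattice: alpha_0 = 335 a_0^3 at 10.6 um, I0 = 2.3e6 W/cm^2
print(round(light_shift_mhz(335, 2.3e6), 1))   # -36.1
# Nd:YAG beam: alpha_0 = 722 a_0^3 at 1.064 um, I0 = 5.5e5 W/cm^2
print(round(light_shift_mhz(722, 5.5e5), 1))   # -18.6
```

The factor $4\pi\epsilon_0 a_0^3$ converts a polarizability quoted in $a_0^3$ to SI units.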
![(a) Experimental arrangement showing the intersection of the CO$_{2}$ and Nd:YAG laser beams. Inset: Schematic indicating the relative scales of the CO$_{2}$ and Nd:YAG spot sizes in the interaction region. (b) Images and line profiles without (left) and with (right) the Nd:YAG laser. The molasses detuning (-50 MHz) is chosen to optimize the total number of atoms rather than the number in the overlap region. The viewing direction is approximately perpendicular to both the CO$_2$ and Nd:YAG laser propagation directions.[]{data-label="fig:1"}](grif1){width="8.5cm"}
Loading of a CO$_{2}$ laser lattice is carried out as follows: The CO$_{2}$ and Nd:YAG laser beams are left on throughout the loading stage. We load a magneto-optical trap (MOT), centered on the dipole trap, with $2\times 10^{7}$ $^{85}$Rb atoms in typically 3 seconds. After the magnetic field is switched off, the cooling laser beam intensities are reduced from 55 mWcm$^{-2}$ to 10 mWcm$^{-2}$ and the detuning is increased to $\Delta=-8\Gamma$, where $\Gamma=2\pi(6~{\rm MHz})$ is the natural linewidth of the transition, to create an optical molasses. After 10 ms of molasses, the atom cloud has a typical temperature of 40 $\mu$K, measured by time–of–flight. During the molasses phase the hyperfine repumping laser intensity is lowered from 6 mWcm$^{-2}$ to 200 $\mu $Wcm$^{-2}$ and then switched off completely with a shutter for the final 5 ms such that atoms are pumped into the lower hyperfine state [@adam95]. After the molasses phase, the cooling light and the Nd:YAG laser are extinguished for a few hundred milliseconds, then the CO$_2$ laser is turned off and the MOT beams (tuned to resonance) are turned back on to image the cloud. A CCD camera collects the fluorescence to give a spatial profile of the trapped atoms. A typical atom distribution viewed approximately perpendicular to the CO$_2$ and Nd:YAG beam axes is shown in Fig. 1(b). One sees that the CO$_2$ lattice loads efficiently out in the wings where the trap depth is smaller. This effect has been widely observed in experiments on far-off resonance optical dipole traps [@kupp00; @sche00; @cenn03] and arises due to the smaller differential light–shift between the ground and excited states away from the focus. We also see that the loading is greatly enhanced in the region where the Nd:YAG laser intersects the CO$_2$ laser lattice.
Remarkably, if we detune the cooling light slightly to the blue of the unperturbed atomic resonance such that neither the CO$_2$ nor the Nd:YAG laser beam alone traps any atoms, then we still observe that the region where the two beams intersect is efficiently loaded, see Fig. 2(b). This spatial selectivity provides a very clear demonstration of the power of light–shift engineering using an auxiliary laser field. In addition, it demonstrates that the enhanced loading observed in Fig. 1 and Fig. 2(a) cannot be explained by a ‘dimple’ effect [@dimple], where atoms from the wings of a trap rethermalise in the deeper overlap region [@taku03].
![A surface plot of the column density for cooling laser detunings (a) $\Delta=2\pi(-20$ MHz) and (b) $\Delta=2\pi(+2$ MHz). The on–axis density is shown on the back plane. For blue–detuning (b) only the light–shift engineered region, where the CO$_2$ and Nd:YAG laser beams overlap, is loaded.[]{data-label="fig:2"}](grif2){width="8.5cm"}
Finally, we should add that the enhanced loading observed in the overlap region cannot be explained simply by the fact that the trap is deeper in this region. To demonstrate this we have reduced the CO$_2$ laser power by a factor of four such that the depth in the combined CO$_2$ plus Nd:YAG trap is less than a CO$_2$ lattice alone at full power. Typical column densities are shown in Fig. 3. We see that the density in the combined trap is still significantly higher than for a deeper CO$_2$ lattice.
![Column density for a CO$_2$ laser lattice without the Nd:YAG laser (dashed line), and for a shallower CO$_2$ laser lattice with the Nd:YAG laser (solid line). The overall ground state light–shift in the overlap region of the shallow combined trap ($-27$ MHz) is less than the maximum light-shift for the CO$_2$ laser lattice alone ($-36$ MHz), but loading into the combined trap is still significantly more efficient. Both profiles are for a molasses detuning of $-20$ MHz.[]{data-label="fig:3"}](grif3){width="7.5cm"}
To explain the spatially selective loading for blue–detuned cooling light, we have calculated the polarizability of the $5s$ ground and $5p$ excited states as a function of wavelength. The details of the calculation will be explained elsewhere [@potv05]. Briefly, the scalar polarizability $\alpha_0$ is the average of the dipole polarizabilities $\alpha_{xx}$, $\alpha_{yy}$ and $\alpha_{zz}$ for an atom exposed to a laser field polarized, respectively, in the $x$, $y$, and $z$-directions: $\alpha_0=(\alpha_{xx}+\alpha_{yy}+\alpha_{zz})/3$. The scalar polarizability is the same for all $m$-components of the $5p$ state. In addition, there is a tensor polarizability $\alpha_{2}=\left(\alpha _{xx}-\alpha _{zz}\right)/{3}$ which lifts the degeneracy of different $m$-states. In order to obtain these quantities, we represent the interaction of the valence electron with the core by the model potential proposed by Klapisch [@Klapisch67]. The polarizabilities are calculated by the implicit summation method [@implicit]. Thus $\alpha_{xx}$ (and similarly for $\alpha_{yy}$ and $\alpha_{zz}$) is obtained as $\alpha_{xx} = - e(\langle 0 \vert x \vert 1 \rangle +\langle 0
\vert x \vert -1 \rangle)/\mathcal{F}$, where $\vert 0 \rangle$ represents the state vector of the unperturbed $5s$, or $5p_{-1,0,1}$ states, and $\vert\pm 1 \rangle$ are such that $$(E_0 \pm \hbar\omega - H_0)\vert\pm 1 \rangle = e\mathcal{F}x \vert 0 \rangle.$$ Here $H_0$ is the Hamiltonian of the field-free model atom and $E_0$ is the eigenenergy of the unperturbed state, i.e. $H_0 \vert0 \rangle = E_0 \vert0 \rangle$, and $\mathcal{F}$ is an arbitrary electric field. These equations are solved in position space by expanding the wave functions on a discrete basis of radial Sturmian functions and spherical harmonics [@Potvliege98]. In the zero-frequency limit, the resulting values of $\alpha_0[5s]$, $\alpha_0[5p]$ and $\alpha_2[5p]$ converge towards $333 a_0^3$, $854 a_0^3$, and $-151a_0^3$, respectively, in satisfactory agreement with previous experimental and theoretical work [@Safronova99]. The dynamic polarizabilities as functions of wavelength are shown in Fig. 4. We find that $\alpha_0=722a_0^3$ at 1.064 $\mu$m, which agrees well with experiment and other theoretical work [@Bonin93; @Marinescu94; @Safronova04].
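For a state that is axially symmetric about the quantization axis ($\alpha_{yy}=\alpha_{xx}$, as for the $5p$ state quantized along $z$), the scalar and tensor combinations can be inverted for the Cartesian components. A minimal sketch under that assumption, using the static $5p$ values quoted above:

```python
def scalar_tensor(a_xx, a_zz):
    """alpha_0 and alpha_2 from the Cartesian polarizabilities,
    assuming axial symmetry (a_yy == a_xx)."""
    a0 = (2 * a_xx + a_zz) / 3
    a2 = (a_xx - a_zz) / 3
    return a0, a2

def cartesian(a0, a2):
    """Inverse relations: a_xx (= a_yy) and a_zz."""
    return a0 + a2, a0 - 2 * a2

# Static 5p values quoted in the text: alpha_0 = 854, alpha_2 = -151 (a_0^3)
a_xx, a_zz = cartesian(854, -151)
print(a_xx, a_zz)                 # 703 1156
print(scalar_tensor(a_xx, a_zz))  # (854.0, -151.0)
```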
![Calculated polarizabilities of the $5s$ and $5p$ states of Rb. For the $p$ state we show the scalar and tensor polarizabilities, $\alpha _{0}=\left( \alpha _{xx} + \alpha
_{yy}+\alpha _{zz}\right)/{3}$ and $\alpha _{2}=\left(\alpha
_{xx}-\alpha _{zz}\right)/{3}$, respectively. For the $s$ state, $\alpha _0= \alpha _{xx}= \alpha _{yy}= \alpha _{zz}$.[]{data-label="fig:4"}](grif4){width="7.5cm"}
![The differential light–shifts as a function of position along an axis perpendicular to both the CO$_2$ and Nd:YAG laser propagation directions. The differential light–shift corresponds to the additional detuning of the cooling laser seen by ground–state atoms. It is equal to the light–shifts of the $m_F=-4,\ldots,+4$ magnetic sub-levels of the $5p^2P_{3/2}(F=4)$ state minus that of the ground state in $^{85}$Rb for atoms in (a) the CO$_2$ laser lattice only, and (b) in the combined CO$_2$ plus Nd:YAG trap. []{data-label="fig:5"}](grif5){width="8cm"}
For our purposes the most important result of Fig. 4 is that the polarizabilities of the $5s$ state at the CO$_2$ laser wavelength ($\lambda=10.6~\mu$m) and the Nd:YAG wavelength ($\lambda=1.064~\mu$m) have the same sign, whereas the polarizabilities of the $5p$ state have opposite signs. It follows that one can use a combination of CO$_2$ and Nd:YAG lasers to tune the differential light–shift between the $5s$ and $5p$ states through zero. To calculate the light–shift experienced by atoms in the combined CO$_2$ plus Nd:YAG trap we calculate the eigenvalues of the matrix [@ange68; @schm73] $$\begin{aligned}
\mathsf{U} & = & \mathsf{U}_0 -\frac{1}{2\epsilon_0c}
\sum_{i=1,2}
(\alpha_0^i \mathbb{1} + \alpha_2^i \mathsf{Q}^i)I_i~,~\end{aligned}$$ where $\mathsf{U}_0$ is a diagonal matrix with components corresponding to the hyperfine splitting, the index $i$ denotes the CO$_2$ and Nd:YAG lasers, and $\mathsf{Q}^i$ is a matrix with components $\langle F,m_F\vert Q_\mu\vert F',m_F'\rangle$ with $Q_\mu=[3\hat{J}^2_\mu-J(J+1)]/J(2J-1)$ and $J_\mu$ being the electronic angular momentum operator in the direction of laser field $i$. The differential light–shift between the ground state and the $5p^2P_{3/2}(F=4)$ state for the CO$_2$ laser alone is shown in Fig. 5(a). As a single laser beam splits the states according to the magnitude of $m_F$, there are five curves corresponding to $\vert m_F\vert=0,\ldots,4$. We see that all the levels are far blue-detuned (positive differential shift) at the centre of the lattice, making laser cooling ineffective unless the cooling light is detuned to the red by an amount larger than the differential light–shift. Adding the Nd:YAG laser produces the shifts shown in Fig. 5(b). The Nd:YAG laser lifts the degeneracy between the $\pm m_F$ components, although two pairs of states remain close to degenerate. More importantly, one pair of states is pulled down into the region of negative differential shift. This allows efficient laser cooling in the center of the overlap region, even when the cooling light is slightly blue–detuned relative to the unperturbed resonance frequency. Note that efficient loading for blue–detuning can only be explained if one includes the tensor polarizability term $\alpha_2$. Although $\alpha_2$ is smaller than the scalar polarizability (by a factor of 4 or 5), it dramatically alters whether states see the cooling light as red or blue detuned and therefore completely determines whether the trap is loaded or not.
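The operator $Q_\mu$ can be built explicitly from angular-momentum matrices. The sketch below is illustrative only: it works at the fine-structure level with $J=3/2$ and omits the hyperfine matrix $\mathsf{U}_0$ and the $F=4$ basis used in the actual calculation. It constructs $Q$ along an arbitrary polarization direction and shows the $|m|$-dependent splitting produced by a single linearly polarized beam:

```python
import numpy as np

def j_matrices(j):
    """Angular-momentum matrices Jx, Jy, Jz (hbar = 1) in the |j, m> basis,
    with m ordered from +j down to -j."""
    m = np.arange(j, -j - 1, -1)
    jz = np.diag(m)
    # Raising operator: <m+1|J+|m> = sqrt(j(j+1) - m(m+1))
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    jx = (jp + jp.T) / 2
    jy = (jp - jp.T) / 2j
    return jx, jy, jz

def q_operator(j, n):
    """Tensor operator Q = [3 (J.n)^2 - J(J+1)] / [J(2J-1)]
    for a unit polarization vector n."""
    jx, jy, jz = j_matrices(j)
    jn = n[0] * jx + n[1] * jy + n[2] * jz
    dim = int(round(2 * j + 1))
    return (3 * jn @ jn - j * (j + 1) * np.eye(dim)) / (j * (2 * j - 1))

# A single linearly polarized beam (here along z) splits J = 3/2 by |m|:
qz = q_operator(1.5, (0.0, 0.0, 1.0))
print(np.round(np.linalg.eigvalsh(qz), 6))  # two pairs: -1 (|m|=1/2), +1 (|m|=3/2)
```

In the full calculation, these $Q$ matrices (expressed in the $F,m_F$ basis for each laser's polarization direction) enter the matrix $\mathsf{U}$ above together with the hyperfine splitting, and the light-shifted levels follow from its eigenvalues.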
As light–shift engineering allows laser cooling to work as efficiently as in free space one might expect to load atoms at lower temperature than in conventional loading schemes. To address this issue we need to increase the sensitivity and the resolution of our imaging system to allow accurate density and temperature measurements. This will be the focus of future work.
To conclude, we have shown how light-shift engineering using an auxiliary laser field can be used to implement spatially selective loading of deep far-off resonance optical lattices. We have performed theoretical calculations of the atomic polarizabilities and have shown that the addition of a second laser field induces a splitting of the excited state which is crucial in determining the efficiency of loading into the combined trap. The technique could be applied to load a single-site in 3D CO$_2$ lattice, with the interesting prospect of BEC in the limit of high trap frequency. In addition one could adapt the technique to perform patterned loading of optical lattices [@peil03] for applications in QIP experiments.
We thank E. Riis, I. G. Hughes, and S. L. Cornish for stimulating discussions, M. J. Pritchard for experimental assistance and EPSRC for financial support.
[99]{} M. Barrett, J. Sauer, and M. S. Chapman, Phys. Rev. Lett. **87**, 010404 (2001).
T. Weber, J. Herbig, M. Mark, H.-C. Nägerl, and R. Grimm, Science **299** 232 (2003).
Y. Takasu, K. Maki, K. Komori, T. Takano, K. Honda, M. Kumakura, T. Yabuzaki, and Y. Takahashi, Phys. Rev. Lett. **91**, 040404 (2003).
T. Kinoshita, T. Wenger, and D. S. Weiss, Phys. Rev. A **71**, 011602(R) (2005).
H. Katori, T. Ido, and M. K. Gonokami, J. Phys. Soc. Jap. **68**, 2479 (1999); H. Katori, M. Takamoto, V. G. Pal’chikov, and V. D. Ovsiannikov, Phys. Rev. Lett. [**91**]{}, 173005 (2003).
N. Schlosser, G. Reymond, P. Grangier, Phys. Rev. Lett. [**89**]{}, 023005 (2002).
J. McKeever, J. R. Buck, A. D. Boozer, A. Kuzmich, H.-C. Nägerl, D. M. Stamper-Kurn, and H. J. Kimble, Phys. Rev. Lett. [**90**]{}, 133602 (2003).
O. Mandel, M. Greiner, A. Widera, T. Rom, T. W. Hänsch, and I. Bloch, Nature **425**, 937 (2003).
D. Schrader, I. Dotsenko, M. Khudaverdyan, Y. Miroshnychenko, A. Rauschenbeutel, and D. Meschede, Phys. Rev. Lett. **93**, 150501 (2004).
C. Degenhardt, H. Stoehr, U. Sterr, F. Riehle, and C. Lisdat, Phys. Rev. A. **70**, 023414 (2004).
A. Kaplan, M. F. Andersen, and N. Davidson, Phys. Rev. A **66**, 045401 (2002)
S. R. Granade, M. E. Gehm, K. M. O’Hara, and J. E. Thomas, Phys. Rev. Lett. **88**, 120405 (2002).
R. Scheunemann, F. S. Cataliotti, T. W. Hänsch, and M. Weitz, J. Opt. B **2** 645 (2000).
S. G. Cox, P. F. Griffin, C. S. Adams, D. DeMille, and E. Riis, Rev. Sci. Instrum. **74**, 3185 (2003).
P. F. Griffin, K. J. Weatherill, and C. S. Adams, Rev. Sci. Instrum. submitted.
C. S. Adams, H. J. Lee, N. Davidson, M. Kasevich, and S. Chu, Phys. Rev. Lett. [**74**]{}, 3577 (1995).
S. J. M. Kuppens, K. L. Corwin, K. W. Miller, T. E. Chupp, and C. E. Wieman, Phys. Rev. A [**62**]{}, 013406 (2000).
G. Cennini, G. Ritt, C. Geckeler and M. Weitz, Appl. Phys. B [**77**]{}, 773 (2003).
D. M. Stamper-Kurn, H.-J. Miesner, A. P. Chikkatur, S. Inouye, J. Stenger, and W. Ketterle, Phys. Rev. Lett. **81** 2194 (1998); Z.-Y. Ma, C. J. Foot, and S. L. Cornish, J. Phys. B. **37** 3187 (2004).
Y. Takasu, K. Honda, K. Komori, T. Kuwamoto, M. Kumakura, Y. Takahashi, and T. Yabuzaki, Phys. Rev. Lett. [**90**]{}, 023003 (2003).
R. M. Potvliege and C. S. Adams, in preparation.
M. Klapisch, C. R. Acad. Sci. Ser. B [**265**]{}, 914 (1967).
A. Dalgarno and J. T. Lewis, Proc. R. Soc. A [**233**]{}, 70 (1955); C. Schwartz, Ann. Phys. (NY) [**6**]{}, 156 (1959).
R. M. Potvliege, Comput. Phys. Commun. [**114**]{}, 42 (1998).
M. S. Safronova, W. R. Johnson, and A. Derevianko, Phys. Rev. A [**60**]{}, 4476 (1999); S. Magnier and M. Aubert-Frécon, J. Quant. Spectrosc. Ra. **75**, 121 (2002); C. Zhu, A. Dalgarno, S. G. Porsev, and A. Derevianko, Phys. Rev. A [**70**]{}, 032722 (2004).
M. S. Safronova, C. J. Williams, and C. W. Clark, Phys. Rev. A [**69**]{}, 022509 (2004).
K. D. Bonin and M. A. Kadar-Kallen, Phys. Rev. A [**47**]{}, 944 (1993).
M. Marinescu, H. R. Sadeghpour, and A. Dalgarno, Phys. Rev. A [**49**]{}, 5103 (1994).
J. R. P. Angel and P. G. H. Sandars, Proc. Roy. Soc. A [**305**]{}, 125 (1968).
R. W. Schmieder, Am. J. Phys. [**40**]{}, 297 (1973).
S. Peil, J. V. Porto, B. L. Tolra, J. M. Obrecht, B. E. King, M. Subbotin, S. L. Rolston, and W. D. Phillips, Phys. Rev. A [**67**]{}, 051603(R) (2003).
---
author:
- 'Shinji <span style="font-variant:small-caps;">Watanabe</span> and Masatoshi <span style="font-variant:small-caps;">Imada</span>'
title: |
On Proximity of 4/7 Solid Phase of $^3$He Adsorbed on Graphite\
-Origin of Specific-Heat Anomalies in Hole-Doped Density-Ordered Solid-
---
Layered $^3$He systems adsorbed on graphite offer unique playgrounds as an ideal prototype of strongly correlated Fermion systems in purely two dimensions. At the 4/7 density of the 2nd-layer $^3$He relative to the 1st-layer density, $^3$He atoms are solidified to form a triangular lattice, which is referred to as the 4/7 phase [@Elser]. The 4/7 phase has attracted much attention recently, since specific heat [@ishida] and susceptibility measurements down to $10~\mu$K [@masutomi] have suggested the emergence of a gapless quantum spin liquid.
Theoretically, the multiple-spin-exchange model [@Roger] has been used for the analysis [@misguich; @momoi], which, however, has left fundamental questions about the 4/7 phase: (1) How to realize the gapless ground state? (2) How to explain the large saturation field of about 10 T [@ishimoto]? To resolve these issues, we have pointed out the importance of density fluctuations between the 2nd and 3rd layers [@WI2007]. By constructing a lattice model to describe the 4/7 solid phase, we have shown that strong density fluctuations indeed stabilize a gapless quantum spin liquid, which can be regarded as that caused essentially by the same mechanism found in the Hubbard model [@KI; @MWI; @Watanabe; @Mizusaki]. Furthermore, the density fluctuation accounts for the enhanced saturation field as observed [@WI2007].
In this letter, we report our analysis of the stability of the 4/7 phase on the basis of the lattice model, taking account of density fluctuations. The model is derived from a refined configuration of the 2nd-layer $^3$He recently revealed by path-integral Monte Carlo (PIMC) simulations [@takagi]. Our mean-field (MF) results show that a density-ordered fluid emerges when holes are doped into the 4/7 phase and that the evolution of the hole pockets explains measured specific-heat anomalies. Our model gives a unified explanation of unusual temperature and doping dependences of specific heat over a range from the 4/7 solid phase to the uniform-fluid phase at low densities. We also discuss the validity of the present picture beyond the MF approximation and possible relevance to the specific-heat anomalies measured recently in the double-layered $^3$He system [@saunders].
Our analysis starts from the result of recent PIMC simulations [@takagi], which have revealed a more stable configuration for the 4/7 phase than that first proposed by Elser [@Elser; @PIMC] as shown in Fig. 1(a). Here, the open circles represent the atoms on the 1st layer, and the shaded circles, the locations on the 2nd layer when solidified. If $^3$He ($^4$He) atoms are adsorbed on the 1st layer, they form a triangular lattice with the lattice constant $a$=3.1826 (3.1020) Å at the saturation density $\rho_1$=0.114 (0.120) atom/Å$^2$ [@Elser].
![ (Color online) (a) Lattice structure of the 4/7 phase of $^3$He shown by recent PIMC simulations [@takagi]. Both 1st-layer atoms (open circles) and 2nd-layer atoms (shaded circles) form triangular lattices in the solid phase. The area enclosed by the solid line represents the unit cell for the solid of the 2nd-layer atoms (see text). The lattice constant of the 1st layer is $a$. (b) Possible stable location of the 2nd-layer atoms are shown by circles on top of a $a \times a$ parallelogram constructed from the 4 neighboring 1st-layer atoms. (c) Structure of discretized lattice for the 2nd-layer model. Lattice points are shown by circles. []{data-label="fig:lattice77"}](15829Fig1.eps){width="7.2cm"}
The location of the 2nd-layer atoms is in principle determined as stable points in continuum space under the periodic potential of the 1st layer. In the present treatment, we simplify the continuum by discretizing it with the largest possible number of lattice points kept as candidates of stable points in the solid. To illustrate discretization, we cut out from Fig. 1(a) a unit cell of the 1st layer, namely, a parallelogram whose corners are the locations on top of the neighboring 4 atoms on the 1st layer as in Fig. 1(b). Possible stable locations of $^3$He atoms on the 2nd layer are points on top of (1) the centers of the 1st-layer triangle (A$_1$), (2) the midpoint of two-neighboring 1st-layer atoms (A$_2$), and (3) a nearby point of the 1st-layer atom (A$_3$). Therefore, we employ a total of 11 points as discretized lattice points in each parallelogram, shown as circles in Fig. 1(b). Since a unit cell in Fig. 1(a) contains 7 parallelograms, it contains 77 lattice points in total, as illustrated by circles in Fig. 1(c). Now the 4/7 solid phase is regarded as a regular alignment of 4 atoms on 77 available lattice points in the unit cell shown in Fig. 1(c).
For inter-helium interaction, we employ the Lennard-Jones potential $
V_{\rm LJ}(r)=4\epsilon \left[
\left(
\sigma/r
\right)^{12}
-
\left(
\sigma/r
\right)^{6}
\right]$, where $\epsilon=10.2$ K and $\sigma=2.56$ Å [@Boer]. A more refined Aziz potential is expected to give similar results under this discretization. The interaction between $^3$He atoms on the 2nd layer is given by $H_{V}=\sum_{ij}V_{ij}n_i n_j$ ($n_i$ is the number operator of a Fermion on the $i$-th site) with $V_{ij}$ taken from the spatial dependence of $V_{\rm LJ}(r)$ on the lattice points. In the actual $^3$He system, the chemical potential of the 3rd layer is estimated to be 16 K higher than that of the 2nd layer [@Whitlock]. $^3$He atoms may fluctuate into the 3rd layer over this chemical potential difference and it is signaled by a peak of the specific heat at $T\sim 1$ K [@Sciver; @Graywall; @Fukuyama2007]. To take account of this density fluctuation, we here mimic the allowed occupation on the 3rd layer by introducing a simple finite cutoff $V_{\rm cutoff}$ for $V_{ij}$ within the same form of Hamiltonian: When $V_{\rm LJ}(r_{ij})$ for $r_{ij}\equiv |{\bf r}_i-{\bf r}_j|$ exceeds $V_{\rm cutoff}$, we take $V_{ij}=V_{\rm cutoff}$, otherwise $V_{ij}=V_{\rm LJ}(r_{ij})$. This allows us to take account of the qualitative but essential part of the possible occupation on the 3rd layer by atoms overcoming $V_{\rm cutoff}$.
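The capped interaction $V_{ij}$ is easy to write down explicitly; a short sketch (Python for illustration, parameters from the text) also verifies the standard Lennard-Jones landmarks, $V_{\rm LJ}(\sigma)=0$ and $V_{\rm LJ}(2^{1/6}\sigma)=-\epsilon$:

```python
def v_lj(r, eps=10.2, sigma=2.56):
    """Lennard-Jones potential in K; r in Angstrom (eps, sigma from the text)."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

def v_ij(r, v_cutoff):
    """Capped interaction used in H_V: V_LJ truncated at V_cutoff, which
    mimics the escape of close-approaching atoms into the 3rd layer."""
    return min(v_lj(r), v_cutoff)

r_min = 2 ** (1 / 6) * 2.56          # location of the potential minimum
print(round(v_lj(r_min), 6))         # -10.2, i.e. -eps
print(v_lj(2.56))                    # 0.0 at r = sigma
print(v_ij(2.0, 60.0))               # hard-core region capped at V_cutoff = 60 K
```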
We consider the spinless-Fermion model on the lattice $H=H_{\rm K}+H_{\rm V}$ where the kinetic energy is given by $H_{\rm K}=-\sum_{\langle ij\rangle}(t_{ij}c_{i}^{\dagger}c_j
+{\rm H.C.})$. We ignore the effects of corrugation potential from the 1st layer for the moment and discuss it later. By using the unit-cell index $s$ and the site index $l$ in the unit cell, we have ${\bf r}_i={\bf r}_s+{\bf r}_l$. After the Fourier transform, $c_{i}=c_{s,l}=\sum_{\bf k}c_{{\bf k},l}e^{{\rm i}{\bf k}\cdot{\bf r}_s}/\sqrt{N_{\rm u}}$, the MF approximation with the diagonal order parameter $\langle n_{{\bf k},l}\rangle$ leads to $$\begin{aligned}
H_{\rm V}\sim H_{\rm V}^{\rm MF}
\mbox{\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad}
\nonumber \\
=
\frac{1}{N_{\rm u}}
\sum_{l,m=1}^{77}
\sum_{s'}V^{lm}(s')
\sum_{{\bf k}, {\bf p}}
\left[
\langle n_{{\bf k},l} \rangle n_{{\bf p},m}
%\right.
%\nonumber
%\\
%& &
-
%\left.
\frac{1}{2}
\langle n_{{\bf k},l} \rangle \langle n_{{\bf p},m}\rangle
\right],
\nonumber\end{aligned}$$ where the inter-atom interaction is expressed as $V_{ij}=V_{st}^{lm}=V^{lm}(s')$ with ${\bf r}_{s'}={\bf r}_{s}-{\bf r}_{t}$. Then, we have the MF Hamiltonian $H_{\rm MF}=H_{\rm K}+H_{\rm V}^{\rm MF}$. By diagonalizing the $77 \times 77$ Hamiltonian matrix for each ${\bf k}$, we obtain the energy bands $
H_{\rm MF}=\sum_{\bf k}\sum_{l=1}^{77}
E_{l}({\bf k})c_{{\bf k},l}^{\dagger}c_{{\bf k},l}.
$
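The band-structure step, building a Bloch Hamiltonian for each ${\bf k}$ and diagonalizing it, can be illustrated on a toy two-site unit cell: a $2\times2$ stand-in for the $77\times77$ matrix, with a staggered on-site potential $\pm\delta$ playing the role of the diagonal mean-field order parameter (all names and parameters here are illustrative, not from the original work):

```python
import numpy as np

def bands(n_k=64, t=1.0, delta=0.5):
    """Bloch bands of a toy 1D chain with a two-site unit cell and a
    staggered mean-field potential +/-delta."""
    ks = np.linspace(-np.pi, np.pi, n_k, endpoint=False)
    energies = []
    for k in ks:
        hop = -t * (1 + np.exp(-1j * k))        # intra-cell + inter-cell hopping
        h_k = np.array([[delta, hop],
                        [np.conj(hop), -delta]])
        energies.append(np.linalg.eigvalsh(h_k))  # ascending eigenvalues
    return ks, np.array(energies)

ks, e = bands()
# Analogue of the "charge gap": minimum of the upper band minus
# maximum of the lower band
charge_gap = e[:, 1].min() - e[:, 0].max()
print(round(charge_gap, 3))   # 1.0, i.e. 2*delta
```

The actual calculation replaces the $2\times2$ matrix by the $77\times77$ matrix of $H_{\rm MF}$ and iterates until the order parameters $\langle n_{{\bf k},l}\rangle$ are self-consistent.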
For the kinetic energy, several choices of $t_{ij}$ are examined. As noted in ref. , if $t_{0}$ is determined so as to reproduce the total kinetic energy $E_{\rm K}^{\rm PIMC}$, the main result below, measured in the unit of K, is quite insensitive to the choice of $t_{ij}$. Hence, we here show the results for $t_{ij}=t_{0}$ for the $ij$ pairs up to the shortest-19th $r_{ij}$, i.e., for $|r_{ij}|\le 2a$. The interaction $V_{ij}$ is taken for $r_{ij}\le 2a$, since the longer $r_{ij}$ part is ineffective [@WI2007]. The recent PIMC simulation estimates the kinetic energy of the 2nd-layer $^3$He in the 4/7 phase as $E_{\rm K}^{\rm PIMC}=14$ K (17 K) for the $^4$He ($^3$He) 1st layer [@takagi]. Thus, we evaluate $t_0$ by imposing the condition, $
\sum_{\langle ij \rangle}t_{0}\langle c_{i}^{\dagger}c_{j}+{\rm H.C.}\rangle
/(4N_{\rm u})=E_{\rm K}^{\rm PIMC}
$ as $t_0=413$ mK (502 mK) for the $^4$He ($^3$He) 1st layer.
By solving the MF equations for $H_{\rm MF}$, we obtain the solution with the $\sqrt{7}\times \sqrt{7}$ commensurate structure shown in Fig. \[fig:lattice77\](c), with the “charge gap” opening as shown by filled squares (diamonds) in Fig. \[fig:cgp\](c) when $^4$He ($^3$He) is adsorbed on the 1st layer. Here, the “charge gap” is defined by $\Delta_{\rm c}\equiv E_5^{\rm min}({\bf k})-E_4^{\rm max}({\bf k})$, where $E_l^{\beta}({\bf k})$ denotes the minimum or maximum energy of the $l$-th band from the lowest. We show here the well-converged results for large $\Delta_{\rm c}$; the small-$\Delta_{\rm c}$ region will be discussed later for a detailed comparison with experiments. We now discuss the effects of the corrugation potential. PIMC [@PIMC; @takagi] suggests that the 1st-layer atoms produce a corrugation potential on the 2nd layer even within the discretized lattice points, with $\Delta E_1=-3.0$ K and $\Delta E_2=-1.5$ K relative to $\Delta E_3$ in the notation of Fig. \[fig:cgp\](b). This effect merely shifts the $\Delta_{\rm c}$-$V_{\rm cutoff}$ line toward a larger $V_{\rm cutoff}$, as shown by open triangles (circles) for the $^4$He ($^3$He) 1st layer in Fig. \[fig:cgp\](c). Hence, we show our results below for $\Delta E_1=\Delta E_2=\Delta E_3=0$ and for $^4$He adsorbed on the 1st layer as a representative case [@memo_dE]. We note that Fig. \[fig:cgp\](c) indicates that the 4/7 phase is more stable when $^4$He is adsorbed on the 1st layer than when $^3$He is.
![ (Color online) (a) Parallelogram formed by 4-neighbouring 1st-layer atoms. (b) The sites for 2nd-layer atoms in the parallelogram under corrugation potentials $\Delta E_1$ (solid circle), $\Delta E_2$ (shaded circle) and $\Delta E_3$ (open circle). (c) $V_{\rm cutoff}$ dependence of the “charge gap” $\Delta_{\rm c}$ for $\Delta E_1$=$\Delta E_2$=$\Delta E_3$=0 (filled square) and for $\Delta E_1=-3.0$ K, $\Delta E_2=-1.5$ K and $\Delta E_3=0$ K (open triangle) in the $^3$He/$^4$He/Gr system, and for $\Delta E_1$=$\Delta E_2$=$\Delta E_3$=0 (filled diamond) and for $\Delta E_1=-3.0$ K, $\Delta E_2=-1.5$ K and $\Delta E_3=0$ K (open circle) in the $^3$He/$^3$He/Gr system. The bandwidth of the 4th band $W_4$ is illustrated for $\Delta E_1$=$\Delta E_2$=$\Delta E_3$=0 (filled circle) in the $^3$He/$^4$He/Gr system. Shaded lines at $V_{\rm cutoff}=22.5$ K and 60 K are guides for the eyes (see text). []{data-label="fig:cgp"}](15829Fig2.eps){width="7.5cm"}
The energy band $E_{4}({\bf k})$ for $V_{\rm cutoff}=60$ K at $n_{0}=1$ with $n\equiv \sum_{l=1}^{77}\sum_{\bf k}n_{{\bf k},l}/(77 N_{\rm u})=4n_{0}/77$ and its contour plot in the folded Brillouin zone [@memoBZ] are shown in Figs. \[fig:FermiSF\](a) and \[fig:FermiSF\](b), respectively. When holes are doped into the 4/7 phase, in our MF results, the Fermi surface appears at the 4th band as hole pockets for $n_{0}=0.99$ (Fig. \[fig:FermiSF\](c)) and $n_{0}=0.97$ (Fig. \[fig:FermiSF\](d)) with the density order retained.
![ (Color) (a) The 4th energy band in the folded Brillouin zone for $-\pi/\bar{a}\le k_x \le \pi/\bar{a}$ and $-\pi/\bar{a}\le k_y \le \pi/\bar{a}$ with $\bar{a}=2\sqrt{7}a$ [@memoBZ] for $V_{\rm cutoff}=60$ K at $n_0=1$. Contour plot of the 4th band for (b) $n_0=1$, (c) $n_0=0.99$ and (d) $n_0=0.97$. Hole pockets are represented by white regions in (c) and (d). []{data-label="fig:FermiSF"}](15829Fig3.eps){width="5.5cm"}
This evolution of hole pockets is reflected in thermodynamic quantities: As holes are doped into the 4/7 solid, a remarkable peak in the specific heat $C(T)$ develops at temperatures lower than the density-order transition temperature $T_{\rm c}$, as shown in Fig. \[fig:CTST\](a). At the same time, $C(T)$ at temperatures right below $T_{\rm c}$ decreases. The temperature $T^{*}$ at which $C(T)$ has the highest peak for $T<T_{\rm c}$ increases as $n_0$ decreases from 1, as indicated by arrows in Fig. \[fig:CTST\](a). This tendency was actually observed in the past [@Sciver] and recent [@Fukuyama2007] measurements: The $\lambda$-like anomaly at $T=T_{\rm c}\sim 1$ K simultaneously with the hump at $T\sim 40$ mK in $C(T)$ observed for $n_{0}=0.96$ [@Sciver] and for $n_{0}=0.971$ [@Fukuyama2007] is an indication of the coexisting density order and fluid. Since our model $H$ is the spinless-Fermion model, the entropy per site $S(T)$ at the high-temperature limit is given by $k_{\rm B}$ln2. When $T$ decreases, $S(T)$ sharply drops at the density order $T_{\rm c}$ and the remaining entropy is released at $T\sim T^{*}<T_{\rm c}$ as shown in Fig. \[fig:CTST\](b). In experiments, sharp decreases in $S(T)$ at much lower temperatures, i.e., $T\sim 0.2$ and $1$ mK, are observed for $0.9\lsim n_{0} \lsim 1.02$ [@ishida; @Fukuyama2007], which result in a double peak in $C(T)$. This spin entropy is ignored in the present model. These sharp drops correspond to the release of the spin entropy at the energy scale of the spin exchange interaction $J$ [@WKB]. Under hole doping, the double peak at $T\sim 0.2$ and 1 mK is suppressed, whereas the hump $C(T^{*})$ [@Fukuyama2007] at 40 mK grows. This is naturally understood by the suppression of the density order simultaneously with an increase in the fluid contributions induced by doping. 
Actually, a double-peak structure in $C(T)$ around $T\sim J$ has been shown by our exact-diagonalization calculations [@WI2007] for the density-ordered solid phase in a minimal lattice model, which was introduced to mimic the 4/7 solid.
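The high-temperature limit $S\to k_{\rm B}\ln 2$ quoted above, and the release of the remaining entropy below the band scale, follow from the standard free-fermion entropy formula. The following is an illustrative sketch only (not the mean-field calculation of this paper), evaluated for a hypothetical flat band at half filling:

```python
import numpy as np

def fermi(e, mu, T):
    """Fermi-Dirac occupation."""
    return 1.0 / (np.exp((e - mu) / T) + 1.0)

def entropy_per_site(levels, mu, T):
    """Free-fermion entropy per site (units of k_B), one level per site:
    S/N = -<f ln f + (1 - f) ln(1 - f)>."""
    f = np.clip(fermi(levels, mu, T), 1e-12, 1.0 - 1e-12)
    return -np.mean(f * np.log(f) + (1.0 - f) * np.log(1.0 - f))

# hypothetical flat band of unit width at half filling (mu at band center)
levels = np.linspace(-0.5, 0.5, 1001)
S_high = entropy_per_site(levels, mu=0.0, T=50.0)    # T >> bandwidth: S -> ln 2
S_low = entropy_per_site(levels, mu=0.0, T=0.005)    # T << bandwidth: S nearly released
```

For the spinless case the high-temperature value approaches $\ln 2$ per site, as stated in the text, and almost all of it is released once $T$ falls below the effective bandwidth.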
![ (Color) (a) Specific heats for $n_0=1.0$ (light-blue diamond), 0.99 (green circle), 0.98 (blue open triangle), and 0.97 (red square) for $V_{\rm cutoff}=60$ K. $W_4^{\rm top}-\mu$ for $n_0=0.99$ (green dotted line), 0.98 (blue dashed line), and 0.97 (red line). (b) Entropy per site for $n_0=1.0$ (light blue), 0.99 (green), 0.98 (blue), and 0.97 (red) for $V_{\rm cutoff}=60$ K. Arrows indicate the temperatures $T^{*}$ at which $C(T)$ has the highest peak for $T<T_{\rm c}$. []{data-label="fig:CTST"}](15829Fig4.eps){width="8.7cm"}
To further clarify the origin of $T^{*}$, we calculated the density of states for $V_{\rm cutoff}=60$ K at $n_{0}=1$, as in Fig. \[fig:DOS\_Tstar\](a). Since the energy gap opens in the 4/7 phase, $\Delta_{\rm c}\sim 1$ K, the hump structure in $C(T)$ at $T\sim T^{*}\ll\Delta_{\rm c}$ should arise from the 4th band. As holes are doped into the 4/7 phase, the chemical potential shifts to lower energies inside the 4th band, as shown by vertical lines in the inset of Fig. \[fig:DOS\_Tstar\](a). In Fig. \[fig:CTST\](a), we plot the energy difference between the top of the 4th band, $W_{4}^{\rm top}\equiv E_{4}^{\rm max}({\bf k})$, and the chemical potential $\mu$, $W_{4}^{\rm top}-\mu$, by vertical lines. We see that $W_{4}^{\rm top}-\mu$ is located at the central position of the hump of $C(T)$ for each $n_{0}$. This confirms that the characteristic energy of the fluid in the density-ordered-fluid phase expressed by $W_{4}^{\rm top}-\mu$ corresponds to $T^{*}$. The filling dependences of $T^{*}$ (open square) and $W_{4}^{\rm top}-\mu$ (open diamond) are shown in Fig. \[fig:DOS\_Tstar\](b).
To make a detailed comparison with experiments, we extrapolate the “charge gap" $\Delta_c$ by a least-squares fit of the data for $35~{\rm K}\le V_{\rm cutoff}\le 45$ K, assuming the form $\sum_{n=0}^{2}a_{n}V_{\rm cutoff}^{n}$ [@notegap]; the result is shown by the dashed line in Fig. \[fig:cgp\](c). We also extrapolate the width of the 4th band, $W_{4}\equiv E_{4}^{\rm max}({\bf k})-E_{4}^{\rm min}({\bf k})$, by a least-squares fit of the data for $35~{\rm K}\le V_{\rm cutoff}\le 80$ K (filled circles in Fig. \[fig:cgp\](c)), assuming the form $\sum_{n=0}^{3}b_{n}V_{\rm cutoff}^{n}$; this result is represented by the dashed line in Fig. \[fig:cgp\](c). Since the “charge gap" of the 4/7 phase is expected to be $\Delta_{\rm c}\sim 1$ K [@Sciver; @WI2007], the corresponding $V_{\rm cutoff}$ is estimated to be 22.5 K, which is not inconsistent with the chemical-potential difference between the 2nd and 3rd layers, 16 K [@Whitlock] (the gray line in Fig. \[fig:cgp\](c)). Then, $W_4$ at $V_{\rm cutoff}=22.5$ K is evaluated to be 174 mK. Since $W_{4}(V_{\rm cutoff}=22.5~{\rm K})/W_{4}(V_{\rm cutoff}=60~{\rm K})\sim 0.43$, the density of states of the 4th band at $V_{\rm cutoff}=22.5$ K is inferred to be enhanced by a factor of $1/0.43$ relative to that at $V_{\rm cutoff}=60$ K, as shown in the inset of Fig. \[fig:DOS\_Tstar\](a). Accordingly, $W_{4}^{\rm top}-\mu$ and $T^{*}$ at $V_{\rm cutoff}=22.5$ K are estimated to be $0.43$ times those at $V_{\rm cutoff}=60$ K, which are shown by filled diamonds and filled squares, respectively, in Fig. \[fig:DOS\_Tstar\](b). The slope of the resultant $T^{*}$ is evaluated to be $T^{*}\sim 400\delta$ mK with $\delta \equiv 1-n_0$, in good agreement with the experimental observation $T^{*}\sim 430\delta$ mK [@Fukuyama2007]. This analysis shows that $T^{*}$ may be regarded as the effective bandwidth acquired by the holes through quantum-mechanical zero-point motion in the solid, which substantiates the hypothesized idea of the [*zero-point vacancy*]{} [@ZPV; @Fukuyama2007].
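The extrapolation procedure is a plain polynomial least-squares fit followed by a root search. The sketch below uses synthetic stand-in data (hypothetical values, not the gaps computed in this work), constructed so that the fitted curve reaches $\sim 1$ K at $V_{\rm cutoff}=22.5$ K:

```python
import numpy as np

# synthetic (V_cutoff, Delta_c) samples in kelvin: hypothetical stand-ins
# for the computed gaps, chosen to follow a smooth quadratic trend
V = np.array([35.0, 37.5, 40.0, 42.5, 45.0])
gap = 0.002 * V**2 + 0.01 * V - 0.2375

# least-squares fit of the form a0 + a1*V + a2*V^2, as in the text
coeffs = np.polyfit(V, gap, deg=2)    # highest degree first
fit = np.poly1d(coeffs)

# extrapolate downward to find where the gap reaches ~1 K
V_grid = np.linspace(15.0, 35.0, 2001)
V_star = V_grid[np.argmin(np.abs(fit(V_grid) - 1.0))]
```

With the stand-in data the extrapolated crossing lies at $V_{\rm cutoff}\approx 22.5$ K; with the actual computed gaps the same two lines of `polyfit`/`poly1d` reproduce the dashed curves of Fig. \[fig:cgp\](c).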
![ (Color) (a) Density of states for the 1st (blue), 2nd (pink), 3rd (green), 4th (red), and 5th (light brown) bands for $V_{\rm cutoff}=60$ K at $n_0=1$. The inset shows the enlargement of the 4th band with $W_4^{\rm top}$ (gray line) and the chemical potential $\mu$ for $n_{0}=0.99$ (dotted line), 0.98 (dashed line), and 0.97 (solid line). (b) Filling $n_0$ dependence of $T^{*}$ (square) and $W_4^{\rm top}-\mu$ (diamond) for $V_{\rm cutoff}=60$ K (open symbols with dashed lines) and $V_{\rm cutoff}=22.5$ K (filled symbols with solid lines). []{data-label="fig:DOS_Tstar"}](15829Fig5.eps){width="7.5cm"}
We note that the filling dependence of the entropy $S(T)$ for $T^{*}<T\ll T_{\rm c}$ (for example, $S(T=100~{\rm mK})$, not shown) shows nearly the same $n_0$ dependence as $S=-n_0{\rm ln}n_0-(1-n_0){\rm ln}(1-n_0)$, as experimentally observed [@Fukuyama2007]. This implies that the entropy for distributing $N\delta$ holes in the $N$-site system can be accounted for by the hole-pockets contribution in the density-ordered-fluid phase.
We note that a broad shoulder structure of $C(T)$ in the uniform fluid phase for $n_0=0.3$ (not shown) evolves into the $C(T^{*})$ hump as $n_0$ increases, which finally shrinks toward $n_0=1$ as in Fig. \[fig:CTST\](a). A similar $C(T)$ evolution was observed in the layered $^3$He system on two $^4$He layers adsorbed on graphite by Neumann et al. [@saunders] when the $^3$He density $n$ increases and approaches $n_{\rm c}=9.9$ nm$^{-2}$. Since $C(T)$ has common features in both systems [@saunders; @Fukuyama2007], it is natural to interpret the intervening phase observed for $n_{\rm I}<n<n_{\rm c}$ with $n_{\rm I}=9.2~{\rm nm}^{-2}$ in ref. [@saunders] as a density-ordered fluid stabilized near the density-ordered solid at $n=n_{\rm c}$. This offers a clear alternative interpretation of the sharp transition or crossover around $n=9.2$ nm$^{-2}$ reported in ref. [@saunders].
The density-ordered fluid whose ground-state energy is lower than that of the uniform fluid, $E_{\rm DOF}(n_0)<E_{\rm fluid}^{\rm uni}(n_0)$, is confirmed at least up to 7% hole doping for $V_{\rm cutoff}=60$ K; the poor convergence of the MF solution upon further doping prevents us from determining an accurate phase boundary. Here, we also stress an alternative possibility, namely, the emergence of a uniform fluid phase with small Fermi surfaces when holes are doped into the density-ordered phase, even when fluctuations beyond the MF theory destroy the density order in the absence of spin order. The Fermi surface is defined by poles of the single-particle Green function $G({\bf k},\omega)$ at frequency $\omega=0$ [@memo_w]. While Re$G$ changes its sign across a pole through $\pm\infty$, Re$G$ can also change sign across a [*zero*]{}, defined by $G=0$. In the solid phase, only the zero surface exists in $\bf k$ space at $\omega =0$, while only poles exist at $\omega=0$ for heavily doped uniform fluids. When holes are slightly doped, the reconstruction of $G({\bf k},\omega)$ produces interference between the zeros and the poles, which determines the resultant Fermi surface on which zeros and poles coexist. Since this interference has a significant $\bf k$ dependence at $\omega =0$, small Fermi pockets may appear after truncation of the large Fermi surface [@sakai], even when the density order is destroyed upon hole doping in the absence of spin order. It is remarkable that not only doped Mott insulators but also doped density-ordered phases show such differentiation in ${\bf k}$ space, as observed in the 2D $^3$He system.
This differentiation in ${\bf k}$ space can also be the origin of the peak (small cusp) at $T\sim J$ and the hump (peak) at $T\sim T^{*}$ in $C(T)$ for $n_0<1$ [@Fukuyama2007] ($n<n_{\rm c}$ [@saunders]), since the former and latter are attributed to the contributions from the truncated and remaining parts of the Fermi surface in $\bf k$ space, respectively. A multiscale measurement of $C(T)$ ranging from 1 mK to 1 K is desired to resolve this issue in layered $^3$He systems.
Acknowledgment {#acknowledgment .unnumbered}
==============
We thank Hiroshi Fukuyama for supplying us with experimental data, and T. Takagi for showing us his PIMC data prior to publication and for enlightening discussions on their analyses. This work was supported by Grants-in-Aid for Scientific Research on Priority Areas under grant numbers 17071003, 16076212, and 18740191 from MEXT, Japan.
[99]{} V. Elser: Phys. Rev. Lett. [**62**]{} (1989) 2405. K. Ishida, M. Morishita, K. Yawata, and H. Fukuyama: Phys. Rev. Lett. [**79**]{} (1997) 3451. R. Masutomi, Y. Karaki, and H. Ishimoto: Phys. Rev. Lett. [**92**]{} (2004) 025301. M. Roger, C. Bauerle, Yu. M. Munkov, A.-S. Chen, and H. Godfrin: Phys. Rev. Lett. [**80**]{} (1998) 1308. G. Misguich, B. Bernu, C. Lhuillier, and C. Waldmann: Phys. Rev. Lett. [**81**]{} (1998) 1098. T. Momoi, H. Sakamoto, and K. Kubo: Phys. Rev. B [**59**]{} (1999) 9491. H. Nema, A. Yamaguchi, T. Hayakawa, and H. Ishimoto: unpublished. S. Watanabe and M. Imada: J. Phys. Soc. Jpn. [**76**]{} (2007) 113603. T. Kashima and M. Imada: J. Phys. Soc. Jpn. [**70**]{} (2001) 3052. H. Morita, S. Watanabe, and M. Imada: J. Phys. Soc. Jpn. [**71**]{} (2002) 2109. S. Watanabe: J. Phys. Soc. Jpn. [**72**]{} (2003) 2042. T. Mizusaki and M. Imada: Phys. Rev. B [**74**]{} (2006) 014421. T. Takagi: private communications. M. Neumann, J. Ny[é]{}ki, B. Cowan, and J. Saunders: Science [**317**]{} (2007) 1356. F. F. Abraham, J. Q. Broughton, P. W. Leung, and V. Elser: Europhys. Lett. [**12**]{} (1990) 107. J. de Boer and A. Michels: Physica [**5**]{} (1938) 945. P. A. Whitlock, G. V. Chester, and B. Krishnamachari: Phys. Rev. B [**58**]{} (1998) 8704. S. W. Van Sciver and O. E. Vilches: Phys. Rev. B [**18**]{} (1978) 285. D. S. Greywall: Phys. Rev. B [**41**]{} (1990) 1842. Y. Matsumoto, D. Tsuji, S. Murakawa, C. B[ä]{}uerle, H. Kambara, and H. Fukuyama: unpublished. Similar temperature and doping dependences of specific heat appear for $\Delta E_1=-3.0$ K, $\Delta E_2=-1.5$ K and $\Delta E_3=0.0$ K, since nearly the same structure of the DOS around the top of the 4th band is realized in the MF framework, which is responsible for the low-temperature part of specific heat. 
For simplicity of analysis, primitive translation vectors ${\bf R}_1=(5a,-\sqrt{3}a)$ and ${\bf R}_2=(4a,2\sqrt{3}a)$ are transformed as ${\bf R}_i^{(1)}=\hat{R}{\bf R}_i$, ${\bf R}_i^{(2)}=\hat{D}{\bf R}_i^{(1)}$ and ${\bf R}_i^{(3)}=\hat{S}{\bf R}_i^{(2)}$, where $\hat{R}$, $\hat{D}$ and $\hat{S}$ are given by $$\left[ \begin{array}{cc}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{array}
\right],
\left[ \begin{array}{cc}
1 & -R_{x2}^{(1)}/R_{y2}^{(1)} \\
0 & 1
\end{array}
\right],
\left[ \begin{array}{cc}
1 & 0 \\
0 & R_{x1}^{(2)}/R_{y2}^{(2)}
\end{array}
\right],$$ respectively, with $\theta=-\arctan(R_{y1}/R_{x1})$. This ${\bf R}_i^{(3)}$ configuration makes the folded Brillouin zone a $2\pi/\bar{a}$ square with $\bar{a}=2\sqrt{7}a$. M. Roger: Phys. Rev. B [**30**]{} (1984) 6432. Since the “charge gap" shows a linear $V_{\rm cutoff}$ dependence for large $V_{\rm cutoff}$, we used small-$V_{\rm cutoff}$ data for the least-squares fit. A. F. Andreev and I. M. Lifshitz: Sov. Phys. JETP [**29**]{} (1969) 1107. $\omega=0$ corresponds to $E=\mu$ in the notation of the inset of Fig. \[fig:DOS\_Tstar\](a) for the hole-doped case. S. Sakai, Y. Motome, and M. Imada: unpublished (arXiv: 0809.0950v1).
---
author:
- 'Peter Diener , Nina Jansen , Alexei Khokhlov , Igor Novikov'
title: 'Adaptive mesh refinement approach to construction of initial data for black hole collisions.'
---
**<span style="font-variant:small-caps;">Introduction</span>** {#sec:1}
==============================================================
This paper deals with the construction of initial data for black hole collisions on a high resolution Cartesian adaptive mesh. The problem of black hole collisions is an important problem of the dynamics of spacetime, and has applications to future observations of gravitational waves by gravitational observatories on Earth and in space.
The problem of black hole collisions is highly nonlinear and can only be solved numerically. A solution must be obtained within a large computational domain in order to follow the outgoing gravitational waves far enough from the source. At the same time, very high resolution is required near the black holes to describe the nonlinear dynamics of spacetime. Integration of the collision problem on a three-dimensional uniform mesh requires enormous computational resources, and this remains one of the major obstacles to obtaining an accurate solution.
Adaptive mesh refinement (AMR) can be used to overcome these problems by introducing high resolution only where and when it is required. AMR is widely used in many areas of computational physics and engineering. It has been applied in a more limited way in general relativity [@bib:1]. There are several types of AMR. In grid-based AMR, a hierarchy of grids is created, with finer grids overlaid on coarser grids where higher resolution is required [@bib:2]. An unstructured mesh approach uses meshes consisting of cells of arbitrary shapes and various sizes [@bib:3]. A cell-based approach to AMR uses rectangular meshes that are refined at the level of individual cells. This approach combines the high accuracy of a regular mesh with the flexibility of unstructured AMR [@bib:4]. The newly introduced fully threaded tree (FTT) structure, which we use here, leads to an efficient, massively parallel implementation of cell-based AMR [@bib:5].
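The essence of cell-based refinement can be sketched minimally as follows. This is an illustration only; it omits the inverted thread pointers and parallel tree operations that distinguish the actual FTT structure [@bib:5]:

```python
class Cell:
    """Minimal cell of a cell-based octree AMR mesh (illustration only;
    the FTT of the paper adds inverted thread pointers for neighbor
    access and parallel tree operations)."""
    def __init__(self, center, size, level=0):
        self.center, self.size, self.level = center, size, level
        self.children = None                      # None means leaf

    def refine(self):
        """Split this leaf into 8 equal children (one refinement level)."""
        h = self.size / 4.0
        self.children = [
            Cell((self.center[0] + sx * h,
                  self.center[1] + sy * h,
                  self.center[2] + sz * h), self.size / 2.0, self.level + 1)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

    def leaves(self):
        if self.children is None:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

def refine_where(root, needs_refinement, max_level):
    """Refine, by one level, every leaf whose center satisfies the criterion."""
    for leaf in root.leaves():
        if leaf.level < max_level and needs_refinement(leaf.center):
            leaf.refine()
```

Refinement at the level of individual cells, as above, is what lets the mesh follow steep gradients near the throats without refining the whole domain.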
The first step in solving the black hole collision problem is to construct the initial data. The widely used conformal-imaging approach was proposed in [@bib:6],[@bib:7],[@bib:8] and developed in [@bib:9],[@bib:10]. Another approach to the construction of initial data was recently proposed in [@bib:11]. Approaches for constructing initial data for certain specific cases of black hole collisions were proposed in [@bib:12],[@bib:13],[@bib:14]. The conformal-imaging approach [@bib:6],[@bib:7],[@bib:8],[@bib:9],[@bib:10] consists of constructing the extrinsic curvature (momentum constraint equations) using an imaging technique and then solving a nonlinear elliptic equation for the conformal factor (energy constraint) with an appropriate mirror-image boundary condition. This approach is adopted in the present paper.
A numerical technique for obtaining initial data for black hole collisions on a uniform Cartesian grid using the conformal-imaging approach is described in [@bib:10]. Two major problems with this approach mentioned in [@bib:10] are the low resolution of a uniform grid near black holes, and the low-order accuracy and programming complexity of the inner boundary conditions at the black hole throats. In [@bib:10], first-order accurate boundary conditions were implemented. The goal of this paper is to construct initial data for black hole collisions on a high-resolution, Cartesian FTT adaptive mesh. In the process of realizing this goal, we found that the accuracy of the solution critically depends on the accuracy of the numerical implementation of the inner boundary condition. We developed a simple second-order accurate algorithm for the boundary conditions to deal with this difficulty.
The paper is organized as follows. Section \[sec:2\] presents the formulation of the problem and the equations solved. Section \[sec:3\] describes the FTT technique, the finite-difference discretization of the problem, and the numerical solution techniques. Section \[sec:4\] presents the results of solutions for various two-black-hole configurations and compares them with existing solutions.
**<span style="font-variant:small-caps;">Formulation of the problem</span>** {#sec:2}
=============================================================================
The ADM or 3+1 formulation of the equations of general relativity works with the metric $\gamma^{ph}_{ij}$ and extrinsic curvature $K^{ph}_{ij}$ of three-dimensional spacelike hypersurfaces embedded in the four-dimensional space-time, where $i,j = 1,2,3$, and the superscript $ph$ denotes the physical space. On the initial hypersurface, $\gamma^{ph}_{ij}$ and $K^{ph}_{ij}$ must satisfy the constraint equations [@bib:6]. The conformal-imaging approach assumes that the metric is conformally flat, $$\label{eq:1}
\gamma^{ph}_{ij} = \phi^4 \gamma_{ij}\quad,$$ where $\gamma_{ij}$ is the metric of a background flat space. This conformal transformation induces the corresponding transformation of the extrinsic curvature $$\label{eq:2}
K^{ph}_{ij} = \phi^{-2} K_{ij} \quad.$$ With the additional assumption of $$\label{eq:3}
{\rm tr}\,K = 0 \quad,$$ the energy and momentum constraints are $$\label{eq:4}
\nabla^2 \phi + {1\over 8} \phi^{-7} K_{ij} K^{ij} = 0 \quad,$$ and $$\label{eq:5}
D_j K^{ij} = 0\quad,$$ respectively, where $\nabla^2$ and $D_j$ are the Laplacian and covariant derivative in flat space.
A solution to (\[eq:5\]) for two black holes with masses $M_\delta$, linear momenta ${\bf P}_\delta$ and angular momenta ${\bf S}_\delta$, where $\delta=1,2$ is the black hole index, is [@bib:9] $$\label{eq:6}
K_{ij}({\bf r}) = K^{lin}_{ij}({\bf r}) + K^{ang}_{ij}({\bf r})\quad,$$ where $$\label{eq:7}
\begin{split}
K^{lin}_{ij}({\bf r}) &=
3 \sum_{\delta=1}^2 \left(
{1\over 2R_\delta^2} \left( P_{\delta,i} n_{\delta,j} +
P_{\delta,j} n_{\delta,i} - \right. \right.\\ &\quad \left. \left.
\left( \gamma_{ij} - n_{\delta,i} n_{\delta,j} \right)
P_{\delta,k} n_\delta^k \right)
\right)
\end{split}$$ and $$\label{eq:8}
\begin{split}
K^{ang}_{ij}({\bf r}) &=
3 \sum_{\delta=1}^2 \left(
{1\over R_\delta^3}
\left( \epsilon_{kil} S_\delta^l n_\delta^k n_{\delta,j} +
\right. \right. \\
&\quad
\left. \left. \epsilon_{kjl} S_\delta^l n_\delta^k n_{\delta,i} \right)
\right) \quad.
\end{split}$$
In (\[eq:7\]) and (\[eq:8\]), the comma in the subscripts separates the index of a black hole from the coordinate component indices and is not a symbol of differentiation, $R_\delta=M_\delta/2$ is the black hole throat radius, and ${\bf n}_\delta = ( {\bf r} - {\bf r}_\delta )
/ \vert {\bf r} - {\bf r}_\delta \vert$ is the unit vector directed from the center of the $\delta$-th black hole ${\bf r}_\delta$ to the point $\bf r$. We work in units where $G=1,~c=1$.
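Before the imaging step, the base curvature (\[eq:6\])-(\[eq:8\]) is algebraically explicit and translates directly into code. The following sketch assumes the flat background metric $\gamma_{ij}=\delta_{ij}$ and uses hypothetical container keys (`pos`, `P`, `S`) for each hole; it is an illustration, not the code of this paper:

```python
import numpy as np

def base_extrinsic_curvature(r, holes):
    """Base (pre-imaging) extrinsic curvature K_ij of eqs (6)-(8).
    `holes`: list of dicts with hypothetical keys 'pos' (throat center),
    'P' (linear momentum), 'S' (spin); flat metric gamma_ij = delta_ij."""
    eps = np.zeros((3, 3, 3))                       # Levi-Civita symbol
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    K = np.zeros((3, 3))
    for h in holes:
        d = np.asarray(r, float) - h['pos']
        R = np.linalg.norm(d)
        n = d / R                                   # unit vector n_delta
        P, S = h['P'], h['S']
        Pn = P @ n
        # linear-momentum term, eq (7)
        K += 1.5 / R**2 * (np.outer(P, n) + np.outer(n, P)
                           - (np.eye(3) - np.outer(n, n)) * Pn)
        # angular-momentum term, eq (8): A_i = eps_{kil} S^l n^k
        A = np.einsum('kil,l,k->i', eps, S, n)
        K += 3.0 / R**3 * (np.outer(A, n) + np.outer(n, A))
    return K
```

As required by (\[eq:3\]), both terms are symmetric and trace-free: the trace of the momentum term cancels between the $2P_k n^k$ and projector pieces, and $A\cdot n = 0$ for the spin term.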
The inversion-symmetric solution to (\[eq:5\]) can be obtained from (\[eq:6\]) by applying an infinite series of mirror operators to (\[eq:6\]), as described in [@bib:9]. Note that before applying the mirror operators to $K^{ang}_{ij}$, this term must be divided by 2, since the image operators will double its value. The series converges rapidly, and in practice only a few terms are taken; in this paper we take the first five terms (for details see [@bib:15]).
After the isometric solution for $K_{ij}$ is found, (\[eq:4\]) must be solved subject to the isometry boundary condition at the black hole throats $$\label{eq:9}
n_\delta^i D_i \phi = -{\phi \over 2 R_\delta}\quad,$$ and the outer boundary condition $\phi \rightarrow 1$ when $r \rightarrow \infty$. This boundary condition is represented by [@bib:9] $$\label{eq:10}
{\partial \phi\over \partial r} = {1-\phi\over r}$$ where $r$ is the distance from the center of the computational domain to the boundary.
**<span style="font-variant:small-caps;">Numerical method</span>** {#sec:3}
==================================================================
We work in the Cartesian coordinate system in the background flat space and find the solution within a cubic computational domain of size $L$. Figure \[fig:1\] shows a schematic representation of the computational domain. The centers of the black hole throats are located in the XY plane $z=0$, on the line $y=0$, at equal distances from the origin. The space inside the black hole throats is cut out. The inner boundary condition (\[eq:9\]) is imposed at the surfaces of the two spheres, and the outer boundary condition is imposed at the border of the computational domain.
Figure \[fig:2\] shows the cut of the computational domain through the $z=0$ plane and gives an example of an adaptive mesh used in computations. The values of all variables are defined at cell centers. Figure \[fig:2\] shows that coarse cells are used at large distances from the black holes, and the finest cells are used near the throats where the gradient of $\phi$ is large. The grid is refined to achieve a desired accuracy of the solution as described below.
There are three types of cells. The first type is internal cells, which are actually used in computations (these cells are white in Figure \[fig:2\]). A layer of boundary cells, one cell wide, along the outer border of the computational domain is used to impose the outer boundary conditions. Layers of boundary cells inside the throats, two cells wide, are used to impose the inner boundary conditions. A cell is considered to be inside a throat if its center is located inside it. Boundary cells are shaded in Figure \[fig:2\].
We solve (\[eq:4\]) as follows. Similar to [@bib:11], we introduce a new unknown variable $$\label{eq:11}
u = \phi - \alpha^{-1}\quad,$$ where $$\label{eq:12}
\alpha^{-1} = \sum_{\delta=1}^2
\left( R_\delta\over\vert {\bf r} - {\bf r}_\delta \vert \right) \quad.$$ The reason for using this transformation is that $u$ varies more slowly than $\phi$ near the throats and is more convenient for numerical calculations (see Section \[sec:41\]). In Cartesian coordinates, (\[eq:4\]) then becomes $$\label{eq:13}
\nabla^2 u + F(u) = 0\quad,$$ where $$\label{eq:14}
F(u) = { \beta \over \left( 1 + \alpha u \right)^7 }\quad,$$ and $$\label{eq:15}
\beta = {1\over 8}\, \alpha^7 K_{ij} K^{ij}\quad.$$
Equation (\[eq:13\]) is a nonlinear elliptic equation. Below we describe the numerical procedure of finding its solution. First consider a cell of size $\Delta$ which has six neighbors of the same size. Let us number this cell and its neighbors with integers from 0 to 6, respectively. Then the discretized form of (\[eq:13\]) is $$\label{eq:16}
{u_1 + u_2 + u_3 + u_4 + u_5 + u_6 - 6 u_0 \over \Delta^2} + F(u_0) = 0 \quad .$$ A finite-difference form of (\[eq:13\]) is more complicated for cells that have neighbors of different sizes and may involve a larger number of neighbors in order to maintain second-order accuracy. This is described in Appendix \[ap:A\]. In general, for every internal cell, the finite-difference discretization may be written as $$\label{eq:17}
f(u_0,u_1,u_2,..., u_n ) = 0 \quad ,$$ where $u_1,...,u_n$ are the values of $u$ in $n$ neighboring points chosen to represent the finite-difference stencil of a cell. In the particular case of a cell with all equal neighbors, $f$ is defined by the left-hand side of (\[eq:16\]).
We solve the set of (\[eq:17\]) by the Newton Gauss-Seidel method [@bib:16], that is, we obtain a new guess of $u_0^{new}$ using Newton iteration with respect to the unknown $ u_0$ $$\label{eq:18}
u_0^{new} = u_0 - f(u_0,...)
\left( {\partial f(u_0,...)\over\partial u_0} \right)^{-1} \quad .$$ Then we accelerate the convergence by using a successive overrelaxation (SOR) $$\label{eq:19}
u_0^{new} = \omega u_0^{new} + (1 - \omega ) u_0 \quad,$$ where $\omega$ is the overrelaxation parameter. For a simple case of all equal neighbors, (\[eq:18\]) can be written as $$\begin{split}
\label{eq:20}
u_0^{new} &= u_0 +\\
&{\left( u_1 + u_2 + u_3 + u_4 + u_5 + u_6 - 6 u_0 \right)\over 6 -
\Delta^2 \left(dF(u_0)\over du_0\right) } +\\
&\qquad {\left(\Delta^2 F(u_0)\right)
\over 6 - \Delta^2 \left(dF(u_0)\over du_0\right) } \quad .
\end{split}$$ For stencils with non-equal neighbors the discretization of equation (\[eq:13\]) is given in Appendix \[ap:A\], from which the expressions for the Newton Gauss-Seidel iterations in those cases can be written explicitly.
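The update (\[eq:18\])-(\[eq:20\]) is easy to state in code. The sketch below applies it to the one-dimensional analogue of the stencil (\[eq:16\]), with the end values of $u$ held fixed as Dirichlet stand-ins for the boundary conditions; it is an illustration, not the three-dimensional FTT implementation of this paper:

```python
import numpy as np

def F(u, alpha, beta):
    """Nonlinear source of eq (14): beta / (1 + alpha*u)^7."""
    return beta / (1.0 + alpha * u)**7

def dF(u, alpha, beta):
    """Derivative of F with respect to u."""
    return -7.0 * alpha * beta / (1.0 + alpha * u)**8

def newton_gs_sor(u, alpha, beta, dx, omega=1.5, tol=1e-12, itmax=50000):
    """Newton Gauss-Seidel sweeps with SOR, eqs (18)-(19), for the 1D
    analogue of (16):  (u[i-1] + u[i+1] - 2 u[i])/dx^2 + F(u[i]) = 0."""
    for it in range(itmax):
        du_max = 0.0
        for i in range(1, len(u) - 1):
            f = (u[i - 1] + u[i + 1] - 2.0 * u[i]) / dx**2 + F(u[i], alpha[i], beta[i])
            unew = u[i] - f / (-2.0 / dx**2 + dF(u[i], alpha[i], beta[i]))  # eq (18)
            unew = omega * unew + (1.0 - omega) * u[i]                      # eq (19)
            du_max = max(du_max, abs(unew - u[i]))
            u[i] = unew
        if du_max < tol:
            return u, it
    return u, itmax
```

With $\beta=0$ the scheme reduces to plain SOR for the Laplace equation and reproduces the exact linear solution; with $\beta\neq 0$ the Newton denominator $-2/\Delta^2 + F'(u_0)$ keeps the local update well conditioned, since $F'<0$.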
We select the value of $\omega$ from the interval $[1,\omega_{max}]$ by the method described in [@bib:17]. The value of $\omega_{max}$ is initially set to $\omega_{max} = 1.995$. If the solution begins to diverge during the iterations, $\omega$ is reset to 1, $\omega_{max}$ is decreased by 2%, and the relaxation is continued allowing $\omega$ to increase up to the new $\omega_{max}$ or until the solution starts to diverge again. Iterations are terminated when $$\label{eq:21}
{u^{new}_0 - u_0 \over u_0} < \varepsilon$$ for all $u_0$, where $\varepsilon$ is a predefined small number. In this paper, we do not attempt to accelerate the iterations using, for example, a multigrid or other sophisticated techniques since the initial value problem must be solved only once.
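The $\omega$ back-off strategy just described can be sketched as a driver loop. Here `relax_sweep` and `diverged` are hypothetical callbacks (one relaxation sweep returning the maximum change, and a divergence test on its history), not functions from the paper's code:

```python
def relax_with_adaptive_omega(relax_sweep, diverged, omega_max=1.995, tol=1e-12):
    """Back-off strategy sketch: omega grows toward omega_max; if the
    iteration starts to diverge, omega is reset to 1 and omega_max is
    decreased by 2%, after which omega is again allowed to grow."""
    omega, history = 1.0, []
    while True:
        change = relax_sweep(min(omega, omega_max))
        history.append(change)
        if change < tol:
            return omega_max                     # converged
        if diverged(history):
            omega, omega_max, history = 1.0, 0.98 * omega_max, []
        else:
            omega = min(omega * 1.05, omega_max)  # let omega grow toward the cap
```

The growth factor 1.05 is an arbitrary illustrative choice; the essential logic is the reset-and-shrink response to divergence.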
The procedure described above assumes that the values of $u$ are known at all neighbors. For internal cells that are close to a boundary, these values are substituted with the values of $u$ in boundary cells. Now we describe the procedure of assigning values of $u$ to boundary cells. A similar technique was used for fluid flow simulations about complex bodies [@bib:18].
Figure \[fig:3\] illustrates the process for the outer boundary. In this figure, cell 1 is a boundary cell. We need to define a new value $u^{new}_C$ in its center, point C. We find another point, A, which is located at a distance $\Delta$ (equal to the size of cell 1) from point $C$ along the line that connects $C$ with the center of the computational domain. The value of $u_A$ is found by second-order interpolation using the values of $u$ in cell 2 and all of its neighbors; the interpolation involves old values of $u$ in both internal cells and boundary cells. Then $\phi_A$ is computed using equation (\[eq:11\]) with $\alpha^{-1}$ evaluated at point A. The finite-difference expression of (\[eq:10\]) can be written as $$\label{eq:22}
{\phi^{new}_C - \phi_A\over\Delta} =
{ 1 - ( \phi^{new}_C + \phi_A)/2 \over ( r_C - \Delta/2 ) } \quad ,$$ and then solved for $\phi^{new}_C$. The value of $u^{new}_C$ is then finally found using equation (\[eq:11\]) with $\alpha^{-1}$ calculated at point C.
Figure \[fig:4\] illustrates the process of defining $u$ for the inner boundary. Cell 1 is located inside a throat, and we need to define a new value $u^{new}_C$ in its center, point C. The point $A$ is a point outside the throat that has the same distance to the inner boundary as $C$, along the normal to the throat passing through point C. Let us denote the distance between C and A as $\Delta_{AC}$. Again, the value of $u_A$ is found by second-order interpolation using old values of $u$ in cell 2 and its neighbors, and $\phi_A$ is calculated using equation (\[eq:11\]). Then the boundary condition (\[eq:9\]) becomes $$\label{eq:23}
{ \phi^{new}_C - \phi_A \over \Delta_{AC} } =
{ \phi_A + \phi^{new}_C \over 4R_{\delta} } \quad,$$ which can be solved for $\phi^{new}_C$; $R_{\delta}$ is the radius of the throat that contains point C. As before the value of $u^{new}_C$ is then finally found using equation (\[eq:11\]) with $\alpha^{-1}$ evaluated at point C.
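Both boundary updates reduce to solving a linear equation for $\phi^{new}_C$; a direct transcription of (\[eq:22\]) and (\[eq:23\]), as a sketch:

```python
def outer_bc_update(phi_A, dx, r_C):
    """Solve eq (22) for phi_C^new:
       (phi_C - phi_A)/dx = (1 - (phi_C + phi_A)/2) / (r_C - dx/2)."""
    d = r_C - dx / 2.0
    return (phi_A * (1.0 - dx / (2.0 * d)) + dx / d) / (1.0 + dx / (2.0 * d))

def inner_bc_update(phi_A, d_AC, R):
    """Solve eq (23) for phi_C^new:
       (phi_C - phi_A)/d_AC = (phi_A + phi_C) / (4 R)."""
    q = d_AC / (4.0 * R)
    return phi_A * (1.0 + q) / (1.0 - q)
```

For the Schwarzschild conformal factor $\phi=1+R/r$ along a radial line, the outer update is exact (the midpoint $(r_C-\Delta/2)$ makes the discrete Robin condition hold identically), while the inner update carries an $O(\Delta^2)$ truncation error.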
After all inner and outer boundary points are defined, the next relaxation sweep (\[eq:18\])-(\[eq:19\]) is performed for the internal cells, and so on, until the convergence criterion (\[eq:21\]) is satisfied. The advantage of the method described above is that it applies the same second-order accurate numerical algorithm to all cells. The method described in [@bib:10] required 77 different numerical stencils corresponding to different relative positions of the boundary and interior cells, and was only first-order accurate.
The computational domain used by the FTT is a cube of size $L$. It can be subdivided into cubic cells of various sizes $1/2,~1/4,~1/8,\ldots$ of $L$. Cells are organized in a tree with the direction of thread pointers inverted; these pointers are directed from children to neighbors of parent cells, as described in [@bib:5]. The most important property of the FTT data structure is that all operations on it, including tree refinement and derefinement, can be performed in parallel. The memory overhead of FTT is extremely small: two integers per computational cell. All coding was done using the FTTLIB software library [@bib:5], which contains functions for refinement, derefinement, finding neighbors, children, parents, and coordinates of a cell, and for performing parallel operations.
We characterize an FTT mesh by the minimum and maximum levels of leaves (unsplit cells) present in the tree, $l_{min}$ and $l_{max}$. We construct an adaptively refined mesh by starting with one computational cell representing the entire computational domain and then successively subdividing it by a factor of two until we reach the level $l_{min}$. We find a coarse solution for two black holes at level $l_{min}$ using $u=1$ as an initial guess. After this solution is obtained, we identify the regions that require finer cells. These regions are refined once, and a new converged solution is obtained, with the old coarse solution used as an initial guess. The procedure is repeated until the level of refinement reaches $l_{max}$.
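The level-by-level strategy can be sketched as a driver loop. Here `solve`, `refine_flagged`, and `interpolate` are hypothetical callbacks standing in for the converged relaxation on a mesh, one-level refinement of the flagged regions, and transfer of the old solution onto the refined mesh:

```python
def progressive_solve(solve, refine_flagged, interpolate, l_min, l_max, coarse_mesh):
    """Level-by-level driver (sketch): obtain a converged coarse solution,
    then repeatedly refine flagged regions by one level and re-solve,
    reusing the previous solution as the initial guess."""
    mesh = coarse_mesh
    u = solve(mesh, guess=None)               # coarse solve (from u = 1)
    for level in range(l_min, l_max):
        mesh = refine_flagged(mesh, u)        # refine once where needed
        u = solve(mesh, guess=interpolate(u, mesh))
    return mesh, u
```

Reusing the coarse solution as the initial guess is what keeps the cost of each successive level modest: the relaxation only has to correct the solution near the newly refined cells.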
**<span style="font-variant:small-caps;">Test results</span>** {#sec:4}
==============================================================
***One Schwarzschild black hole: one-dimensional test*** {#sec:41}
---------------------------------------------------------
Before presenting the results of three-dimensional test computations on an FTT mesh, we will discuss the accuracy of our numerical method using a simpler one-dimensional test problem. A single Schwarzschild black hole at rest has an analytic solution for the conformal factor $$\label{eq:24}
\phi = 1 + { R\over r}\quad,$$ where $R$ is the throat radius and $r$ is the distance in the background space from the center of the black hole throat. The one-dimensional test was performed on a uniform grid in order to assess the influence of our treatment of the boundary conditions (\[eq:22\]) and (\[eq:23\]), and of the change of variable from $\phi$ to $u$ (\[eq:11\]), on the accuracy of the solution. In these calculations, we used a uniform one-dimensional grid consisting of $n=16$, $32$, and $64$ cells. Cells $2$ through $n-1$ were interior cells; cells $1$ and $n$ were the inner and outer boundary cells, respectively. The computations were performed for the grid size $L=10$ and throat radius $R=1$, using the convergence criterion $\varepsilon= 6\times 10^{-15}$. The throat was located between the first and second cell centers at a distance $\Delta r$ from the center of the border cell 1. Three values of $\Delta r = 0.05\Delta$, $0.5\Delta$, and $0.95\Delta$ were considered, where $\Delta$ is the cell size. When $\Delta r = 0.5\Delta$, the throat is located exactly in the middle between the points A and C in (\[eq:23\]), and the inner boundary condition (\[eq:23\]) becomes second-order accurate regardless of the order of interpolation used for finding $u_A$. For $\Delta r = 0.05\Delta$ and $0.95\Delta$, the overall accuracy of the inner boundary condition depends on the interpolation used for finding $u_A$.
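What makes (\[eq:24\]) a clean test case is that it satisfies the continuum boundary conditions (\[eq:9\]) and (\[eq:10\]) exactly, the latter at every radius rather than only asymptotically. A quick numerical verification:

```python
R = 1.0
phi = lambda r: 1.0 + R / r          # eq (24), Schwarzschild conformal factor
dphi = lambda r: -R / r**2           # radial derivative of phi

# inner condition (9) at the throat r = R:  n^i D_i phi = -phi/(2R)
inner_lhs, inner_rhs = dphi(R), -phi(R) / (2.0 * R)

# outer Robin condition (10); for this phi it holds at every radius
r = 7.3
outer_lhs, outer_rhs = dphi(r), (1.0 - phi(r)) / r
```

Any residual deviation of a numerical solution from (\[eq:24\]) therefore measures the discretization error alone.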
Numerical solutions were obtained using three different methods: [*(a)*]{} solving the finite-difference form of (\[eq:4\]) for the original unknown variable $\phi$ and using [*first-order*]{} interpolation to find $\phi_A$ (the rest of the boundary condition procedure was identical to that described in Section \[sec:3\]), [*(b)*]{} solving for the original variable $\phi$ but using second-order interpolation, and finally, [*(c)*]{} using both the second-order interpolation and solving for the new unknown $u$, as described in Section \[sec:3\].
[*Accuracy of one-dimensional computations.*]{}\
------------------------------------------------------------------------------------------------------------------
  Method   $\Delta r / \Delta$   n=16                        n=32                        n=64
  -------- --------------------- --------------------------- --------------------------- ---------------------------
           0.95                  $-2.9\times 10^{-1}$        $-7.2\times 10^{-2}$        $-1.8\times 10^{-2}$
  (a)      0.5                   $-6.1\times 10^{-2}$        $-1.7\times 10^{-2}$        $-5.0\times 10^{-3}$
           0.05                  $-3.8\times 10^{-1}$        $-4.6\times 10^{-1}$        $-4.7\times 10^{-1}$
           0.95                  $-2.9\times 10^{-1}$        $-7.0\times 10^{-2}$        $-1.7\times 10^{-2}$
  (b)      0.5                   $-6.1\times 10^{-2}$        $-1.7\times 10^{-2}$        $-5.0\times 10^{-3}$
           0.05                  $-3.0\times 10^{-1}$        $-1.2\times 10^{-1}$        $-4.2\times 10^{-2}$
           0.95                  $-3.4(3.5)\times 10^{-1}$   $-8.7(8.8)\times 10^{-2}$   $-2.2(2.2)\times 10^{-2}$
  (c)      0.5                   $-7.4(9.8)\times 10^{-2}$   $-2.1(2.4)\times 10^{-2}$   $-5.7(6.1)\times 10^{-3}$
           0.05                  $-9.9(9.8)\times 10^{-4}$   $-1.4(2.4)\times 10^{-4}$   $-5.7(6.1)\times 10^{-5}$
------------------------------------------------------------------------------------------------------------------
Table \[tab:1\] compares the numerical and analytical solutions for these three cases by showing the maximum relative deviation of the numerical solution from the analytical solution at the interior points of the grid. As can be seen from Table \[tab:1\], the accuracy varies with the grid resolution ($n$), the method of interpolation, and the choice of the unknown variable ($\phi$ or $u$). It also depends on the exact location of the throat relative to the grid points ($\Delta r/\Delta$). Since in three-dimensional calculations the throats may fall at various positions relative to the grid points, we need a numerical procedure that provides second-order accuracy in all cases.
Results using method [*(a)*]{} show that the accuracy of the solution using first order interpolation for $\phi_A$ is unacceptable. The third row of Table \[tab:1\] shows that the accuracy does not improve with an increasing number of cells. Results using method [*(b)*]{} show that second order interpolation for $\phi_A$ leads to an overall second-order algorithm: the accuracy of the solution increases roughly by a factor of four when the grid resolution is doubled. Results for method [*(c)*]{} show that the accuracy is further improved for small $\Delta r$.
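The observed convergence order can be read off the tabulated errors directly, as $p = \log_2(e_n/e_{2n})$; a quick check on the method [*(b)*]{}, $\Delta r = 0.5\Delta$ row of Table \[tab:1\]:

```python
from math import log2

# Max relative errors from Table 1, method (b), Delta r = 0.5*Delta,
# for n = 16, 32, 64.
errors_b = [6.1e-2, 1.7e-2, 5.0e-3]
orders = [log2(errors_b[i] / errors_b[i + 1]) for i in range(2)]
print(orders)   # both close to 2 => second-order convergence
```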
It is possible to give an analytical estimate of the error introduced by the numerical inner boundary condition (\[eq:22\]) for the Schwarzschild black hole case. In this case, the general solution of (\[eq:4\]) that is bounded at infinity is $\phi = c + b/r$. Let us assume that the numerical outer boundary condition does not introduce any error, and that the solution at the interior points is found exactly; the numerical approximation of the inner boundary condition is then the unique source of numerical error, and the numerical solution has the form $\phi = 1 + b/r$. The difference between $b$ and $R$ in (\[eq:24\]) then determines the overall error in the solution. We can find $b$ by substituting $\phi = 1 + b/r$ into (\[eq:22\]). The estimate of the relative error then is $$\label{eq:25}
{\rm Relative~error} = {R-b\over R} = \left(\Delta r \over 2
R\right)^2 \quad .$$ The estimate of the error using (\[eq:25\]) is given in Table \[tab:1\] in brackets for the method [*(c)*]{}. The comparison with the numerical error indicates that for method [*(c)*]{}, the error in the solution is second-order and is determined by the accuracy of the inner boundary condition rather than by errors of numerical calculations for internal cells.
Time-symmetric initial data for two black holes. {#sec:42}
------------------------------------------------
Next, we consider the case of two black holes with ${\bf P}_\delta = {\bf S}_\delta =0$ that have masses $M_1 = 1$, $M_2=2$ and whose throat centers are located at ${\bf r}_1 = (-4,0,0)$, ${\bf r}_2 = (4,0,0)$, with separation $\vert {\bf r}_1 - {\bf r}_2 \vert = 8$. The size of the computational domain is $L=64$. Numerical solutions were obtained using FTT adaptive meshes with increasing resolution near the black hole throats. We characterize the resolution by specifying the minimum and maximum levels of cells in the tree, $l_{min}$ and $l_{max}$. The cell size at a given level $l$ is $\Delta_l = L \cdot 2^{-l}$. The computations were performed on meshes with $l_{min} = 4,5,6$ and $l_{max} = 6,7,8,9,10,11$. The refinement criterion for this case was the requirement that $$\begin{split}
\label{eq:26}
\eta &= {\Delta \over \phi^4} \left(
\left(\partial\phi^4\over\partial x\right)^2 +
\left(\partial\phi^4\over\partial y\right)^2 +
\left(\partial\phi^4\over\partial z\right)^2
\right)^{1/2} \\
&< 0.05
\end{split}$$ in every cell, where partial derivatives in (\[eq:26\]) are determined by the numerical differentiation. We used $\varepsilon = 6\times 10^{-7}$ in (\[eq:21\]) to terminate iterations.
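For concreteness, the indicator (\[eq:26\]) can be evaluated per cell from neighboring values of $\phi$ using central differences of $\phi^4$. The helper below is an illustrative sketch (the function name and the sample values are invented for the example), flagging a cell for refinement when $\eta$ exceeds the 0.05 threshold:

```python
def eta(phi_c, phi_xm, phi_xp, phi_ym, phi_yp, phi_zm, phi_zp, delta):
    """Refinement indicator (26): (Delta/phi^4) * |grad(phi^4)|, with the
    gradient taken by central differences of phi^4 over neighbor cells."""
    gx = (phi_xp ** 4 - phi_xm ** 4) / (2 * delta)
    gy = (phi_yp ** 4 - phi_ym ** 4) / (2 * delta)
    gz = (phi_zp ** 4 - phi_zm ** 4) / (2 * delta)
    return delta / phi_c ** 4 * (gx * gx + gy * gy + gz * gz) ** 0.5

# A steep conformal factor (as near a throat) triggers refinement;
# a flat field does not.  Sample values are invented for illustration.
steep = eta(2.0, 2.4, 1.7, 2.0, 2.0, 2.0, 2.0, delta=0.5)
flat = eta(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, delta=0.5)
print(steep > 0.05, flat)
```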
The mirror-image symmetric analytic solution for two time-symmetric black holes is given in Appendix \[ap:B\] (for details of the derivation see [@bib:15]). Table \[tab:2\] compares the numerical solutions with fixed $l_{min} = 5$ and varying $l_{max} = 5-11$ with the analytical solution. The table shows the maximum deviation of a numerical solution from the analytical one, together with the level of the cell where the maximum error was found. From the table we see that the accuracy of the solution increases approximately linearly with increasing $l_{max}$, and that the maximum error is located at the maximum level of refinement near the throats. When we compare solutions obtained on meshes on which the resolution was increased on all levels simultaneously (Table \[tab:3\]), we observe better than linear convergence, as is to be expected.
The computations performed on an adaptive mesh allow us to save a significant amount of computational resources. For example, our solution obtained on the $l_{min} = 5$, $l_{max} = 11$ adaptive mesh used $6\times 10^5$ computational cells. An equivalent uniform-grid computation with the same resolution near the throats would have required using a $2048^3$ uniform Cartesian grid with $\simeq 8\times 10^9$ cells. That is, in this case the computational gain was $\sim 10^4$.
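The quoted gain is straightforward arithmetic; checking it:

```python
uniform_cells = 2048 ** 3     # same resolution everywhere as at level 11
amr_cells = 6e5               # cells used by the l_min=5, l_max=11 FTT mesh
gain = uniform_cells / amr_cells
print(uniform_cells, gain)    # ~8.6e9 cells on the uniform grid, gain ~1.4e4
```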
[*Accuracy of two black hole time-symmetric computations.*]{}\
$l_{min}~-~l_{max}$ Max. error $l_{err}$
--------------------- --------------------- -----------
5 - 5 $3.3\times 10^{-2}$ 5
5 - 6 $4.8\times 10^{-2}$ 6
5 - 7 $2.9\times 10^{-2}$ 7
5 - 8 $1.0\times 10^{-2}$ 8
5 - 9 $1.2\times 10^{-2}$ 9
5 - 10 $3.9\times 10^{-3}$ 10
5 - 11 $8.9\times 10^{-4}$ 11
[*Accuracy of two black hole time-symmetric computations with the resolution increased on all levels.*]{}\
$l_{min}~-~l_{max}$ Max. error
--------------------- ---------------------
4 - 8 $3.6\times 10^{-2}$
5 - 9 $1.2\times 10^{-2}$
6 - 10 $3.7\times 10^{-3}$
[*Accuracy of computations of two black holes with non-zero linear and angular momenta.*]{}\
$l_{min}~-~l_{max}$ Max. error Avg. error
--------------------- --------------------- ---------------------
5 - 5 $1.1\times 10^{-1}$ $1.1\times 10^{-2}$
5 - 6 $6.5\times 10^{-2}$ $9.0\times 10^{-3}$
5 - 7 $1.6\times 10^{-2}$ $1.8\times 10^{-3}$
5 - 8 $1.7\times 10^{-2}$ $2.1\times 10^{-3}$
5 - 9 $5.3\times 10^{-3}$ $6.4\times 10^{-4}$
5 - 10 $7.5\times 10^{-4}$ $6.7\times 10^{-5}$
Two black holes with linear and angular momenta {#sec:43}
-----------------------------------------------
Cook et al. [@bib:10] considered the initial conditions for two black holes of equal mass $M_1=M_2=2$ with non-zero linear and angular momenta (case A1B8 in Table 3 of [@bib:10]). In our coordinate system (Figure \[fig:1\]), the components of the linear and angular momenta of the black holes for the case A1B8 are ${\bf P}_1=-{\bf P}_2=(0,0,-14)$, and ${\bf
S}_1=(280,280,0)$ and ${\bf S}_2=(0,280,280)$, respectively. The throats are located at ${\bf r}_1 = (4,0,0)$ and ${\bf r}_2 = (-4,0,0)$, with a relative separation equal to eight. We computed the A1B8 case using the same size of the computational domain, $L=28.8$, as that used in [@bib:10], and a series of refined meshes with increasing resolution near the throats, $l_{min} = 5$, $l_{max} = 5-11$. We used the same value of $\varepsilon$ as in Section \[sec:42\] but used a modified mesh refinement criterion $$\label{eq:27}
\begin{split}
\eta &= \max \left(
{\Delta \over \phi^4} \left(
\left(\partial\phi^4\over\partial x\right)^2 +
\left(\partial\phi^4\over\partial y\right)^2 + \right.\right. \\
&\qquad \left.\left.
\left(\partial\phi^4\over\partial z\right)^2
\right)^{1/2}, \vert K_{ij}\vert \right)~< 0.05
\end{split}$$
Table \[tab:4\] compares the coarse solutions $l_{min}=5$, $l_{max}=5-10$ with the finest solution, obtained on the $l_{min}=5$, $l_{max}=11$ mesh. It shows that both the maximum and the average deviation of the solutions decrease by more than two orders of magnitude when the maximum resolution near the throats is increased by a factor of 32. The solutions in [@bib:10] did not show improvement with increasing resolution (see their Table 3).
Figure 5 compares $\phi$ on the line passing through the centers of the throats, computed in this paper with resolution $l_{min}=5$, $l_{max}=8$, with the results presented in [@bib:10] (their Figure 4). In our computations, finer cells cluster near the throats, where the gradient of the solution is largest, whereas in [@bib:10] the cells have the same size and are uniformly distributed in space. This, combined with the overall second-order accuracy of our method, is the reason why the adaptive mesh refinement solution improves when the mesh is refined (see Table \[tab:4\]).
Conclusions. {#sec:5}
============
In this paper, we applied a new adaptive mesh refinement technique, the fully threaded tree (FTT), to the construction of initial data for the problem of the collision of two black holes. FTT allows the mesh to be refined at the level of individual cells and leads to efficient computational algorithms. Adaptive mesh refinement is very important for the problem of black hole collisions because very high resolution is required to obtain an accurate solution.
We have developed a second-order approach to representing both the inner boundary conditions at the throats of the black holes and the outer boundary conditions. This allowed us to implement a formally second-order accurate solution of the energy constraint. We presented test results for two black holes that demonstrate a clear improvement in the accuracy of the solution as the numerical resolution is increased.
The FTT-based AMR approach gives a gain of several orders of magnitude in savings of both memory and computer time (Section \[sec:42\]), and opens up the possibility of using Cartesian meshes for very high resolution simulations of black hole collisions. A second-order boundary condition technique similar to that developed in this paper can be applied for the integration of initial conditions in time. We plan to use these techniques for time integration of the black hole collision problem.
[*Acknowledgments.*]{} This paper is based in part on the Master Thesis of Ms. Nina Jansen [@bib:15]. We thank J. Craig Wheeler, Elaine S. Oran and Jay P. Boris for their support, encouragement, and discussions, and Almadena Yu. Chtchelkanova for help with FTT. The work was supported in part by the NSF grant AST-94-17083, Office of Naval Research, DARPA, Danish Natural Science Research Council grant No.9401635, and by Danmarks Grundforskningsfond through its support for the establishment of the Theoretical Astrophysics Center.
Appendix: Finite-difference formulas on the Fully Threaded Tree {#ap:A}
===============================================================
On the FTT mesh, we use the four different types of stencils shown in Figure \[fig:A\] when a cell has zero, one, two, or three neighbors that are two times larger.
These stencils involve nine neighbors of a cell and the cell itself. Let us introduce the vector of partial derivatives of $u$ at the center of cell $0$ $$\begin{split}
\label{eq:a1}
Du_k &= ( u, { \partial u\over\partial x},
{ \partial u\over\partial y},
{ \partial u\over\partial z},
{ \partial^2 u\over\partial x^2},
{ \partial^2 u\over\partial x\partial y},
{ \partial^2 u\over\partial x\partial z}, \\
&\quad
{ \partial^2 u\over\partial y^2},
{ \partial^2 u\over\partial y\partial z},
{ \partial^2 u\over\partial z^2})
\end{split}$$ which includes the value of the function itself as the zeroth component. We can express the values of $u_i$ in $i=1,...,9$ neighboring cells with the second-order accurate Taylor expansion using $Du_k$. This leads to a linear system of ten equations for the ten $Du_k$ unknowns. From this system, $Du_k$ can be expressed as a weighted sum of the values of $u_i$ in the cell $0$ and its neighbors $$\label{eq:a2}
Du_k = \sum_{i=0}^9 w_k^i u_i \quad .$$ Due to the limited number of stencils encountered in the FTT structure, this can be done once and for all and the weights can be stored in an array.
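The derivation of such weights can be reproduced numerically: write the second-order Taylor expansion of $u$ at each stencil point, assemble the resulting $10\times 10$ linear system, and solve it. The sketch below does this for a simple illustrative stencil (the cell, six unit-spaced face neighbors, and three edge diagonals — chosen for the example, not one of the FTT stencils of Figure \[fig:A\]) and recovers the derivatives of a quadratic exactly, as the second-order Taylor expansion guarantees:

```python
def solve(A, b):
    """Tiny dense solver (Gaussian elimination with partial pivoting)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Illustrative stencil: the cell itself, 6 face neighbors, 3 edge diagonals.
pts = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1),
       (0, 0, -1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

def taylor_row(d):
    x, y, z = d
    # Coefficients multiplying Du = (u, ux, uy, uz, uxx, uxy, uxz, uyy, uyz, uzz)
    return [1.0, x, y, z, 0.5 * x * x, x * y, x * z, 0.5 * y * y, y * z, 0.5 * z * z]

# A quadratic test function: its second-order Taylor expansion is exact.
u = lambda x, y, z: 1 + 2*x - y + 3*z + x*x + x*y - 2*x*z + 0.5*y*y + y*z + z*z
Du = solve([taylor_row(d) for d in pts], [u(*d) for d in pts])
print([round(v, 10) for v in Du])
```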
For three bigger neighbors in the positive $x$-, $y$- and $z$-directions the weights are: $$\label{eq:a3}
\begin{split}
w_1 &= {1\over 52\Delta}(-48,-14,54,-1,15,\\
&\qquad \qquad -1,15,-11,-11,2),\\
w_2 &= {1\over 52\Delta}(-48,-1,15,-14,54,\\
&\qquad \qquad -1,15,-11,2,-11),\\
w_3 &= {1\over 52\Delta}(-48,-1,15,-1,15,\\
&\qquad \qquad -14,54,2,-11,-11),\\
w_4 &= {1\over 26\Delta^{2}}(-4,14,-2,1,-15,\\
&\qquad \qquad 1,-15,11,11,-2), \\
w_5 &= {1\over \Delta^{2}}(1,0,-1,0,-1,0,0,1,0,0), \\
w_6 &= {1\over \Delta^{2}}(1,0,-1,0,0,0,-1,0,1,0), \\
w_7 &= {1\over 26\Delta^{2}}(-4,1,-15,14,-2, \\
&\qquad \qquad 1,-15,11,-2,11), \\
w_8 &= {1\over \Delta^{2}}(1,0,0,0,-1,0,-1,0,0,1), \\
w_9 &= {1\over 26\Delta^{2}}(-4,1,-15,1,-15, \\
&\qquad \qquad 14,-2,-2,11,11).
\end{split}$$ For two bigger neighbors in the positive $x$- and $y$-directions the weights are: $$\label{eq:a4}
\begin{split}
w_1 &= {1\over 56\Delta}(-48,-15,57,-1,15, \\
&\qquad \qquad -2,14,-12,-11,3),\\
w_2 &= {1\over 56\Delta}(-48,-1,15,-15,57, \\
&\qquad \qquad -2,14,-12,3,-11),\\
w_3 &= {1\over 2\Delta}(0,0,0,0,0,-1,1,0,0,0), \\
w_4 &= {1\over 28\Delta^{2}}(-8,15,-1,1,-15,\\
&\qquad \qquad 2,-14,12,11,-3), \\
w_5 &= {1\over \Delta^{2}}(1,0,-1,0,-1,0,0,1,0,0), \\
w_6 &= {1\over \Delta^{2}}(1,0,-1,0,0,0,-1,0,1,0), \\
w_7 &= {1\over 28\Delta^{2}}(-8,1,-15,15,-1,\\
&\qquad \qquad 2,-14,12,-3,11), \\
w_8 &= {1\over \Delta^{2}}(1,0,0,0,-1,0,-1,0,0,1), \\
w_9 &= {1\over \Delta^{2}}(-2,0,0,0,0,1,1,0,0,0).
\end{split}$$ For one big neighbor in the positive $x$ direction the weights are: $$\label{eq:a5}
\begin{split}
w_1 &= {1\over 30\Delta}(-24,-8,30,-1,7,-1,7,-6,-6,2),\\
w_2 &= {1\over 2\Delta}(0,0,0,-1,1,0,0,0,0,0), \\
w_3 &= {1\over 2\Delta}(0,0,0,0,0,-1,1,0,0,0), \\
w_4 &= {1\over 15\Delta^{2}}(-6,8,0,1,-7,1,-7,6,6,-2), \\
w_5 &= {1\over \Delta^{2}}(1,0,-1,0,-1,0,0,1,0,0), \\
w_6 &= {1\over \Delta^{2}}(1,0,-1,0,0,0,-1,0,1,0), \\
w_7 &= {1\over \Delta^{2}}(-2,0,0,1,1,0,0,0,0,0), \\
w_8 &= {1\over \Delta^{2}}(1,0,0,0,-1,0,-1,0,0,1), \\
w_9 &= {1\over \Delta^{2}}(-2,0,0,0,0,1,1,0,0,0).
\end{split}$$ For zero big neighbors the mixed second order derivatives are given by the same weights as for the other three cases, while all other derivatives are given by the standard formulas for central differences on a uniform grid. Equation (\[eq:13\]) then can be expressed as $$Du_4 + Du_7 + Du_9 + F(u_0) = 0 \quad .$$
Appendix: Conformal factor of two time-symmetric black holes. {#ap:B}
=============================================================
A solution for the conformal factor of two time-symmetric black holes [@bib:19], [@bib:9] can be written in the Cartesian coordinates as [@bib:15] $$\label{eq:b1}
\phi({\bf r}) = 1 + \sum_{n=1}^\infty \left( F_1^{n} + F_2^{n} \right)
\quad ,$$ with $$\begin{split}
F_1^n &= \begin{cases} F_1^{n-1} \left( R_1 / \rho_{11}^{n-1} \right)
&\text{for $n$ odd};\\
F_1^{n-1} \left( R_2 / \rho_{12}^{n-1} \right)~, &\text{for $n$ even}; \\
1~, &\text{for $n=0$}, \end{cases} \\
F_2^n &= \begin{cases} F_2^{n-1} \left( R_2 / \rho_{21}^{n-1}
\right)~, & \text{for $n$ odd}; \\
F_2^{n-1} \left( R_1 / \rho_{22}^{n-1} \right)~, &\text{for $n$ even}; \\
1~, &\text{for $n=0$}, \end{cases}
\end{split}$$ where $$\label{eq:b3}
\rho_{\alpha\delta}^{n-1} =
\vert {\bf x}^{n-1}_\alpha - {\bf r}_\delta \vert~,$$ $\alpha=1,2$, $\delta=1,2$, ${\bf r}_\delta$ are the positions of the centers of black hole throats, $R_\delta$ are the throat radii, $$\label{eq:b4}
\begin{split}
{\bf x}_1^n &= \begin{cases} {\bf J}_1({\bf x}_1^{n-1})~,
&\text{for $n$ odd}; \\
{\bf J}_2({\bf x}_1^{n-1})~,
&\text{for $n$ even}; \\
{\bf r}~, &\text{for $n=0$},
\end{cases} \\
{\bf x}_2^n &= \begin{cases} {\bf J}_2({\bf x}_2^{n-1})~,
&\text{for $n$ odd}; \\
{\bf J}_1({\bf x}_2^{n-1})~,
&\text{for $n$ even}; \\
{\bf r}, &\text{for $n=0$},
\end{cases}
\end{split}$$ and $$\label{eq:b5}
{\bf J}_\delta ({\bf x})
= \left( R_\delta^2 \over \vert {\bf x} - {\bf r}_\delta \vert^2
\right)
\left( {\bf x} - {\bf r}_\delta \right) + {\bf r}_\delta \quad .$$
[2]{}
M.W. Choptuik, Phys. Rev. Lett. [**70**]{}, 9 (1993); B.M. Parashar and J.C. Browne, [*An Infrastructure for Parallel Adaptive Mesh-Refinement Techniques*]{}, Technical Report, Department of Computer Sciences, University of Texas at Austin, 2.400 Taylor Hall, Austin, TX 78712, 1995; B. Brügmann, Int. J. Mod. Phys. D [**8**]{}, 85 (1999); P. Papadopoulos, E. Seidel and L. Wild, gr-qc/9802069 and Phys. Rev. D, in press. For example, M.J. Berger and J. Oliger, J. Comput. Phys. [**53**]{}, 484 (1984). For example, D.J. Mavriplis, Ann. Rev. Fluid Mech. [**29**]{}, 473 (1997). For example, D.P. Young, R.G. Melvin, M.B. Bieterman, F.T. Johnson, S.S. Samant, and J.E. Bussoletti, J. Comput. Phys. [**92**]{}, 1 (1991). A.M. Khokhlov, J. Comput. Phys. [**143**]{}, 519 (1998); A.M. Khokhlov and A.Yu. Chtchelkanova, [*Fully Threaded Tree Algorithms for Massively Parallel Computations*]{}, Proceedings of the Ninth SIAM Conference on Parallel Processing, March 22-24, 1999, San Antonio, TX, USA. J.M. Bowen and J.W. York, Jr., Phys. Rev. D [**21**]{}, 2047 (1980). J.W. York, Jr., J. Math. Phys. [**14**]{}, 456 (1973). J.W. York, Jr. and T. Piran, in Spacetime and Geometry, edited by R. Matzner and L. Shepley (University of Texas Press, Austin, 1982), pp. 147-176. G.B. Cook, Phys. Rev. D [**44**]{}, 2983 (1991). G.B. Cook, M.W. Choptuik, M.R. Dubal, S. Klasky, R.A. Matzner, and S.R. Oliveira, Phys. Rev. D [**47**]{}, 1471 (1993). S. Brandt and B. Brügmann, Phys. Rev. Lett. [**78**]{}, 3606 (1997). C.O. Lousto and R.H. Price, Phys. Rev. D [**57**]{}, 1073 (1998). J. Baker and R.S. Puzio, Phys. Rev. D [**59**]{}, 044030 (1998). W. Krivan and R.H. Price, Phys. Rev. D [**58**]{}, 104003 (1998). N. Jansen, [*The initial value problem of general relativity*]{}, Master Thesis, Theoretical Astrophysics Center, Copenhagen, Denmark; [http://www.tac.dk/$\sim$jansen/thesis.ps.gz]{}. W.H. Press, B.P. Flannery, S.A. Teukolsky and W.T. Vetterling, [*Numerical Recipes in FORTRAN: The Art of Scientific Computing*]{} (Cambridge University Press, Cambridge, England, 1992). L.A. Hageman and D.M. Young, [*Applied Iterative Methods*]{} (Academic Press, New York, 1981). S. Hu, T. Aslam, and S. Stewart, Combust. Theory Modelling [**1**]{}, 113 (1997). C.W. Misner, Ann. Phys. (N.Y.) [**24**]{}, 102 (1963).
---
abstract: 'The radiative decays $B^{*}(D^{*})\rightarrow B(D) \gamma$ are investigated in the framework of light cone QCD sum rules. The transition amplitude and decay rates are estimated. It is shown that our results on the branching ratios of the D meson decays are in good agreement with the existing experimental data.'
author:
- |
[T.M. ALIEV, D.A. DEMIR, E. ILTAN, and N.K. PAK]{}\
[*[Physics Department, Middle East Technical University]{}*]{}\
[*[Ankara, Turkey]{}*]{}
title:
---
Introduction
============
One of the main goals of the future B-meson and charm-$\tau$ factories is a deeper and more comprehensive investigation of B- and D-meson physics.
The radiative decays constitute an important tool for a comprehensive study of the properties of the new meson states containing a heavy quark. However, in interpreting the experimental results we immediately have to deal with large-distance effects. It is well known that the QCD sum rules method \[1\] takes such large-distance effects into account and is a powerful tool for investigating the properties of hadrons. In the current literature, an alternative to the "classical" QCD sum rules method, namely QCD sum rules based on the light cone expansion, is widely exploited. The main feature of this version is that it is based on the approximate conformal non-perturbative invariance of QCD and, instead of the many process-dependent non-perturbative parameters of the classical QCD sum rules, it involves a new universal non-perturbative object, namely the wave function \[3\]. These sum rules were previously applied successfully to the calculation of the decay amplitude $\Sigma\rightarrow p\gamma$ \[4\]; the nucleon magnetic moment and the $g_{\pi NN}$ and $g_{\rho\omega\pi}$ couplings \[5\]; the form factors of semileptonic and radiative decays \[6-9\]; the $\pi A\gamma^{*}$ form factor \[10\]; the $B\rightarrow \rho\gamma$ and $D\rightarrow \rho\gamma$ decays \[11,12\]; the $B^{*}B\pi$ and $D^{*}D\pi$ coupling constants \[13\]; etc. In this work we study the radiative $B^{*}(D^{*})\rightarrow B(D) \gamma$ decays in the framework of the light cone QCD sum rules. Note that these decays were investigated in Refs. \[14,15\] in the framework of the classical QCD sum rules method.
The paper is organized as follows. In Section 2, we derive the sum rules which describe the $B^{*}(D^{*})\rightarrow B(D)\gamma$ decays in the framework of the light-cone sum rules. In the last section we present the numerical analysis.
The Radiative $B^{*}\rightarrow B\gamma$ decay
==============================================
According to the general strategy of QCD sum rules, we want to obtain the transition amplitude for the $B^{*}\rightarrow B\gamma$ decay by equating the representations of a suitable correlator function in the hadronic and quark-gluon languages. To this end we consider the following correlator $$\begin{aligned}
\Pi_{\mu}(p,q) & = & i\int d^{4}x e^{ipx}
<0|T[\bar{q}(x)\gamma_{\mu} b(x),\bar{b}(0)i\gamma_{5}q(0)]|0>_{F}\end{aligned}$$ in the external electromagnetic field $$\begin{aligned}
F_{\alpha\beta}(x)=i(\epsilon_{\beta}q_{\alpha}-\epsilon_{\alpha}q_{\beta})
e^{iqx}\end{aligned}$$ Here $q$ is the momentum and $\epsilon_{\mu}$ is the polarization vector of the electromagnetic field. The Lorentz decomposition of the correlator is $$\begin{aligned}
\Pi_{\mu}=i\epsilon_{\mu\nu\alpha\beta}p_{\nu}\epsilon_{\alpha}q_{\beta} \Pi\end{aligned}$$ Our main problem is to calculate $\Pi$ in Eq. (3). This problem can be solved in the Euclidean region, where both $p^{2}$ and $p'^{2}=(p+q)^{2}$ are negative and large. In this deep Euclidean region, the photon interacts with the heavy quark perturbatively. The various contributions to the correlator function Eq. (1) are depicted in Fig. 1, where diagrams (a) and (b) represent the perturbative contributions, (c) is the quark condensate contribution, (d) is that of the 5-dimensional operator, (e) is the photon interaction with the soft quark line (i.e., large-distance effects), and (f) is the three-particle higher twist contribution. A part of the calculation of these diagrams was performed in \[14,15\].
First, let us calculate the perturbative contributions, namely diagram (b). After a standard calculation of the bare loop we have $$\begin{aligned}
\Pi_{1} &=& \frac{Q_{q}}{4 \pi^{2}}N_{c}\int_{0}^{1} x dx
\int_{0}^{1} dy \frac{m_{b}\bar{x}+m_{a}x}{m_{a}^{2}x+m_{b}^{2}
\bar{x}-p^{2}x\bar{x}y- p'^{2}x\bar{x}\bar{y}}
\end{aligned}$$ where $N_{c}=3$ is the color factor, $\bar{x}=1-x$, $\bar{y}=1-y$, $p'=p+q$, and $Q_{q}$ and $m_{a}$ are the charge and the mass of the light quark. Note that the contribution of diagram (a) can be obtained by making the following replacements in Eq. (4): $m_{b} \leftrightarrow m_{a}$, $Q_{q}\leftrightarrow Q_{b}$. The next step is to use the exponential representation for the denominator $$\begin{aligned}
\frac{1}{C^{n}}=\frac{1}{(n-1)!}\int_{0}^{\infty} d\alpha \alpha^{n-1}
e^{-\alpha C} \nonumber\end{aligned}$$ Then $$\begin{aligned}
\Pi_{1}=\frac{Q_{q}N_{c}}{4\pi^{2}} \int_{0}^{1} x dx \int_{0}^{1} dy
[m_{b}\bar{x}+m_{a}x] \int_{0}^{\infty} d\alpha\, e^{-\alpha (
m_{a}^{2}x+m_{b}^{2}\bar{x}-p^{2}x\bar{x}y-p'^{2}x\bar{x}\bar{y})}\end{aligned}$$ Application of the double Borel operator $\hat B(M_{1}^{2})
\hat B(M_{2}^{2})$ on $\Pi_{1}$ gives $$\tilde{\Pi}_{1} = \frac{Q_{q}N_{c}}{4\pi^{2}}
\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}+\sigma_{2}}\int_{0}^{1} dx
\frac{1}{\bar{x}}
[m_{b}\bar{x}+m_{a}x] \, exp \{- \frac{m_{a}^{2}x+m_{b}^{2}\bar{x}}
{x\bar{x}}
(\sigma_{1}+\sigma_{2}) \}$$ where $\sigma_{1}= \frac{1}{M_{1}^{2}}$ and $\sigma_{2}=\frac{1}{M_{2}^{2}}$. In deriving Eq. (6) we use the definition $$\begin{aligned}
\hat{B}(M^{2})e^{-\alpha Q^{2}}=\delta (1-\alpha M^{2})\end{aligned}$$ where $Q^{2}=-p^{2}$. Now consider the combination $$\frac{1}{st} \hat{B}(\frac{1}{s},\sigma_{1})\hat{B}(\frac{1}{t},
\sigma_{2})\frac{\tilde {\Pi}_{1}}{\sigma_{1}\sigma_{2}}$$ which just gives the spectral density \[16\]. Using Eqs. (6) and (8), we get for the spectral density $$\begin{aligned}
\rho_{1}(s,t)&=& \frac{Q_{q}N_{c}}{4\pi^{2}}\int_{x_{0}}^{x_{1}} dx
\delta(s-t)\theta(s-(m_{b}+m_{a})^{2})
\theta(t-(m_{b}+m_{a})^{2}) \nonumber \\ &.&\frac{(m_{b}\bar{x}+m_{a}x)}
{\bar{x}}\end{aligned}$$ where the integration region is determined by the inequality $$sx \bar{x}-(m_{b}^{2}\bar{x}+m_{a}^{2}x) \geq 0$$ Carrying out the $x$ integration in Eq. (9), we get $$\begin{aligned}
\rho_{1}^{a}(s,t)&=& \frac{Q_{q}N_{c}}{4\pi^{2}}\delta(s-t)\theta
(s-(m_{b}+m_{a})^{2})
\theta (t-(m_{b}+m_{a})^{2}) \nonumber \\ \{ &-&(m_{b}-m_{a})
\lambda (1,\kappa,l)+m_{b}ln\frac{1+\kappa-l+\lambda(1,\kappa,l)}
{1+\kappa-l-\lambda(1,\kappa,l)} \} \\
\rho_{2}(s,t)&=&\rho_{1}(a\leftrightarrow
b,m_{a}\leftrightarrow m_{b},Q_{a}\leftrightarrow Q_{b})\end{aligned}$$ where $ \kappa=\frac{m_{a}^{2}}{s}, l=\frac{m_{b}^{2}}{s}$ and $$\lambda(1,\kappa,l)=\sqrt{1+\kappa^{2}+l^{2}-2\kappa-2l-2\kappa l}$$ Finally for the perturbative part of the correlator we have $$\begin{aligned}
\Pi^{Per}=\frac{N_{c}}{4\pi^{2}}\int ds \frac{1}{(s-p^{2})(s-p'^{2})}
[(Q_{q}-Q_{b})(1-\frac{m_{b}^{2}}{s})+Q_{b}ln\frac{s}{m_{b}^{2}}]\end{aligned}$$ Here we have neglected the mass of the light quark. Applying the Borel transformation Eq. (7), we get $$\begin{aligned}
\Pi^{Per}=\frac{N_{c}}{M_{1}^{2}M_{2}^{2}4\pi^{2}}\int ds
e^{-s(\frac{1}{M_{1}^{2}}+\frac{1}{M_{2}^{2}})}
[(Q_{q}-Q_{b})(1-\frac{m_{b}^{2}}{s})+Q_{b}ln\frac{s}{m_{b}^{2}}]\end{aligned}$$
After a simple calculation, we obtain for the double Borel transformed quark condensate contribution: $$\begin{aligned}
\Pi^{\bar{q}q}=Q_{b}<\bar{q}q>\frac{1}{M_{1}^{2}M_{2}^{2}}
e^{-m_{b}^{2}(\frac{1}{M_{1}^{2}}+\frac{1}{M_{2}^{2}})}\end{aligned}$$ and the 5-dimensional operator contribution is $$\begin{aligned}
\Pi^{\bar{q}q}&=& Q_{b}<\bar{q}q>\frac{1}{M_{1}^{2}M_{2}^{2}}
e^{-m_{b}^{2}(\frac{1}{M_{1}^{2}}+\frac{1}{M_{2}^{2}})} \nonumber \\
&.& \{ -\frac{m_{0}^{2}m_{b}^{2}}{4}( \frac{1}{M_{1}^{2}}+\frac{1}
{M_{2}^{2}})^{2}+\frac{1}{3}\frac{m_{0}^{2}}{M_{2}^{2}} \}\end{aligned}$$
For the calculation of the diagram corresponding to the propagation of the soft quark in the external electromagnetic field, we use the light cone expansion for non-local operators. After contracting the b-quark line in Eq. (1) we get $$\begin{aligned}
\Pi_{\mu}=i\int d^{4}x \frac{d^{4}k}{(2\pi)^{4}i}
\frac{e^{i(p-k)x}}{m_{b}^{2}-k^{2}}
<0|\bar{q}(x)\gamma_{\mu}(m_{b}+\not\!k)\gamma_{5}q(x)|0>_{F}\end{aligned}$$ Using the identity $\gamma_{\mu}\gamma_{\alpha}\gamma_{5}=g_{\mu\alpha}
\gamma_{5}+i\sigma_{\rho\beta}\epsilon_{\mu\alpha\rho\beta}$ Eq. (18) can be written as $$\begin{aligned}
\Pi_{\mu}=-\epsilon_{\mu\alpha\rho\beta}\int d^{4}x \frac{d^{4}k}
{(2\pi)^{4}i}\frac{e^{i(p-k)x}k_{\alpha}}{m_{b}^{2}-k^{2}}
<0|\bar{q}(x)\sigma_{\rho\beta}q|0>_{F}\end{aligned}$$ The leading twist-two contribution to this matrix element in the presence of the external electromagnetic field is defined as \[4\]: $$\begin{aligned}
<\bar{q}(x)\sigma_{\rho\beta}q>_{F}=Q_{q}<\bar{q}q>\int_{0}^{1}du \phi (u)
F_{\rho\beta}(ux)\end{aligned}$$ Here the function $\phi (u)$ is the photon wave function. Its asymptotic form is well known \[4,17,18\] to be $$\begin{aligned}
\phi (u)=6\chi u(1-u)\end{aligned}$$ where $\chi$ is the magnetic susceptibility.
The most general decomposition of the relevant matrix element, to the twist-4 accuracy, involves two new invariant functions (see for example \[11,12\]): $$\begin{aligned}
<\bar{q}(x)\sigma_{\rho\beta}q>_{F}&=&Q_{q}<\bar{q}q> \{ \int_{0}^{1}du
x^{2}\phi_{1} (u)F_{\rho\beta}(ux) \nonumber \\ &+&
\int_{0}^{1}du
\phi_{2} (u) [ x_{\beta}x_{\eta}F_{\rho\eta}(ux) \nonumber \\
&-& x_{\rho}x_{\eta}
F_{\beta\eta}(ux)-x^{2}F_{\rho\beta}(ux)] \}\end{aligned}$$
In \[11\] it was shown that $$\begin{aligned}
\phi_{1}(u)&=& -\frac{1}{8} (1-u)(3-u) \nonumber \\
\phi_{2}(u)&=& -\frac{1}{4} (1-u)^{2}\end{aligned}$$ So, for twist 2 and 4 contributions we get $$\begin{aligned}
\Pi^{twist 2 +twist 4}&=&Q_{q}<\bar{q}q> \{ \int _{0}^{1}\frac{\phi (u)
du}{m_{b}^{2}-
(p+uq)^{2}}\nonumber \\&-&
4\int _{0}^{1}\frac{(\phi_{1}(u)-\phi_{2}(u)) du}{(m_{b}^{2}-
(p+uq)^{2})^{2}}[1+\frac{2m_{b}^{2}}{m_{b}^{2}-(p+uq)^{2}}] \}\end{aligned}$$ In order to perform the double Borel transformation we rewrite the denominator in the following way: $$\begin{aligned}
m_{b}^{2}-(p+uq)^{2}=m_{b}^{2}-(1-u)p^{2}-(p+q)^{2}u \nonumber\end{aligned}$$ and applying the Wick rotation $$\begin{aligned}
m_{b}^{2}-(p+uq)^{2}\rightarrow m_{b}^{2}+(1-u)p^{2}+(p+q)^{2}u
\nonumber\end{aligned}$$ Using the exponential representation for the denominator we get $$\begin{aligned}
\Pi^{twist 2 +twist 4}&=& Q_{q}< \bar{q}q>e^{-m_{b}^{2}(\frac{1}{M_{1}^{2}}+
\frac{1}{M_{2}^{2}})} [\phi (\frac{M_{1}^{2}}{M_{1}^{2}+M_{2}^{2}})
\frac{1}{M_{1}^{2}+M_{2}^{2}}\nonumber \\&-&
4(\phi_{1}(\frac{M_{1}^{2}}{M_{1}^{2}+M_{2}^{2}})
-\phi_{2}(\frac{M_{1}^{2}}{M_{1}^{2}+M_{2}^{2}}) )
\nonumber \\ &.& (\frac{1}{M_{1}^{2}M_{2}^{2}}+
\frac{m_{b}^{2} (M_{1}^{2}+M_{2}^{2})}{M_{1}^{4}M_{2}^{4}})]\end{aligned}$$ The masses of the $B^{*}(D^{*})$ and $B(D)$ mesons are practically equal, so it is natural to take $M_{1}^{2}=M_{2}^{2}$ and introduce a new Borel parameter $M^{2}$ such that $M_{1}^{2}=M_{2}^{2}=2M^{2}$. In this case the theoretical part of the sum rules becomes $$\begin{aligned}
\Pi^{theor}&=&\frac{3}{4\pi^{2}}\int_{m_{b}^{2}}^{s_{0}} ds
e^{-s\frac{1}{M^{2}}}
[(Q_{q}-Q_{b})(1-\frac{m_{b}^{2}}{s})+Q_{b}ln\frac{s}{m_{b}^{2}}]
\frac{1}{4M^{4}} \nonumber\\
&+& Q_{b}<\bar{q}q>e^{-\frac{m_{b}^{2}}{M^{2}}}[1-
\frac{m_{0}^{2}m_{b}^{2}}{M^{4}}+\frac{m_{0}^{2}}{6M^{2}}]
\frac{1}{4M^{4}} \nonumber\\
&+& Q_{q}< \bar{q}q> (e^{-m_{b}^{2}/M^{2}}-e^{-s_{0}/M^{2}})
[M^{2}\phi (\frac{1}{2}) \nonumber \\
&-& 4(1+\frac{m_{b}^{2}}{M^{2}})
(\phi_{1}(1/2)-\phi_{2}(1/2)) ] \frac{1}{4M^{4}}\end{aligned}$$ In the derivation of Eq. (26), we have subtracted the continuum and higher resonance states from the double spectral density. The details of this procedure are given in \[13\].
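At the symmetric point $u = 1/2$ the wave functions entering Eq. (26) reduce to plain numbers, which are worth evaluating once: $\phi(1/2) = 3\chi/2$, $\phi_1(1/2) = -5/32$, $\phi_2(1/2) = -1/16$, so $\phi_1(1/2)-\phi_2(1/2) = -3/32$. A quick numerical check (writing the magnetic susceptibility as $\chi$):

```python
def phi(u, chi):   # leading-twist photon wave function, Eq. (21)
    return 6 * chi * u * (1 - u)

def phi1(u):       # twist-4 functions, Eq. (23)
    return -0.125 * (1 - u) * (3 - u)

def phi2(u):
    return -0.25 * (1 - u) ** 2

print(phi(0.5, 1.0), phi1(0.5), phi2(0.5), phi1(0.5) - phi2(0.5))
# 1.5 -0.15625 -0.0625 -0.09375
```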
For constructing the sum rules we need the expression for the physical part as well. Saturating Eq. (1) by the lowest lying meson states, we have $$\begin{aligned}
\Pi_{\mu}^{phys}= \frac{<0|\bar{q}\gamma_{\mu}b|B^{*}><B^{*}|B\gamma>
<B|\bar{b}i\gamma_{5}q|0>}{(m^{2}_{B^{*}}-p^{2})(m^{2}_{B}-(p+q)^{2})}\end{aligned}$$ These matrix elements are defined as $$\begin{aligned}
<0|\bar{q}\gamma_{\mu}b|B^{*}> &=& \epsilon_{\mu} f_{B^{*}}m_{B^{*}} \\
<B|\bar{b}i\gamma_{5}q|0>&=& \frac{f_{B}m^{2}_{B}}{m_{b}} \\
<B^{*}|B\gamma>&=& \varepsilon_{\alpha\beta\rho\sigma}
p_{\alpha}q_{\beta}\epsilon_{\rho}^{*} \epsilon^{*(\gamma)}_{\sigma}h/m_{B}\end{aligned}$$ Here $h$ is the dimensionless amplitude for the transition matrix element, $\epsilon_{\mu}$ and $m_{B^{*}}$ are the polarization four-vector and the mass of the vector particle respectively, $f_{B}$ is the leptonic decay constant and $m_{B}$ is the mass of the pseudoscalar particle, and $q_{\beta}$ and $\epsilon^{*(\gamma)}_{\sigma}$ are the photon momentum and polarization vector. Applying the double Borel transformation we get for the physical part of the sum rules $$\begin{aligned}
\Pi^{phys}=f_{B^{*}}m_{B^{*}}f_{B}m_{B}\frac{h}{m_{b}}
\frac{e^{-(m^{2}_{B^{*}}+m^{2}_{B})/2M^{2}}}{4M^{4}}\end{aligned}$$ Note that the contributions of three-particle twist-4 operators are very small \[4\], and thus we neglect them (Fig. 1). From Eqs. (26)-(30) we get the dimensionless coupling constant as $$\begin{aligned}
h&=&\frac{m_{b}}{f_{B^{*}}m_{B^{*}}f_{B}m_{B}}
e^{(m^{2}_{B^{*}}+m^{2}_{B})/2M^{2}}\nonumber \\
&.& \{ \frac{3}{4\pi^{2}}\int_{m_{b}^{2}}^{s_{0}} ds
e^{-s\frac{1}{M^{2}}}
[(Q_{q}-Q_{b})(1-\frac{m_{b}^{2}}{s})+Q_{b}\ln\frac{s}{m_{b}^{2}}]\nonumber\\
&+& <\bar{q}q>e^{-\frac{m_{b}^{2}}{M^{2}}}[Q_{b} (1-
\frac{m_{0}^{2}m_{b}^{2}}{8M^{4}}+\frac{m_{0}^{2}}{6M^{2}})]\nonumber\\
&+& (e^{-m_{b}^{2}/M^{2}}-e^{-s_{0}/M^{2}})
[Q_{q}\phi (\frac{1}{2})M^{2}\nonumber \\
&-&4Q_{q}(1+\frac{m_{b}^{2}}{M^{2}})
(\phi_{1}(1/2)-\phi_{2}(1/2))] \}\end{aligned}$$
Numerical Analysis of the Sum Rules
===================================
The main issue concerning Eq. (32) is the determination of the dimensionless transition amplitude $\it h$. First, we give a summary of the parameters entering the sum rules Eq. (32). The value of the magnetic susceptibility of the medium in the presence of an external field was determined in \[19,20\] $$\begin{aligned}
\chi(\mu^{2}=1GeV^{2}) =-4.4GeV^{-2}\nonumber\end{aligned}$$ If we include the anomalous dimension of the current $\bar{q}\sigma_{\alpha\beta}q$, which is $(-4/27)$ at $\mu=m_{b}$ scale, we get $$\begin{aligned}
\chi (\mu^{2}=m_{b}^{2})=-3.4\,GeV^{-2}\nonumber\end{aligned}$$ and $$\begin{aligned}
<\bar{q}q>=-(0.26\, GeV)^{3}\nonumber\end{aligned}$$ The leptonic decay constants $f_{B(D)}$ and $f_{B^{*}(D^{*})}$ are known from two-point sum rules related to $B(D)$ meson channels: $f_{B(D)}=0.14\, (0.17)\,\, GeV$ \[13,21\], $f_{B^{*}(D^{*})}=0.16\, (0.24)
\,\,GeV$ \[13,22,23,24\], $m_{b}=4.7 \,\,GeV$ , $m_{u}=m_{d}=0$, $m_{0}^{2}=(0.8\pm0.2)\,\,GeV^{2} $, $m_{B^{*}(D^{*})}=5.\,\, (2.007)\,\, GeV$ and $m_{B(D)}=5.\,\, (1.864)\,\, GeV \,$, and for continuum threshold we choose $s_{0}^{B}(s_{0}^{D}) =36\,(6)\,\,GeV^{2}$.
The value $\phi_{1}(u)-\phi_{2}(u)$ is calculated in \[11\]: $$\begin{aligned}
\phi_{1}(u)-\phi_{2}(u)=\frac{-1}{8}(1-u^{2})\nonumber\end{aligned}$$ From the asymptotic form of the photon wave function, given in Eq. (21), we get $$\begin{aligned}
\phi(1/2)=3/2\chi\end{aligned}$$
Having fixed the input parameters, it is necessary to find a range of $M^{2}$ for which the sum rule is reliable. The lowest value of $M^{2}$, according to the QCD sum rules ideology, is determined by the requirement that the power corrections be reasonably small. The upper bound is determined by the condition that the contributions of the continuum and the higher states remain under control.
In Fig. 2 we present the dependence of $\it h$ on $M^{2}$. From this figure it follows that the best stability region for $\it h$ is $6\,\,GeV^{2}\leq M^{2} \leq 12\,\,GeV^{2}$, and thus we obtain $$\begin{aligned}
\it f_{B^{0*}}f_{B^{0}} h&=& -0.1\, \pm 0.02 \nonumber \\
\it f_{B^{+*}}f_{B^{+}} h&=& 0.2 \, \pm 0.02\end{aligned}$$ Note that varying the threshold value from $36\,GeV^{2}$ to $40\,GeV^{2}$ changes the result by a few percent. We see that the signs of the amplitudes for $B^{0}$ and $B^{+}$ are different. This is because the main contributions to the theoretical part of the sum rules come from the bare loop and the quark condensate in the external field (last term in Eq. (32)). In the $B^{0}$ ($B^{+}$) case, both contributions have negative (positive) signs and therefore the sign of $h$ is negative (positive). To get the dimensionless transition amplitude for the decay $D^{*}\rightarrow D\gamma$, it is sufficient to make the following replacements in Eq. (32): $$\begin{aligned}
m_{b}\rightarrow m_{c},\,\, f_{B^{*}(B)}\rightarrow f_{D^{*}(D)},
\,\,\, Q_{b}\rightarrow Q_{c}, \\
\mathrm{and} \,\,\,\,s_{0}^{B}\rightarrow s_{0}^{D} \nonumber\end{aligned}$$ Performing the same calculations we get the best stability region for $\it h$ as $2\,\,GeV^{2}\leq M^{2} \leq 4\,\,GeV^{2}$ (Fig. 3), and we find $$\begin{aligned}
\it f_{D^{0*}}f_{D^{0}} h&=& 0.12\,\pm 0.02 \nonumber \\
\it f_{D^{+*}}f_{D^{+}} h&=& -0.04 \,\pm 0.01\end{aligned}$$ The signs of the transition amplitudes for the $D^{0}$ and $D^{+}$ meson decays are different, as in the B-meson case.
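The sign pattern can be traced to the charge factors in the bare-loop term of Eq. (32). A minimal numerical sketch, keeping only that term (the condensate and photon wave-function contributions are omitted, so this probes only the sign of the perturbative piece; charge assignments $Q_{u}=2/3$, $Q_{d}=Q_{b}=-1/3$ are standard):

```python
import math

def bare_loop(Q_q, Q_b, m2, M2, s0, n=4000):
    """Borel-transformed bare-loop term of Eq. (32):
    (3/4pi^2) * int_{m^2}^{s0} ds e^{-s/M^2} [(Q_q-Q_b)(1-m^2/s) + Q_b ln(s/m^2)],
    evaluated here by the midpoint rule."""
    ds = (s0 - m2) / n
    total = 0.0
    for i in range(n):
        s = m2 + (i + 0.5) * ds
        total += math.exp(-s / M2) * ((Q_q - Q_b) * (1.0 - m2 / s)
                                      + Q_b * math.log(s / m2)) * ds
    return 3.0 / (4.0 * math.pi**2) * total

m_b2, s0, M2 = 4.7**2, 36.0, 9.0   # GeV^2; M^2 in the 6-12 GeV^2 window
loop_Bplus = bare_loop(Q_q=2/3, Q_b=-1/3, m2=m_b2, M2=M2, s0=s0)   # light u quark
loop_Bzero = bare_loop(Q_q=-1/3, Q_b=-1/3, m2=m_b2, M2=M2, s0=s0)  # light d quark
```

For the $B^{+}$ ($q=u$) the integrand is positive throughout the integration region, while for the $B^{0}$ ($q=d$) the $(Q_{q}-Q_{b})$ term vanishes and only the negative $Q_{b}\ln(s/m_{b}^{2})$ piece survives, consistent with the sign pattern quoted above.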
Using the transition amplitude $h$, one can calculate the decay rates for $B^{*}(D^{*})\rightarrow B(D)\gamma$, which can be tested experimentally. For the width of the radiative decay $B^{*}(D^{*})\rightarrow B(D)\gamma$, we get $$\begin{aligned}
\Gamma(B^{0\,*}\rightarrow B^{0}\gamma)&=& 0.28 \,keV \\
\Gamma(B^{+\,*}\rightarrow B^{+}\gamma)&=& 1.20 \, keV\end{aligned}$$ and $$\begin{aligned}
\Gamma(D^{0\,*}\rightarrow D^{0}\gamma)&=& 14.40 \,keV \\
\Gamma(D^{+\,*}\rightarrow D^{+}\gamma)&=& 1.50\, keV\end{aligned}$$
In order to compare the theoretical results with experimental data for D-meson decays, we need the values of the $D^{*}\rightarrow D\pi$ decay widths. We take these values from Ref. \[13\]: $$\begin{aligned}
\Gamma(D^{*\,+}\rightarrow D^{0}\pi^{+})&=& 32 \pm 5 \,\,\,keV \\
\Gamma(D^{*\,+}\rightarrow D^{+}\pi^{0})&=& 15 \pm 2 \,\,\,keV \\
\Gamma(D^{*\,0}\rightarrow D^{0}\pi^{0})&=& 22 \pm 2\,\,\,keV\end{aligned}$$
From Eqs. (39)-(43), for the branching ratios, we obtain: $$\begin{aligned}
BR(D^{0\,*}\rightarrow D^{0}\gamma) &=& 39 \% \nonumber \\
BR(D^{+\,*}\rightarrow D^{+}\gamma) &=& 3 \%\end{aligned}$$ These results are in agreement with the CLEO data \[25\], which are $$\begin{aligned}
BR(D^{0\,*}\rightarrow D^{0}\gamma) &=& (36.4\pm 2.3\pm 3.3) \% \nonumber \\
BR(D^{+\,*}\rightarrow D^{+}\gamma) &=& (1.1\pm 1.4\pm 1.6) \% \nonumber
\end{aligned}$$
We see that our predictions for the branching ratios are in reasonable agreement with the experimental results.
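As an arithmetic cross-check, the branching ratios quoted above follow from the partial widths given in the text via $BR_{\gamma}=\Gamma_{\gamma}/\Gamma_{\mathrm{tot}}$ (a minimal sketch; widths in keV):

```python
# Partial widths in keV, as quoted in the text
G_D0_gamma = 14.40        # D*0 -> D0 gamma
G_D0_pi0   = 22.0         # D*0 -> D0 pi0
G_Dp_gamma = 1.50         # D*+ -> D+ gamma
G_Dp_pi    = 32.0 + 15.0  # D*+ -> D0 pi+  plus  D*+ -> D+ pi0

br_D0 = G_D0_gamma / (G_D0_gamma + G_D0_pi0)   # ~ 39.6%
br_Dp = G_Dp_gamma / (G_Dp_gamma + G_Dp_pi)    # ~ 3.1%
```

Both numbers reproduce the quoted 39% and 3%, and lie within the CLEO intervals above.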
Acknowledgement: A part of this work was performed under the TUBITAK-DOPROG Program. One of the authors (T. M. Aliev) thanks TUBITAK for financial support.
[99]{}
M. A. Shifman, A. I. Vainstein, V. I. Zakharov, Nucl. Phys. B147 (1979) 385.
L. J. Reinders, H. R. Rubinstein, S. Yazaki, Phys. Rep. C127 (1985) 1.
V. L. Chernyak, A. R. Zhitnitsky, Phys. Rep. 112 (1984) 173.
I. I. Balitsky, V. M. Braun, A. V. Kolesnichenko, Nucl. Phys. B312 (1989) 509; V. M. Braun and I. B. Filyanov, Z. Phys. C48 (1990) 239.
V. M. Belyaev, A. Khodjamirian, R. Ruckl, Z. Phys. C60 (1993) 349.
V. L. Chernyak and I. R. Zhitnitsky, Nucl. Phys. B345 (1990) 137.
P. Ball, V. M. Braun and M. G. Dosch, Phys. Rev. D44 (1991) 3567.
A. Ali, V. M. Braun and H. Simma, Z. Phys. C63 (1994) 437.
V. M. Belyaev, Z. Phys. C65 (1995) 93.
A. Ali, V. M. Braun, Phys. Lett. B359 (1995).
A. Khodjamirian, G. Stoll and D. Weyler, Phys. Lett. B358 (1995) 129.
V. M. Belyaev, V. M. Braun, A. Khodjamirian and R. Ruckl, Phys. Rev. D51 (1995) 6177.
T. M. Aliev, E. Iltan and N. K. Pak, Phys. Lett. B334 (1994) 169.
T. M. Aliev, E. Iltan and N. K. Pak (submitted for publication).
V. A. Nesterenko and A. V. Radyushkin, Sov. J. Nucl. Phys. 39 (1984) 811.
I. I. Balitsky, V. M. Braun, Nucl. Phys. B311 (1988) 541.
V. M. Braun, I. E. Filyanov, Z. Phys. C44 (1989) 157.
V. M. Belyaev, Ya. I. Kogan, Sov. J. Nucl. Phys. (1995).
I. I. Balitsky, A. V. Kolesnichenko and A. V. Yung, Sov. J. Nucl. Phys. (1985).
T. M. Aliev, V. L. Eletsky, Sov. J. Nucl. Phys. 38 (1983) 936.
P. Colangelo, G. Nardulli, A. A. Ovchinnikov and N. Paver, Phys. Lett. B269 (1991) 204.
E. Bagan et al., Phys. Lett. B278 (1992) 457.
M. Neubert, Phys. Rev. D45 (1992) 2451.
The CLEO Collaboration (F. Butler et al.), CLNS-92-1193 (1992).
[**Figure 1**]{}:
: Diagrams contributing to the correlation function of Eq. (1). Solid lines represent quarks, wavy lines represent external currents.
[**Figure 2**]{}:
: The dependence of the transition amplitude $h$ on the Borel parameter squared $M^{2}$. The solid line corresponds to the $B^{0}$ and the dashed line to the $B^{+}$ meson case.
[**Figure 3**]{}:
: The same as in Fig. 2, but for the D meson case.
---
abstract: |
The statistical properties of the dissipation process constrain the analysis of large scale numerical simulations of three dimensional incompressible magnetohydrodynamic (MHD) turbulence, such as those of Biskamp and Müller \[[*Phys. Plasmas*]{} **7**, 4889 (2000)\]. The structure functions of the turbulent flow are expected to display statistical self-similarity, but the relatively low Reynolds numbers attainable by direct numerical simulation, combined with the finite size of the system, make this difficult to measure directly. However, it is known that extended self-similarity, which constrains the ratio of scaling exponents of structure functions of different orders, is well satisfied. This implies the extension of physical scaling arguments beyond the inertial range into the dissipation range. The present work focuses on the scaling properties of the dissipation process itself. This provides an important consistency check in that we find that the ratio of dissipation structure function exponents is that predicted by the She and Leveque \[[*Phys. Rev. Lett*]{} **72**, 336 (1994)\] theory proposed by Biskamp and Müller. This supplies further evidence that the cascade mechanism in three dimensional MHD turbulence is non-linear random eddy scrambling, with the level of intermittency determined by dissipation through the formation of current sheets.\
[*Copyright (2005) American Institute of Physics. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the American Institute of Physics.*]{}
author:
- 'J. A. Merrifield'
- 'W.-C. Müller'
- 'S. C. Chapman'
- 'R. O. Dendy'
bibliography:
- 'JMerrifPhysPlas14Sep.bib'
title: 'The scaling properties of dissipation in incompressible isotropic three-dimensional magnetohydrodynamic turbulence'
---
Introduction
============
This paper investigates the previously underexplored topic of scaling in the local rate of dissipation in magnetohydrodynamic (MHD) flows. Turbulent fluids and plasmas display three properties that motivate the development of statistical theories [@frisch]: (i) disorganisation, in the sense that structures arise on all scales; (ii) unpredictability of detailed behaviour, in the sense of inability to predict a signal’s future behaviour from knowledge of its past, implying links with deterministic chaos; and (iii) reproducibility of statistical measures, combined with the presence of statistical self-similarity. Much progress has been made by the heuristic treatment of scaling laws derived from energy cascade arguments, following Kolmogorov and Richardson, see for example Ref.[@frisch]. The basic idea is that energy-carrying structures (eddies) are injected on large scales, non-linear eddy interaction causes energy to cascade to smaller scales in a self-similar manner, and energy is finally dissipated by viscosity on small scales. A quasi-stationary state evolves where the rate of viscous dissipation matches the rate of energy injection. Scaling exponents $\zeta_p$ characterise the resulting statistical self-similarity found in structure functions $S_l^p$: $$\label{str_fnc}
S_l^p = \langle \left(\mathbf{v}(\mathbf{x}+\mathbf{l},t) \cdot \mathbf{l}/l-\mathbf{v}(\mathbf{x},t) \cdot \mathbf{l}/l\right)^p \rangle \sim l^{\zeta_p}$$ Here $\mathbf{v}$ is the fluid velocity, $\mathbf{x}$ is a position vector, $\mathbf{l}$ is a differencing vector, and the average is an ensemble average. The statistical self-similarity represented by the power-law in $l$ is only valid within the inertial range [$l_d \ll l \ll l_0$]{}; here $l_0$ is the characteristic macroscale, and $l_d$ is the dissipation scale at which the cascade terminates. The set of scaling exponents $\zeta_p$ in Eq.(\[str\_fnc\]) is expected to be universal since it characterises the generic cascade process. It is worth noting here that universality can only be expected in the isotropic case. When anisotropies are present, deviation from the isotropic case can be expected, and this will relate to the strength of the anisotropy. In MHD turbulence, anisotropy can be introduced in the form of an imposed magnetic field. The effect of this on the scaling exponents is investigated in Ref.[@MullerBiskampGrappin]. This reference also investigates anisotropy in terms of that introduced by the local magnetic field even when an applied field is absent. This stems from the Goldreich and Sridhar objection to the assumption of local isotropy in the MHD case [@GS3]. In Ref.[@MullerBiskampGrappin] structure functions are calculated with the differencing length perpendicular and parallel to the local magnetic field. The perpendicular structure functions were found to exhibit stronger intermittency than the parallel structure functions. Exponents calculated from the perpendicular structure functions were found to coincide with those calculated from the isotropic structure functions. Essentially dimensional arguments, in which the relevant physical parameters are identified heuristically, have been formulated to provide basic fluid scaling information. 
These arguments linearly relate $\zeta_p$ to $p$, for example the Kolmogorov 1941 phenomenology [@K41; @ref_sym] predicts $\zeta_p = p/3$. As such, basic fluid scaling can be characterised by one number $a$ such that $$S_l^p \sim l^{pa}
\label{basic_scaling}$$ To exploit these concepts, let us write the equations of incompressible MHD in Elsässer symmetric form [@BisBook]: $$\partial_t \mathbf{z}^\pm = -\mathbf{z}^\mp.\mathbf{\nabla z}^\pm - \mathbf{\nabla} \left(p+ B^2/2 \right) + \left(\nu/2 + \eta/2\right) \nabla^2 \mathbf{z}^\pm + \left(\nu/2 -
\eta/2\right) \nabla^2 \mathbf{z}^\mp
\label{Elsasser_sym}$$ $$\mathbf{\nabla}.\mathbf{z}^\pm = 0
\label{sol_cond}$$ Here the Elsässer field variables are $\mathbf{z}^\pm = \mathbf{v} \pm \mathbf{B} \left( \mu_0\rho\right)^{-\frac{1}{2}}$, where $p$ is the scalar pressure, $\nu$ is kinematic viscosity, $\eta$ is magnetic diffusivity and $\rho$ is fluid density. The symmetry of Eq.(\[Elsasser\_sym\]) suggests that statistical treatment of $\mathbf{z}^{\pm}$ may be more fundamental than separate treatments of $\mathbf{v}$ and $\mathbf{B}$. In light of this, longitudinal structure functions are constructed in terms of Elsässer field variables hereafter: $$\label{Elsasser_strfnc}
S_l^{p(\pm)} = \langle |\left(\mathbf{z}^{(\pm)}(\mathbf{x}+\mathbf{l},t) \cdot \mathbf{l}/l -\mathbf{z}^{(\pm)}(\mathbf{x},t) \cdot \mathbf{l}/l \right)|^p \rangle\sim l^{\zeta^{(\pm)}_p}$$ As mentioned above, heuristic arguments that make predictions about basic fluid scaling only linearly relate $\zeta_p$ to $p$. In reality $\zeta_p$ depends nonlinearly on $p$ due to the intermittent spatial distribution of eddy activity. Basic fluid scaling can be modified to take this into account by the application of an intermittency correction. A commonly applied class of intermittency correction describes statistical self-similarity in the local rate of dissipation $\epsilon_l$ by means of scaling exponents $\tau_p$: $$\label{eps_strfnc}
\langle \epsilon_l^p \rangle \equiv \langle \left(\frac{\nu}{4\pi l^3}\int_0^l\frac{1}{2}\left(\partial_iv_j(x+l',t)+\partial_jv_i(x+l',t)\right)^2dl'^3\right)^p\rangle \sim
l^{\tau_p}$$ For a review of the fractal nature of the local rate of dissipation for hydrodynamics, see for example Ref.[@Men].\
\
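As an aside on methodology, the extraction of a scaling exponent of the type in Eq.(\[str\_fnc\]) can be sketched for a synthetic one-dimensional signal with a Kolmogorov-like $k^{-5/3}$ spectrum (random Fourier phases, so the toy signal is non-intermittent and $\zeta_2$ should come out near $2/3$; this is an illustrative stand-in, not the simulation data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D signal: random phases with |v_k| ~ k^(-5/6), i.e. an
# energy spectrum E(k) ~ k^(-5/3), so that S_l^2 ~ l^(2/3).
N = 2**14
k = np.arange(1, N // 2, dtype=float)
spec = np.zeros(N // 2 + 1, dtype=complex)
spec[1:N // 2] = k**(-5.0 / 6.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
v = np.fft.irfft(spec, n=N)

def structure_function(v, p, lags):
    """S_l^p = <|v(x+l) - v(x)|^p>, averaged over x, for each lag l."""
    return np.array([np.mean(np.abs(v[l:] - v[:-l])**p) for l in lags])

lags = np.unique(np.logspace(0.5, 2.5, 15).astype(int))
S2 = structure_function(v, 2, lags)
zeta2 = np.polyfit(np.log(lags), np.log(S2), 1)[0]   # expect close to 2/3
```

The same log-log regression applied to the Elsässer-field increments of Eq.(\[Elsasser\_strfnc\]) is what a direct (non-ESS) exponent measurement would look like.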
As we shall see, the intermittent nature of the system is captured by the nonlinear dependence of $\tau_p$ on $p$ in Eq.(\[eps\_strfnc\]). This nonlinearity can conveniently be expressed multiplicatively in relation to the basic linear fluid scaling of Eq.(\[basic\_scaling\]). Specifically we may write $$S_l^p \sim \langle \epsilon_l^{gp} \rangle l^{a p}
\label{gen_ref_sym}$$ where $g$ is a constant whose value is inferred from model assumptions such as those of Kolmogorov (K41) and Iroshnikov-Kraichnan (IK) [@Irosh; @Kraich]. This is Kolmogorov’s refined similarity hypothesis [@ref_sym]. The scaling exponents $\zeta_p$ in Eq.(\[Elsasser\_strfnc\]) are inferred by Eq.(\[gen\_ref\_sym\]) to be $\zeta_p = \tau_{pg} + pa$. That is, the intermittency in the velocity field structure functions is obtained via reasoning concerning the local rate of dissipation. One model that uses this hypothesis, and has proven successful in predicting the scaling exponents for hydrodynamic turbulence, is that from the 1994 paper of She and Leveque (SL) [@SL94]. Here physical assumptions are made regarding the scaling of the local rate of dissipation. Specifically: the hierarchical nature of Eq.(\[eps\_strfnc\]) above, as expressed in Eq.(6) of Ref.[@SL94]; the rate of dissipation by the most intensely dissipating structures is related only to the eddy turnover time as determined by the basic fluid scaling, as in Eq.(4) of Ref.[@SL94]; and the space filling nature of the most intensely dissipating structures can be described by one parameter (their Hausdorff dimension). These three assumptions can be combined to formulate a second order difference equation for the scaling exponents $\tau_p$ that has one non-trivial solution, as in Eq.(9) of Ref.[@SL94]. This solution can be formulated in terms of the two parameters: the co-dimension of the most intensely dissipating structures, $C = D - d_H$, where $D$ is the embedding dimension and $d_H$ is the Hausdorff dimension; and the basic fluid scaling number expressed by $a$ in Eq.(\[basic\_scaling\]) above. Following Ref.[@SL94], we may write $$\tau_p = -(1-a) p + C - C(1-(1-a)/C)^p
\label{gen_tau_p}$$ This two parameter formulation follows that previously noted by Dubrulle [@Dubrulle], whose parameters $\Delta$ and $\beta$ correspond to our $(1-a)$ and $1-\Delta/C$ respectively. The refined similarity hypothesis, as expressed in Eq.(\[gen\_ref\_sym\]), is then invoked to obtain the following expression for the structure function scaling exponents $\zeta_p$: $$\zeta_p = pa -(1-a)pg + C - C(1-(1-a)/C)^{pg}
\label{gen_zeta_p}$$ Previously Elsässer field structure functions have been identified with an SL model of the type Eq.(\[gen\_zeta\_p\]), see, for example Refs.[@BiskMullPoP; @Axel]. In the present paper the refined similarity hypothesis for MHD is tested by applying a modified form of Eq.(\[gen\_ref\_sym\]), see Eq.(\[mod\_k\_ref\]), to the simulation data of Biskamp and Müller. This provides an important consistency check for previous studies. Equation (\[gen\_tau\_p\]), which probes the multifractal properties of the local rate of dissipation, but does not rely on the refined similarity hypothesis, can also be tested directly against the simulation results, as we discuss below.\
\
Direct numerical simulations must resolve the dissipation scale $l_d$ so that energy does not accumulate at large wavenumbers, artificially stunting the cascade. Most of the numerical resolution is therefore used on the dissipation range, whereas it is only on scales much larger than $l_d$ that dissipative effects are negligible, and scaling laws of the type discussed arise. Thus high Reynolds number simulations with an extensive inertial range are currently unavailable. However, the principle of extended self-similarity (ESS) [@Benz_ESS] can be used to extend the inertial range scaling laws into the range of length scales that is significantly affected by dissipation but still larger than $l_d$. Instead of considering the scaling of individual structure functions, the principle of ESS involves investigating the scaling of one order structure function against another, on the assumption that $$\label{ess}
S_l^{p(\pm)} \sim \left(S_l^{q(\pm)}\right)^{ \left( \zeta_p/\zeta_q \right)}$$ Here it can be seen that any set of structure functions will satisfy this relation providing $$S_l^p \sim G(l)^{\zeta_p}
\label{ess_recast}$$ where $G(l)$ can be any function of $l$ which is independent of $p$. Here we adopt the notation that the authors of Ref.[@Emily] used to describe the general properties of generalised extended self-similarity, as expressed in Eq.(8) of Ref.[@Emily] – though generalised ESS is not discussed in the present paper. When the objective of ESS is to extend scaling into the dissipation range, $G(l)$ can be rewritten as $l^{G'(l)}$, where $G'(l)$ is introduced to accommodate the non-constant flux of energy through length scales in the dissipation range. As such, $G'(l)$ asymptotically approaches one as the length scale increases from the dissipative to the inertial range.\
\
The She-Leveque model, as it has appeared so far, is only valid in the inertial range. Let us now discuss how this model can be interpreted in the framework of ESS. This problem has been tackled for hydrodynamic turbulence by Dubrulle [@Dubrulle] for example. In that paper the explicit inclusion of $l$ in the refined similarity hypothesis \[Eq.(\[gen\_ref\_sym\]) with $g=a=1/3$ for hydrodynamic turbulence\] is replaced by a generalised length scale, which is cast in terms of the third order structure function as expressed in Eq.(12) of Ref.[@Dubrulle]. This problem was addressed similarly by Benzi [*et al.*]{} where the scaling relation $$\label{mod_k_ref}
S_l^p \sim \langle \epsilon^{gp}_l \rangle\left( S_l^{1/a} \right)^{pa}$$ is explicitly formulated in Ref.[@Benz_ESS2]. The appropriate relation between $\zeta_p$ and $\tau_p$ is now $\zeta_{p} = \tau_{pg} + pa\zeta_{1/a}$. Using this relation combined with Eq.(\[ess\_recast\]) and Eq.(\[mod\_k\_ref\]) it can be seen that $\langle \epsilon_l^p \rangle$ must also have the form $\langle \epsilon_l^p \rangle \sim G(l)^{\tau_p}$. This implies ESS exists also in the local rate of dissipation, such that $$\label{ess_epsilon}
\langle \epsilon_l^p \rangle \sim \langle \epsilon_l^q \rangle ^{\left( \tau_p / \tau_q \right)}$$ It can then be seen that if a She-Leveque model of the general type Eq.(\[gen\_zeta\_p\]) is used to explain scaling exponents obtained via ESS, as expressed in Eq.(\[ess\]), then two consistency checks are appropriate. First Kolmogorov’s refined similarity hypothesis should be satisfied in the form Eq.(\[mod\_k\_ref\]), and second ESS should exist in the local rate of dissipation as in Eq.(\[ess\_epsilon\]).\
\
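The ESS property of Eq.(\[ess\_epsilon\]) can be illustrated with a deterministic binomial cascade (the p-model), an idealised stand-in for an intermittent dissipation field for which the coarse-grained moments obey exact power laws, so the exponent ratio returned by a log-log regression can be checked analytically (the cascade weight $w=0.7$ is an arbitrary illustrative choice, not fitted to the simulation):

```python
import numpy as np

# Deterministic binomial cascade: each cell splits into two children with
# densities multiplied by 2w and 2(1-w), so the mean stays exactly 1.
w = 0.7
levels = 14
eps = np.ones(1)
for _ in range(levels):
    child = np.empty(eps.size * 2)
    child[0::2] = 2 * w * eps
    child[1::2] = 2 * (1 - w) * eps
    eps = child

def coarse_moment(eps, p, l):
    """<eps_l^p>: box-average eps over boxes of size l, then take the p-th moment."""
    nbox = eps.size // l
    return np.mean(eps[:nbox * l].reshape(nbox, l).mean(axis=1)**p)

ls = [2**j for j in range(0, 10)]
m3 = np.array([coarse_moment(eps, 3, l) for l in ls])
m5 = np.array([coarse_moment(eps, 5, l) for l in ls])
ratio = np.polyfit(np.log(m3), np.log(m5), 1)[0]   # estimate of tau_5/tau_3
```

For this cascade $\langle\epsilon_l^p\rangle = [((2w)^p+(2-2w)^p)/2]^{\,\log_2(L/l)}$ exactly, so the regression slope reproduces $\tau_5/\tau_3$ analytically, which is the essence of the ESS consistency check applied to the simulation data below.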
The present paper performs these checks for consistency for the simulation of Biskamp and Müller [@BiskMullPoP]. Here the scaling exponents $\zeta_p^{(\pm)}$ \[see Eq.(\[Elsasser\_strfnc\])\] were investigated via direct numerical simulation of the three dimensional (3D) incompressible MHD equations, with a spatial grid of $512^3$ points [@BiskMullPoP; @BiskMullPRL]. The simulation is of decaying turbulence with initially equal magnetic and kinetic energy densities and $\nu$ = $\eta$. A fuller discussion of the numerical procedure is presented in the next section. Since the turbulence decays with time, structure functions are normalised by the total energy in the simulation (kinetic plus magnetic) before time averaging takes place. Biskamp and Müller [@BiskMullPoP] extracted the ratios of scaling exponents $\zeta_p / \zeta_3$ by ESS and directly determined $\zeta_3 \simeq 1$. These exponents were found to match a variant of the She-Leveque 1994 model Eq.(\[gen\_zeta\_p\]), inserting Kolmogorov basic fluid scaling ($g = a = 1/3$) with the most intensely dissipating structures being sheet-like ($C = 1$). Early investigations of this type assumed Iroshnikov-Kraichnan fluid scaling where the most intensely dissipating structures are sheet-like [@pol_pouq; @grauer_krug:mhdsl] (see Ref. [@pol_pouq] for a derivation of $\zeta_p^{(\pm)}$ for this case). Sheet-like intensely dissipating structures are thought to exist in MHD turbulence because of the propensity of current sheets to form. We refer to Fig.5 of Ref.[@BiskMullPoP] for isosurfaces of current density squared, and to Fig.\[isozp\] for isosurfaces constructed from the shear in the $\mathbf{z}^{(+)}$ field $\left( \partial_i z^{(+)}_i \right)^2$. Both figures show the existence of two dimensional coherent structures; Fig.\[isozp\] is more directly related to the analyses presented in the present paper. 
Basic Kolmogorov fluid scaling for Alfvénic fluctuations has been verified for low Mach number ($\simeq 0.1$) compressible [@Axel; @Boldyrev] and incompressible [@BiskMullPoP; @BiskMullPRL] 3D MHD turbulence by power spectrum analysis, and by checking for scaling in the third order structure function such that $\zeta_3 = 1$. Extended self-similarity has also been utilised to extract ratios of scaling exponents related to an inverse cascade in laboratory plasmas [@Antar]. In other work, a generalised version of this SL model has been applied to compressible flows where $C$ is allowed to vary as a fitting parameter [@Axel; @Padoan], and in the case of Ref.[@Padoan] this dimension is interpreted as a function of the sonic Mach number. Figure \[zp5\_v\_zp3\] shows an example of the normalisation and ESS procedure for $\mathbf{z}^{(+)}$ structure functions from the data analysed here.
![Extended self-similarity for the Elsässer field variable $\mathbf{z}^{(+)}$ (order five against order three), compare Eq.(\[ess\]), for decaying MHD turbulence where structure functions are normalised by the total energy before time averaging. This normalisation reveals the same underlying scaling for points from different simulation times, as shown. After Biskamp and Müller [@BiskMullPoP].[]{data-label="zp5_v_zp3"}](Merrif_1.eps){width="50.00000%"}
![(Color online) Isosurfaces of sheet-like (2D) coherent structures of the squared gradient of the $\mathbf{z}^{(+)}$ Elsässer field variable from the 3D MHD turbulence simulation of Biskamp and Müller.[]{data-label="isozp"}](Merrif_2.eps){width="48.00000%"}
\
\
The philosophy behind our investigation can now be summarised as follows. Given a simulation, the set of structure functions $S_l^p$ can be calculated. These are expected to display statistical self-similarity as expressed in Eq.(\[Elsasser\_strfnc\]), where the scaling exponents $\zeta_p$ give insight into the physics of the cascade process. The relatively low Reynolds numbers attainable by direct numerical simulation, combined with the finite size of the system, make this statistical self-similarity difficult to measure directly. However, it is found that extended self-similarity of the type expressed in Eq.(\[ess\]) is well satisfied, allowing the ratio of scaling exponents $\zeta_p/\zeta_3$ to be directly measured. There is a range of [*a priori*]{} views concerning these ratios, reflecting physical model assumptions. The ratios of scaling exponents recovered from ESS analysis of the 3D MHD simulation data are compared with these models, and the best fit is identified. Our investigation thus assists in validating the physical assumptions made in formulating the currently favoured model, namely Eq.(\[gen\_zeta\_p\]) with $g=a=1/3$ and $C=1$, giving $\zeta_p=p/9+1-(1/3)^{p/3}$. In particular, we confirm the existence of a specific type of extended self-similarity in the local rate of dissipation, with exponents given by Eq.(\[gen\_tau\_p\]) with $a=1/3$ and $C=1$, giving $\tau_p = -2p/3 + 1 - (1/3)^p$. We also show that Kolmogorov’s refined similarity hypothesis, in the form Eq.(\[mod\_k\_ref\]), is satisfied.
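The model curves just quoted can be transcribed and self-checked in a few lines (a direct restatement of the formulas in the text; the final assertion in the tests is the refined-similarity relation $\zeta_p = \tau_{p/3} + p/3$ that links the two sets of exponents when $\zeta_3 = 1$):

```python
def zeta_SL_MHD(p):
    """Structure function exponents: g = a = 1/3, sheet-like structures (C = 1)."""
    return p / 9 + 1 - (1 / 3)**(p / 3)

def tau_SL_MHD(p):
    """Dissipation exponents: a = 1/3, C = 1."""
    return -2 * p / 3 + 1 - (1 / 3)**p

def zeta_SL_hydro(p):
    """Original hydrodynamic She-Leveque result (a = 1/3, C = 2)."""
    return p / 9 + 2 * (1 - (2 / 3)**(p / 3))

# Built-in consistency: zeta_3 = 1 (third-order law) and tau_1 = 0
# (the mean rate of dissipation carries no l dependence).
```

Both families reduce to linear (non-intermittent) scaling as $C\to\infty$, which makes the nonlinear $p$ dependence the signature of intermittency.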
Numerical Procedures
====================
The data analysed here stems from a direct numerical simulation of decaying isotropic turbulence (see Ref.[@BiskMullPoP] for additional details). The equations of incompressible MHD are written in terms of the vorticity, $\bf{\omega}=\mathbf{\nabla}\times\mathbf{v}$, in order to eliminate the pressure variable. These are solved by a pseudospectral scheme (see, for example, Ref.[@canuto:book]). Time integration is carried out by applying the trapezoidal leapfrog method [@kurihara:trapezleapfrog]. The aliasing error associated with this approach [@orszag:aliasing] is reduced to discretisation error level by spherical truncation of the Fourier grid [@vincent_meneguzzi:simul].\
\
The simulation domain comprises a periodic box in Fourier space with $512^3$ points. Initially the fields have random phases and amplitudes $\sim\exp(-k^2/(2k_0^2))$ with $k_0=4$. The ratio of total kinetic and magnetic energy of the fluctuations is set to one. Cross helicity, which is proportional to $\int_V \left( \left(\mathbf{z}^+\right)^2-\left(\mathbf{z}^-\right)^2 \right) dV$, is absent throughout the duration of the simulation. The magnetic helicity, $H^\mathbf{M}=\frac{1}{2}\int_V \mathbf{A}\cdot\mathbf{B} dV$, where $\mathbf{B}=\nabla\times\mathbf{A}$, is set to $0.7 H^\mathrm{M}_{\max}$. Here $H^\mathbf{M}_{\max}\simeq E^M/k_0$ where $E^M$ is the energy in the magnetic field. The diffusivities $\nu=\eta=4\times10^{-4}$ imply a magnetic Prandtl number $Pr_m=\nu/\eta$ of one.\
\
The run was performed over ten eddy turnover times, defined as the time required to reach maximum dissipation when starting from smooth initial fields. Structure functions and moments of dissipation are calculated in time intervals of $0.5$ between $t=3.5$ and $t=9.5$, during which the turbulence undergoes self-similar decay.
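A minimal sketch of such an initial condition for a single scalar component, at reduced resolution (illustrative only: the reality condition is imposed crudely by taking the real part, and the solenoidal projection of Eq.(\[sol\_cond\]) is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k0 = 64, 4.0            # reduced grid for illustration (512^3 and k0 = 4 in the paper)
k1d = np.fft.fftfreq(n) * n
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K = np.sqrt(KX**2 + KY**2 + KZ**2)

amp = np.exp(-K**2 / (2 * k0**2))          # amplitudes ~ exp(-k^2 / (2 k0^2))
phase = rng.uniform(0.0, 2 * np.pi, K.shape)
field = np.fft.ifftn(amp * np.exp(1j * phase)).real   # crude reality projection
```

The resulting field is smooth and concentrated at large scales, so the cascade has to build up the small scales before the dissipation maximum is reached.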
Results
=======
In the present paper, the gradient squared measure $(\partial_iz_i^{(\pm)})^2$ is used as a proxy [@ref_sym] for the local rate of dissipation $\left(\partial_i{B}_j-\partial_j{B}_i\right)^2\eta/2+\left(\partial_i{v}_j+\partial_j{v}_i\right)^2\nu/2$. This proxy has recently been employed to study turbulence in the thermal ion velocity of the solar wind as measured by the ACE spacecraft [@Bersh], giving results consistent with those presented below. This is particularly interesting insofar as the solar wind remains one of the few accessible systems of quasistationary fully developed MHD turbulence [@BisBook], although we note that MHD intermittency studies have also been performed on the reversed field pinch experiment RFX [@RFXCarbone]. Figure \[isozp\] shows isosurface plots of the gradient squared measure for the simulation of Biskamp and Müller. Two dimensional coherent structures dominate the image, suggesting the dimension parameter entering an SL model should equal two, as in the model employed by Biskamp and Müller. Following Eq.(\[eps\_strfnc\]), statistical self-similarity in the dissipation measure is expressed as $$\label{grad_square}
\chi_l^{p(\pm)} \equiv \langle \left( \frac{1}{l} \int_0^l \left( \partial_i{z}^{(\pm)}_i({x}+{l}',t) \right)^2dl' \right)^p \rangle \sim l^{\tau_p^{(\pm)}}$$ This proxy, which involves a one dimensional integration rather than the full 3D integration of Eq.(\[eps\_strfnc\]), facilitates comparison with related experimental MHD [@Bersh; @Antar] and hydrodynamic [@Benz_ESS2; @Men; @Chavarria] studies, and also offers benefits in computation time.\
\
The SL model adopted by Biskamp and Müller predicts $$\tau_p^{(\pm)} = - 2p/3 + 1 - \left(1/3\right)^p
\label{SL_MHD}$$ This is simply Eq.(\[gen\_tau\_p\]) with $a = 1/3$ and $C = 1$. Gradients are calculated from the data of Biskamp and Müller [@BiskMullPoP] using a high order finite difference scheme, and the integral is performed by the trapezium method. Normalisation by the spatial average of viscous plus Ohmic rates of dissipation allows time averaging to be performed. Figure \[chi5\_v\_chi3\] shows an example of the ESS and normalisation procedure for $\chi_l^{p(+)}$ order $p=5$ against order $p=3$. Statistical self-similarity is recovered with roll-off from power law scaling as $l$ approaches the system size. This roll-off behaviour at large $l$ may be due to the finite size of the system, since a more extensive part of the simulation domain is encompassed by the spatial average (the integral over $dl'$) as $l$ increases in Eq.(\[grad\_square\]). In Fig.\[chi5\_v\_chi3\] points identified with this roll-off are removed, and the ratio of scaling exponents $(\tau_p/\tau_3)$ is calculated from the remaining points by linear regression. These ratios are shown in Fig.\[taup\_v\_p\]. No significant difference between the scaling recovered from $\mathbf{z}^{(+)}$ and $\mathbf{z}^{(-)}$ can be seen. This should be expected since no theoretical distinction needs to be drawn between $\mathbf{z}^{(+)}$ and $\mathbf{z}^{(-)}$ for the vanishing values of cross helicity present in this simulation. The solid line in Fig.\[taup\_v\_p\] shows the ratio predicted by Eq.(\[SL\_MHD\]), in contrast to the dashed line which shows the ratio predicted by the SL theory for hydrodynamic turbulence [@SL94]. Caution must be taken when calculating high order moments, since these are strongly influenced by the tails of their associated distributions. This can easily lead to bad counting statistics. 
The order $p$ is only taken up to $p=6.5$ for the dissipation measure (instead of $p=8$ as for the Elsässer field structure functions [@BiskMullPoP]) because of the extremely intermittent nature of the signal; large values affect the average \[the angular brackets in Eq.(\[grad\_square\])\] more as the order $p$ increases. This effect is evaluated using a similar methodology to that in Ref.[@NPG]. If a worst case scenario is imagined, where the average is strongly affected by one point in the signal, one would expect $l / \delta l$ members of the spatial average in Eq.(\[grad\_square\]) to be directly affected by this point, where $\delta l$ is the grid spacing. We can then define an event as incorporating $l / \delta l$ members of the spatial average. It is found that $\simeq5$ percent of the average is defined by only $\simeq10$ events for order $p=6.5$. This situation is of course worse for higher values of $p$.
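As an illustration of this construction, the gradient squared proxy of Eq.(\[grad\_square\]) and its ESS regression can be sketched in a few lines on a synthetic one-dimensional signal. This is illustrative only (it is not the analysis code used on the Biskamp and Müller data, and a synthetic random-walk signal will not reproduce the MHD exponent ratios); all names are invented here.

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.cumsum(rng.standard_normal(4096))   # synthetic rough 1D signal
grad_sq = np.gradient(z) ** 2
grad_sq /= grad_sq.mean()                  # normalise by mean dissipation rate

def chi(p, l):
    """<[(1/l) * running mean of grad_sq over l points]^p>, periodic in x."""
    ext = np.concatenate([grad_sq, grad_sq[:l]])       # periodic extension
    csum = np.concatenate([[0.0], np.cumsum(ext)])
    local = (csum[l:l + grad_sq.size] - csum[:grad_sq.size]) / l
    return (local ** p).mean()

# ESS: regress log chi_l^5 against log chi_l^3 to estimate tau_5/tau_3,
# keeping only scales well below the system size (roll-off region excluded)
ls = np.unique(np.logspace(0.3, 2.0, 12).astype(int))
slope = np.polyfit(np.log([chi(3, l) for l in ls]),
                   np.log([chi(5, l) for l in ls]), 1)[0]
```

Because the local averages are normalised to unit mean, `chi(1, l)` is identically one for any window length, which provides a simple sanity check on the windowing.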
![Extended self-similarity in the Elsässer field variable $\mathbf{z}^{(+)}$ gradient squared proxy for the local rate of dissipation (order five against order three), compare Eq.(\[ess\_epsilon\]) with the gradient squared proxy from Eq.(\[grad\_square\]) replacing $\epsilon_l^p$. Normalisation by the space averaged local rate of viscous and Ohmic dissipation allows time averaging in spite of the decay process. Deviation from power law scaling at large $l$ is probably a finite size effect. The solid line is the best fit in the linear region.[]{data-label="chi5_v_chi3"}](Merrif_3.eps){width="50.00000%"}
![Ratio of scaling exponents (order p over order three) obtained via extended self-similarity from the Elsässer field gradient squared proxy for the local rate of dissipation. Errors in these measurements lie within the marker symbols. Solid line shows ratios predicted by a She-Leveque theory based on Kolmogorov fluid scaling and sheet-like most intensely dissipating structures, Eq.(\[SL\_MHD\]). The dashed line shows ratios predicted by hydrodynamic She-Leveque [@SL94].[]{data-label="taup_v_p"}](Merrif_4.eps){width="50.00000%"}
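The two model curves in Fig.\[taup\_v\_p\] follow directly from the closed-form exponents. A minimal sketch of both predictions is given below; the MHD form is Eq.(\[SL\_MHD\]) above ($a = 1/3$, $C = 1$), while the hydrodynamic She-Leveque parameters for the dissipation measure ($a = 2/3$, $C = 2$) are stated here as an assumption.

```python
def tau_mhd(p):
    # Eq. (SL_MHD): sheet-like most intensely dissipating structures
    return -2.0 * p / 3.0 + 1.0 - (1.0 / 3.0) ** p

def tau_hydro(p):
    # assumed hydrodynamic She-Leveque 1994 form for the dissipation
    # measure: filamentary structures, a = 2/3, C = 2
    return -2.0 * p / 3.0 + 2.0 - 2.0 * (2.0 / 3.0) ** p

def ratio(tau, p):
    # ordinate of the figure: exponent of order p over order three
    return tau(p) / tau(3.0)
```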
\
\
Plots were constructed in order to test Eq.(\[mod\_k\_ref\]). This involves taking the product of structure functions of the field variables and the dissipative quantities, in contrast to Figs.\[zp5\_v\_zp3\] and \[chi5\_v\_chi3\]. Figures \[cross\_plot\_n1p5\] and \[cross\_plot\_n2\] show these plots for $n=1.5$ and $n=2$ respectively. The low order measure in Fig.\[cross\_plot\_n1p5\] shows a relation that is nearly linear, with a gradient close to the ideal value of one, see Eq.(\[mod\_k\_ref\]). This is encouraging considering the deviation expected at the smallest and largest scales due to finite size effects. However, unlike the case in Fig.\[chi5\_v\_chi3\], there may be some curvature across the range of the plot. The higher order measure in Fig.\[cross\_plot\_n2\] deviates from a linear scaling relation. We note that taking this test to high order involves the product of two quantities that have challenging counting statistics, plotted against a high order structure function. The deviation of the gradient seen in Fig.\[cross\_plot\_n2\] from the ideal value of one is perhaps not surprising, because the constraints described above become stronger at high order.
![Plot to test Kolmogorov’s refined similarity hypothesis as applied to extended self-similarity, Eq.(\[mod\_k\_ref\]). This involves taking the product of the field variable and dissipative structure functions, in contrast to Figs.\[zp5\_v\_zp3\] and \[chi5\_v\_chi3\]. Agreement with the hypothesis would give a straight line with unit gradient. Normalisation was performed as in Figs.\[zp5\_v\_zp3\] and \[chi5\_v\_chi3\] to allow time averaging despite the decay process.[]{data-label="cross_plot_n1p5"}](Merrif_5.eps){width="50.00000%"}
![High order test of Kolmogorov’s refined similarity hypothesis as applied to extended self-similarity, Eq.(\[mod\_k\_ref\]). Normalisation is performed as in Fig.\[cross\_plot\_n1p5\].[]{data-label="cross_plot_n2"}](Merrif_6.eps){width="50.00000%"}
Conclusions
===========
Extended self-similarity is recovered in the gradient squared proxy for the local rate of dissipation of the Elsässer field variables $\mathbf{z}^{(\pm)}$ computed by Biskamp and Müller. We believe this is the first time this has been shown for MHD flows. This result supports the application to Elsässer field scaling exponents $\zeta_p^{(\pm)}$ of turbulence theories that require statistical self-similarity in the local rate of dissipation, even when $\zeta_p^{(\pm)}$ are extracted from relatively low Reynolds number flows via ESS. Furthermore the ratio of exponents recovered is that predicted by the SL theory proposed by Biskamp and Müller [@BiskMullPoP]. This supplies further evidence that the cascade mechanism in three dimensional MHD turbulence is non-linear random eddy scrambling, with the level of intermittency determined by dissipation through the formation of two dimensional coherent structures. However, Kolmogorov’s ESS modified refined similarity hypothesis remains to be verified at high order.\
We are grateful to Tony Arber for helpful discussions. This research was supported in part by the United Kingdom Engineering and Physical Sciences Research Council. SCC acknowledges a Radcliffe fellowship.
---
abstract: |
The evidence for positive cosmological constant $\Lambda$ from Type Ia supernovae is reexamined.
Both high redshift supernova teams are found to underestimate the effects of host galaxy extinction. The evidence for an absolute magnitude-decay time relation is much weakened if supernovae not observed before maximum light are excluded. Inclusion of such objects artificially suppresses the scatter about the mean relation.
With a consistent treatment of host galaxy extinction and elimination of supernovae not observed before maximum, the evidence for a positive $\Lambda$ is not very significant (3-4 $\sigma$). A factor which may contribute to the apparent faintness of high z supernovae is evolution of the host galaxy extinction with z.
The Hubble diagram using all high z distance estimates, including SZ clusters and gravitational lens time-delay estimates, does not appear inconsistent with an $\Omega_o$ = 1 model.
Although a positive $\Lambda$ can provide an, albeit physically unmotivated, resolution of the low curvature implied by CMB experiments and evidence that $\Omega_o <$ 1 from large-scale structure, the direct evidence from Type Ia supernovae seems at present to be inconclusive.
author:
- |
Michael Rowan-Robinson\
$^1$Astrophysics Group, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2BW
title: 'Do Type Ia Supernovae prove $\Lambda$ $>$ 0 ?'
---
infrared: cosmology: observations
Introduction
============
The claim that the measured brightnesses of Type Ia supernovae at redshifts 0.1 - 1.0 imply $\Lambda > 0$ (Schmidt et al 1998, Garnavich et al 1998, Riess et al 1998, Perlmutter et al 1999, Filippenko and Riess 2000, Riess et al 2001, Turner and Riess 2001) has had a dramatic effect on cosmology. The model with $\lambda_o$ = 0.7, where $\lambda_o = \Lambda/3 H_o^2$, and $\Omega_o$ = 0.3 has become a consensus model, believed to be consistent with most evidence from large-scale structure and CMB fluctuations.
In this paper I test the strength of the evidence that $\Lambda > 0$ and show that there are inconsistencies in the way the supernovae data have been analyzed. When these are removed, the strength of the evidence for $\Lambda > 0$ is much diminished.
To set the scene, Fig 1 shows $B_{max}$ versus log (cz) for 117 Type Ia supernovae since 1956 from the Barbon et al (1998) catalogue (excluding those labelled ’\*’, which are discovery magnitudes only), together with published supernovae from the high z programmes, corrected for Galactic and internal extinction, but not for decay-time effects, and with predicted curves from an $\Omega_o$ = 1 model. At first sight there is not an enormous difference between the high z and low z supernovae, except that the latter seem to show a larger scatter. Fig 2 shows the same, excluding less reliable data (flagged ’:’ in the Barbon et al catalogue, or objects with pg magnitudes only; Leibundgut et al 1991), correcting for peculiar velocity effects (see section 3), using the Phillips et al (1999) internal extinction correction (see section 2) where available, and deleting two objects for which the dust correction is $>$ 1.4 mag. The scatter for the low z supernovae appears to have been reduced. Finally Fig 3 shows the supernovae actually used by Perlmutter et al (1999). Now the scatter in the low z supernovae is not much different from that of the high z supernovae, and a difference in absolute magnitude between low z and high z supernovae, relative to an $\Omega_o$ = 1 model, can be perceived. However comparison with Fig 2 suggests that the low z supernovae used may be an abnormally luminous subset of all supernovae. We will return to this point in section 4.
Excellent recent reviews of Type Ia supernovae, which fully discuss whether they can be thought of as a homogeneous population, have been given by Branch (1998), Hillebrandt and Niemeyer (2000) and Leibundgut (2000, 2001). In this paper I shall assume that they form a single population and that their absolute magnitude at maximum light depends, at most, on a small number of parameters. I do not, for example, consider the possibility of evolution, discussed by Drell et al (2000).
Absolute magnitudes are quoted for $H_o$ = 100 throughout.
Internal Extinction
===================
One of the most surprising claims of the high z supernova teams is that internal extinction in the high z supernovae is small or even negligible (Perlmutter et al 1999, Riess et al 2000). While some nearby Type Ia supernovae take place in elliptical galaxies, where internal extinction in the host galaxy may indeed be negligible, the majority take place in spiral galaxies, where internal extinction cannot be neglected. Moreover, as we look back towards z = 1, we know from the CFRS survey that there is a marked increase in the fraction of star-forming systems (Lilly et al 1996), and we would expect the average host galaxy extinction to be, if anything, higher than in low z supernovae.
On average, the extinction along the line of sight to a (non edge-on) spiral galaxy can be represented by de Vaucouleurs’s prescription (de Vaucouleurs et al 1976):
$A_{int} = 0.17 + \alpha(T)\, lg_{10} (a/b)$ for T $>$ -4; $A_{int} = 0$ for T = -4, -5,
where T is the de Vaucouleurs galaxy type, a, b are the major and minor diameters of the galaxy, and $\alpha$(T) = 0.2, 0.4, 0.6, 0.7, 0.8 for T = -3, -2, -1, 0, 1-8, respectively. I assume $lg_{10} (a/b)$ = 0.2 where this is not known.
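As a sketch, the prescription can be encoded in a short function (illustrative only, not part of the original analysis). With the default $lg_{10}(a/b)$ = 0.2, a T = 1-8 spiral gets $A_{int}$ = 0.17 + 0.8 $\times$ 0.2 = 0.33 mag.

```python
# Sketch of the de Vaucouleurs internal-extinction prescription quoted above.

def a_int(T, lg_ab=0.2):
    """Mean host-galaxy extinction (mag) for de Vaucouleurs type T.

    lg_ab is lg10(a/b); the default 0.2 is used when the axial ratio
    is unknown, as in the text.
    """
    if T in (-5, -4):                      # ellipticals: negligible extinction
        return 0.0
    # alpha(T) = 0.2, 0.4, 0.6, 0.7 for T = -3, -2, -1, 0; 0.8 for T = 1-8
    alpha = {-3: 0.2, -2: 0.4, -1: 0.6, 0: 0.7}.get(T, 0.8)
    return 0.17 + alpha * lg_ab
```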
The extinction to a particular supernova can be expected to show marked deviations from this average value, since the dust distributions in galactic discs are very patchy. Because of the cirrus-like distribution of interstellar dust clouds, lines-of-sight to some stars will have much lower extinctions than this average value. The presence of dense molecular clouds in the galaxy means that other lines-of-sight can have very much higher extinctions. Phillips et al (1999) have analyzed the extinction to 62 Type Ia supernovae, using both the colours at maximum light and the colours at very late time, 30-90 days after maximum. Their extinction corrections do resolve a number of cases of anomalously faint Type Ia supernovae (eg 1995E, 1996Z, 1996ai). The agreement of their internal extinction values with those given by the de Vaucouleurs prescription is not brilliant in detail (see Table 1), but as expected gives broadly the same median values (median $A_{int}$ for Phillips et al sample: 0.29, median de Vaucouleurs correction for same sample: 0.33). These median values are broadly consistent with the Monte Carlo simulations of Hatano et al (1998). Figure 4 shows the correlation between $A_{int}$ and the (B-V) colour at maximum light, corrected for Galactic extinction, with different symbols for Phillips et al (1999) estimates and those derived from the de Vaucouleurs prescription. The distribution is consistent with an intrinsic (unreddened) colour range of (B-V) = -0.1 to 0.1, combined with the usual $A_{int}$ = 4.14 (B-V) relation.
Riess et al (1998) have given estimates of the total extinction ($A_{int} + 4.14 E(B-V)_{Gal}$) for their sample of low z supernovae, derived both via the MLCS method and via a set of templates (their Table 10). We can compare the template estimates directly with those of Phillips et al (1999) for the same galaxies (Fig 5). The Riess et al values are lower on average by 0.22 magnitudes, which implies that the set of templates (and the training set for the MLCS method) have not been completely dereddened. The large scatter in this diagram perhaps indicates the difficulty of estimating the host galaxy extinction for supernovae. If this average underestimate of 0.22 mag. is added to the Riess et al estimates of $A_{int}$ for high-z supernovae, the average value of $A_{int}$ for these is almost identical to that for low z supernovae. Thus the claim that high z supernovae have lower extinction than low z supernovae seems to be based on a systematic underestimate of extinction in the high z galaxies.
Perlmutter et al (1999) estimate that host galaxy extinction is on average small both in local and high z supernovae, and neglect it in most of their solutions. Parodi et al (2000) neglect internal extinction completely, preferring to cut out the redder objects (B-V$>$0.1) from their samples. This will not change the relative absolute magnitudes between low and high z (or between supernovae with Cepheid calibration and the others) provided the two samples end up with the same mean extinction. This, however, might be difficult to guarantee. I have preferred to correct the low z supernovae as described above, and then correct the Perlmutter et al data by an average host galaxy extinction of 0.33 mag.
Ellis and Sullivan (2001) have carried out HST imaging and Keck spectroscopy on host galaxies for supernovae used by Perlmutter et al (1999) and find that the Hubble diagram for supernovae in later type galaxies shows more scatter than that for supernovae hosted by E$/$S0 galaxies, presumably due to the effects of host galaxy extinction.
Correction for peculiar velocity effects
========================================
For nearby supernovae it is important to correct for the peculiar velocity of the host galaxy. Hamuy et al (1995) tackle this by setting a minimum recession velocity of 2500 $km/s$ for their local sample. However, since peculiar velocities can easily be in excess of 500 $km/s$, this is probably not sufficient for an accurate result. Here I have used the model for the local (z $<$ 0.1) velocity field developed to analyze the CMB dipole, using data from the IRAS PSCz redshift survey (Rowan-Robinson et al 2000). I estimate the error in velocity estimates from this model to be 100 + 0.2 x V $km/s$ for V $<$ 15000 $km/s$, and 400 $km/s$ for V $>$ 15000 $km/s$. These errors are incorporated into the subsequent analysis (points are weighted by error$^{-2}$). The code for this peculiar velocity model is available at http://astro.ic.ac.uk/$\sim$mrr.
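The stated error model and weighting can be summarized in a couple of lines (a sketch of the quoted functional form only, not the PSCz velocity-field code itself):

```python
# Sketch of the velocity-error model and fit weights described above
# (values exactly as quoted in the text; names are illustrative).

def velocity_error(v_kms):
    """Error (km/s) assigned to the peculiar-velocity-corrected velocity."""
    return 100.0 + 0.2 * v_kms if v_kms < 15000.0 else 400.0

def fit_weight(v_kms):
    # points enter the subsequent fits with weight error^-2
    return velocity_error(v_kms) ** -2
```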
Tables 1 and 2 give data for the Type Ia supernovae used in the present study. The columns are as follows: (1) supernova name, (2) host galaxy recession velocity, (3) same, corrected for peculiar velocity, (4) blue magnitude at maximum light $B_{max}$, (5) $E(B-V)_{Gal}$, from Schlegel et al (1998), (6) $A_{int}$ from the de Vaucouleurs et al (1976) prescription, (7) $A_{int}$ from Phillips et al (1999), (8) absolute magnitude $M_B$ ($\Omega_o$ = 1 universe), (9) distance modulus, assuming $M_B$ = -19.47 (Gibson et al 2000), (10) $\Delta m_{15}$, (11) $(B-V)_o$. Sources of $B_{max}$, in order of preference: Hamuy et al (1996), Perlmutter et al (1999), Riess et al (1998), Barbon et al (1999). Sources of $\Delta m_{15}$, in order of preference: Phillips et al (1999), Saha et al (1999), Riess et al (1998), Parodi et al (2001). Sources of $(B-V)_o$: Hamuy et al (1996), Saha et al (1999), Parodi et al (2001), Leibundgut et al (1991).
Absolute magnitude-decay time relation
======================================
Once we have corrected for the effects of internal extinction, we can test whether the absolute magnitude of supernovae at maximum light depends on decay time, or some other parameter like the colour at maximum light $(B-V)_o$, as proposed by Tripp and Branch (1999), and Parodi et al (2000).
Phillips (1993) proposed that there is a strong dependence of absolute magnitude at maximum light on the decay time, characterized by the blue magnitude change during the 15 days after maximum, $\Delta m_{15}$. Specifically he found $d M_B/d \Delta m_{15}$ = 2.70. Riess et al (1995) showed that the dispersion in the SN Ia Hubble diagram was significantly reduced by applying this correction. Tammann and Sandage (1995) used supernovae in galaxies for which distance estimates were available to set a strong limit on the slope of the $M_B - \Delta m_{15}$ relation ($<$ 0.88). Hamuy et al (1996) used a new sample of well-studied supernovae to derive a B-band slope of 0.784 $\pm$ 0.18 for the $M_B - \Delta m_{15}$ relation, consistent with the Tammann and Sandage limit, and much lower than the original claim of Phillips (1993). Riess et al (1996) have discussed a related method of analyzing this correlation, the MLCS method.
However there is a further consideration here. It is really only valid to carry out this analysis on supernovae which have been detected prior to maximum light. The process used hitherto by all workers in this field, of extrapolating to maximum light assuming an $M_B - \Delta m_{15}$ (or equivalently MLCS) relation, assumes that all extrapolated objects adhere to the mean line of the relation, and thus underestimates the scatter in the relation. This is a process which artificially improves the apparent signal-to-noise of the final Hubble relation or $\Lambda > 0$ signal. Hamuy et al (1995) do make some allowance for this in assigning a larger uncertainty (and hence lower weight) to supernovae first observed after maximum.
Fig 6 shows a plot of the absolute magnitude at maximum light, $M_B$, corrected by 0.784 $\Delta m_{15}$, versus $(B-V)_o$, the colour at maximum light corrected for the effects of extinction, using the Phillips et al (1999) estimates of extinction. No clear correlation remains between these corrected quantities. Thus correction for the $M_B - \Delta m_{15}$ relation removes most of the correlation between $M_B$ and $(B-V)_o$.
Fig 7 shows the relation between $M_B$, corrected for extinction, and $\Delta m_{15}$, for objects detected at least 1 day prior to maximum light. The best fit linear relation is shown, which has slope 0.99 $\pm$ 0.38, consistent with values reported by Hamuy et al (1995), 0.85 $\pm$ 0.13, and by Hamuy et al (1996), 0.78 $\pm$ 0.18. However the significance of the relation is reduced, because of the smaller number of data points, and is now only 2.6 $\sigma$. The rms deviation from this mean relation is 0.44 mag, much larger than is generally claimed for this relation. For example, Riess et al (1996) claim that the residual sigma after correction for the $M_B - \Delta m_{15}$ relation is 0.12 mag.
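The fit in Fig 7 is an ordinary straight-line least-squares fit; a minimal sketch of the procedure is given below (illustrative only, with invented names, exercised here on synthetic inputs rather than the data of Tables 1 and 2):

```python
import numpy as np

def decay_relation(M_B, dm15):
    """Fit M_B = a + s * dm15; return slope, intercept and rms residual."""
    s, a = np.polyfit(dm15, M_B, 1)                 # degree-1 least squares
    resid = np.asarray(M_B) - (a + s * np.asarray(dm15))
    return s, a, resid.std(ddof=2)                  # 2 fitted parameters
```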
The spurious reduction in scatter generated by extrapolating supernovae first observed after maximum can be seen in Fig 8, which shows the same relation for supernovae first observed after maximum. The same effect can in fact be seen in the top panel of Fig 4 in Parodi et al (2000). The marked difference between Figs 7 and 8 suggests that supernovae first observed after maximum should be excluded from the analysis.
Perlmutter et al (1999) have applied a different version of the $M_B - \Delta m_{15}$ method, which they call the ’stretch’ method, to high z supernovae. This method appears to give a significantly lower correction to $M_B$ as a function of $\Delta m_{15}$ (Leibundgut 2000). Figure 9 shows the Perlmutter et al ’stretch’ correction to the absolute magnitude, as a function of $\Delta m_{15}$, for 18 low z supernovae (note that they omit the two supernovae with $\Delta m_{15}$ = 1.69 from their solution). The slope is 0.275 $\pm$0.04, only one third of the Hamuy et al (1996) value. The method has been further discussed by Efstathiou et al (1999) and Goldhaber et al (2001), but no explanation for the lower slope is given.
Testing for positive $\Lambda$
==============================
With the criteria established above (i) that a full and consistent correction must be made for extinction, (ii) that if the $M_B - \Delta m_{15}$ relation is used it should only be applied to supernovae detected before maximum light, we can now reexamine the Hubble diagram for supernovae. I consider several different samples:
\(1) all well-observed supernovae, with no correction for the $M_B - \Delta m_{15}$ relation. Supernovae with only pg magnitudes are excluded, as also are supernovae first observed after maximum. The de Vaucouleurs extinction correction is used for supernovae not studied by Phillips et al (1999). Supernovae with $A_{int} >$ 1.4 are excluded.
\(2) all well-observed supernovae for which in addition $\Delta m_{15}$ is known, with correction for the $M_B - \Delta m_{15}$ relation, using the 0.784 slope of Hamuy et al (1996).
\(3) as (2), but with internal extinction set to zero, as advocated by Perlmutter et al (1999).
\(4) as (2), but using the Perlmutter ’stretch’ correction (or 0.275 $\Delta m_{15}$ where stretch correction not available).
\(5) as (2) but using the quadratic $\Delta m_{15}$ correction of Phillips et al (1999).
It was not possible to independently check the effect of applying the MLCS method for correcting for decay-time correlations because the set of training vectors published by Riess et al (1996) is not the one actually being used in the high z supernova analysis (Riess et al 1998).
The mean absolute magnitudes for low z ( z $<$ 0.1) and high z supernovae, in an Einstein de Sitter model are tabulated for the different samples in Table 3. Without the correction for $\Delta m_{15}$, the significance of the difference in absolute magnitude for 53 low and 52 high redshift supernovae is only 2.8 $\sigma$, hardly sufficient to justify the current wide acceptance of positive $\Lambda$. The significance increases to 3.5 $\sigma$ if only the 26 supernovae observed since 1990 are used.
Including the $\Delta m_{15}$ correction, the significance increases to 4.0 $\sigma$ (3.9 $\sigma$ if we use the Perlmutter correction, 4.6 $\sigma$ if we use the quadratic $\Delta m_{15}$ correction of Phillips et al 1999, which is the form used by Riess et al 1998). Could this increase in significance be due to some bias in the local supernova sample for which $\Delta m_{15}$ has been measured? The mean absolute magnitude at maximum light, corrected for extinction but not for $\Delta m_{15}$, for all 53 ’good’ low z supernovae is -18.54. The mean for those for which $\Delta m_{15}$ has been measured is -18.64, 0.1 magnitudes brighter. In fact the difference in the mean between those for which $\Delta m_{15}$ has been measured and those for which it has not been measured is 0.23 mag., almost the size of the whole signal on which the claim for positive $\Lambda$ is based. However there is no difference between the mean absolute magnitude for the 10 local Calan Tololo supernovae used by the high-z supernova teams and that for the 16 other local supernovae studied since 1990, so there is no evidence that the Calan Tololo sample is biased. An alternative explanation of the fainter mean absolute magnitudes seen in the pre-1990 data is a systematic error in the photographic photometry used.
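The significances quoted here and in Table 3 are differences of sample means in units of the standard error of the difference; as a sketch (the exact statistic used is an assumption, and the unweighted form is shown):

```python
import numpy as np

def mean_diff_sigma(m_low, m_high):
    """Difference of mean M_B between two samples, in units of its
    standard error (unweighted two-sample form)."""
    a = np.asarray(m_low, float)
    b = np.asarray(m_high, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return diff / se
```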
There is another factor which may contribute to the apparent faintness of high z supernovae. Current models of the star formation history of the universe, which show a strong peak in the star formation rate at z = 1-2, imply that the mean dust optical depth in galaxies would be expected to increase with redshift out to z = 1.5-2 (Calzetti and Heckman 1999, Pei et al 1999). Line 6 of Table 3 shows the effect of adopting Calzetti and Heckman’s model 3 for the evolution of $A_{int}(z)$, a model which agrees well with models for infrared and submillimetre source counts and backgrounds (eg Rowan-Robinson 2001a) and direct evidence from observed estimates of the star formation history (Rowan-Robinson 2001b). This correction, which increases the average $A_{int}$ for the high z supernovae by 0.15 mag., is sufficient to reduce the significance of the magnitude difference between high and low z supernovae, in an Einstein de Sitter model, to 3.0 and 1.5 $\sigma$ for cases with and without correction for $\Delta m_{15}$.
Hubble diagram at high redshift
===============================
Rowan-Robinson (2001c) has reviewed distance estimates and the Hubble constant. In addition to Type Ia supernovae, S-Z clusters and gravitational lens time-delay methods also give distance estimates at z $>$ 0.1. Figure 10 shows a compilation of all these high z estimates. I have also included the HDF supernova (1997ff) at redshift 1.7 proposed by Riess et al (2001), although Rowan-Robinson (2001b) finds a photometric redshift z = 1.4 for the parent galaxy. The assumed B band magnitude at maximum light is 25.5 (Fig 7 of Riess et al) and the mean absolute magnitude for Type Ia supernovae is assumed to be -19.47 (Gibson et al 2000). The model with $\Omega_o$ = 1 is a good overall fit to these data and the HDF supernova lies right on the $\Omega_o$ = 1 mean line. A least squares fit to these data, with an assumed $H_o$ of 63 $km/s/Mpc$ (Rowan-Robinson 2001c) and the spatial curvature parameter k = 0, yields $\Omega_o = 0.81 \pm 0.12$, where the error in the distance modulus has been taken to be 0.35 mag. for all points.
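The least squares fit can be reproduced schematically: compute the distance modulus in a flat (k = 0) model with matter density $\Omega_o$ and $\lambda_o = 1 - \Omega_o$ by numerical integration, then scan a grid in $\Omega_o$. This is an illustrative reconstruction, not the original fitting code; only $H_o$ = 63 and the 0.35 mag error are taken from the text, and all names are invented.

```python
import numpy as np

C_KMS, H0 = 299792.458, 63.0            # speed of light (km/s); H0 from text

def mu_model(z, omega_m):
    """Distance modulus in a flat (k = 0) matter + Lambda model."""
    zs = np.linspace(0.0, z, 2001)
    f = 1.0 / np.sqrt(omega_m * (1 + zs) ** 3 + (1.0 - omega_m))
    # comoving distance D_C = (c/H0) * int_0^z dz'/E(z'), trapezium rule
    d_c = (C_KMS / H0) * ((f[:-1] + f[1:]) / 2.0 * np.diff(zs)).sum()
    return 5.0 * np.log10((1 + z) * d_c) + 25.0   # d_L = (1+z) d_C, in Mpc

def best_omega(zs, mus, sigma=0.35):
    """Chi-squared grid scan for Omega_o with a fixed 0.35 mag error."""
    grid = np.linspace(0.05, 1.0, 96)             # step 0.01
    chi2 = [sum(((mu_model(z, om) - m) / sigma) ** 2
                for z, m in zip(zs, mus)) for om in grid]
    return grid[int(np.argmin(chi2))]
```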
---------------------------------------------------------------------------------------- ----- ------------ ----------- ---------------- ------------- -------------- ------- --------- ----------------- ----------- --
name $V$ $V_{corr}$ $B_{max}$ $E(B-V)_{Gal}$ $A_{int,V}$ $A_{int,Ph}$ $M_B$ $\mu_o$ $\Delta m_{15}$ $(B-V)_o$
**[low z]{}& & & & & & & & & & &\
1960R & 751 & 802 & 11.60 & 0.0255 & 0.25 & - & -18.28 & 30.71 & - & 0.17\
1963P & 1414 & 1088 & 14.00 & 0.0445 & 0.39 & - & -16.76 & 32.90 & - & -\
1965I & 1172 & 1511 & 12.41 & 0.0198 & 0.36 & - & -18.93 & 31.44 & - & -0.17\
1970J & 3791 & 3360 & 15.00 & 0.0760 & 0.00 & - & -17.95 & 34.16 & 1.30 & 0.10\
1971I & 503 & 604 & 11.60 & 0.0120 & 0.34 & - & -17.70 & 30.68 & - & 0.17\
1971L & 1659 & 1996 & 13.00 & 0.1505 & 0.39 & - & -19.52 & 31.46 & - & 0.17\
1972E & 403 & 261 & 8.49 & 0.0483 & 0.42 & 0.04 & -18.83 & 27.34 & 0.87 & -0.03\
1972J & 3213 & 2690 & 14.76 & 0.0468 & 0.22 & - & -17.81 & 33.82 & - & 0.10\
1974G & 720 & 802 & 12.28 & 0.0124 & 0.35 & - & -17.64 & 31.35 & 1.11 & 0.21\
1974J & 7468 & 7390 & 15.60 & 0.0692 & 0.23 & - & -19.27 & 34.55 & - & -0.28\
1975G & 1900 & 2246 & 14.44 & 0.0092 & 0.29 & - & -17.65 & 33.58 & - & -\
1975N & 1867 & 1620 & 14.00 & 0.0302 & 0.28 & - & -17.46 & 33.06 & - & 0.17\
1976J & 4558 & 4164 & 14.28 & 0.0263 & 0.56 & - & -19.49 & 33.08 & 0.90 & 0.02\
1978E & 4856 & 4423 & 15.40 & 0.1612 & 0.33 & - & -18.83 & 33.87 & - & -0.19\
1979B & 954 & 1699 & 12.70 & 0.0099 & 0.19 & - & -18.68 & 31.94 & - & 0.27\
1980N & 1806 & 1442 & 12.50 & 0.0207 & 0.21 & 0.20 & -18.58 & 31.67 & 1.28 & 0.05\
1981B & 1804 & 1266 & 11.74 & 0.0207 & 0.43 & 0.45 & -19.31 & 30.69 & 1.10 & -0.02\
1981D & 1806 & 1442 & 12.59 & 0.0207 & 0.21 & - & -18.50 & 31.76 & 1.28 & 0.19\
1983G & 1172 & 1511 & 12.97 & 0.0198 & 0.36 & - & -18.37 & 32.00 & - & 0.01\
1983U & 1156 & 1483 & 13.40 & 0.0216 & 0.29 & - & -17.84 & 32.49 & - & -\
1983W & 1937 & 2171 & 13.30 & 0.0104 & 0.52 & - & -18.95 & 32.21 & - & -\
1986A & 3039 & 3385 & 14.40 & 0.0409 & 0.20 & - & -18.62 & 33.50 & - & -\
1987D & 2227 & 2551 & 13.70 & 0.0217 & 0.33 & - & -18.76 & 32.75 & - & -\
1988F & 5274 & 5090 & 14.80 & 0.0223 & 0.25 & - & -19.08 & 33.93 & - & -\
1989A & 2514 & 2914 & 14.10 & 0.0221 & 0.17 & - & -18.49 & 33.31 & - & -\
1989B & 726 & 617 & 12.34 & 0.0304 & 0.41 & 1.36 & -18.10 & 31.27 & 1.31 & -\
1989M & 1518 & 1266 & 12.56 & 0.0376 & 0.24 & - & -18.35 & 31.63 & - & -\
1990N & 998 & 1266 & 12.76 & 0.0243 & 0.28 & 0.37 & -18.23 & 31.85 & 1.07 & 0.04\
1990Y & 10800 & 10804 & 17.70 & 0.0095 & 0.00 & 0.94 & -18.47 & 37.13 & 1.13 & 0.33\
1990af & 15180 & 14807 & 17.87 & 0.0336 & 0.25 & 0.16 & -18.31 & 36.95 & 1.56 & 0.05\
1991M & 2169 & 2562 & 14.55 & 0.0461 & 0.33 & - & -17.89 & 33.63 & - & -\
1992A & 1845 & 1484 & 12.56 & 0.0138 & 0.25 & 0.00 & -18.36 & 31.72 & 1.47 & 0.02\
1992G & 1580 & 2193 & 13.63 & 0.0121 & 0.38 & - & -18.41 & 32.77 & - & -\
1992P & 7616 & 8159 & 16.08 & 0.0210 & 0.33 & 0.29 & -18.87 & 35.13 & 0.87 & -0.03\
1992ag & 7544 & 7724 & 16.41 & 0.0856 & 0.33 & 0.41 & -18.81 & 35.20 & 1.19 & 0.08\
1992al & 4377 & 4711 & 14.60 & 0.0315 & 0.33 & 0.04 & -18.94 & 33.61 & 1.11 & -0.05\
1992bc & 5700 & 5602 & 15.16 & 0.0196 & 0.33 & 0.00 & -18.67 & 34.22 & 0.87 & -0.08\
1992bh & 13500 & 13384 & 17.70 & 0.0217 & 0.33 & 0.49 & -18.54 & 36.75 & 1.05 & 0.08\
1992bo & 5575 & 5245 & 15.86 & 0.0270 & 0.25 & 0.00 & -17.86 & 34.97 & 1.69 & 0.01\
1992bp & 23790 & 23791 & 18.41 & 0.0717 & 0.21 & 0.00 & -18.81 & 37.37 & 1.32 & -0.05\
1993L & 1925 & 1784 & 13.20 & 0.0179 & 0.66 & - & -18.62 & 32.11 & 1.18 & -\
1993O & 15300 & 14750 & 17.67 & 0.0610 & 0.21 & 0.00 & -18.45 & 36.68 & 1.22 & -0.09\
1993ag & 14700 & 15417 & 17.72 & 0.1031 & 0.21 & 0.29 & -18.96 & 36.55 & 1.32 & 0.03\
1994D & 450 & 1266 & 11.84 & 0.0217 & 0.37 & 0.00 & -18.76 & 30.85 & 1.32 & -0.04\
1994S & 4550 & 4523 & 14.80 & 0.0170 & 0.33 & 0.00 & -18.56 & 33.87 & 1.10 & 0.01\
1994ae & 1282 & 1483 & 13.20 & 0.0284 & 0.34 & 0.49 & -18.27 & 32.21 & 0.86 & 0.10\
1995D & 1967 & 2442 & 13.40 & 0.0533 & 0.25 & 0.16 & -18.92 & 32.40 & 0.99 & -0.02\
1995al & 1541 & 2011 & 13.36 & 0.0207 & 0.34 & 0.61 & -18.86 & 32.40 & 0.83 & -\
1996X & 2032 & 1811 & 13.22 & 0.0685 & 0.00 & 0.04 & -18.40 & 32.41 & 1.25 & -0.03\
1996bl & 10800 & 10063 & 17.05 & 0.1173 & 0.33 & 0.33 & -18.80 & 35.70 & 1.17 & 0.04\
1996bo & 5241 & 4617 & 16.17 & 0.0724 & 0.25 & 1.15 & -18.61 & 35.09 & 1.25 & -\
1997ej & 6686 & 6774 & 15.85 & 0.0170 & 0.25 & - & -18.64 & 35.00 & - & -\
1998bu & 943 & 1483 & 12.22 & 0.0303 & 0.28 & 1.35 & -20.11 & 31.28 & 1.01 & -\
**
---------------------------------------------------------------------------------------- ----- ------------ ----------- ---------------- ------------- -------------- ------- --------- ----------------- ----------- --
----------------------------------------------------------------------------- ----- ------------ ----------- ---------------- -------------- -------------- ------- --------- ----------------- ----------- --
name $V$ $V_{corr}$ $B_{max}$ $E(B-V)_{Gal}$ $A_{int,dV}$ $A_{int,Ph}$ $M_B$ $\mu_o$ $\Delta m_{15}$ $(B-V)_o$
**[Riess 98]{}& & & & & & & & & & &\
1996E & 0 &128914 & 22.81 & 0.0000 & - & 0.10 & -18.26 & 41.95 & 1.18 & -\
1996H & 0 &185876 & 23.23 & 0.0000 & - & 0.00 & -18.59 & 42.37 & 0.87 & -\
1996I & 0 &170886 & 23.35 & 0.0000 & - & 0.00 & -18.27 & 42.49 & 1.39 & -\
1996J & 0 & 89940 & 22.23 & 0.0000 & - & 0.64 & -18.55 & 41.37 & 1.27 & -\
1996K & 0 &113924 & 22.64 & 0.0000 & - & 0.00 & -18.04 & 41.78 & 1.31 & -\
1996U & 0 &128914 & 22.78 & 0.0000 & - & 0.00 & -18.19 & 41.92 & 1.18 & -\
1997ce & 0 &131912 & 22.85 & 0.0000 & - & 0.00 & -18.17 & 41.99 & 1.30 & -\
1997cj & 0 &149900 & 23.19 & 0.0000 & - & 0.09 & -18.22 & 42.33 & 1.16 & -\
1997ck & 0 &290806 & 24.78 & 0.0000 & - & 0.10 & -18.20 & 43.92 & 1.00 & -\
1995K & 0 &143904 & 22.91 & 0.0000 & - & 0.00 & -18.31 & 42.05 & 1.16 & -\
**[Perlmutter 99]{}& & & & & & & & & & &\
1992bi & 0 &137308 & 22.81 & 0.0000 & 0.33 & - & -18.40 & 41.95 & - & -\
1994F & 0 &106129 & 22.55 & 0.0000 & 0.33 & - & -18.07 & 41.69 & - & -\
1994G & 0 &127415 & 22.17 & 0.0000 & 0.33 & - & -18.87 & 41.31 & - & -\
1994H & 0 &112125 & 21.79 & 0.0000 & 0.33 & - & -18.95 & 40.93 & - & -\
1994al & 0 &125916 & 22.63 & 0.0000 & 0.33 & - & -18.38 & 41.77 & - & -\
1994am & 0 &111526 & 22.32 & 0.0000 & 0.33 & - & -18.41 & 41.46 & - & -\
1994an & 0 &113324 & 22.55 & 0.0000 & 0.33 & - & -18.22 & 41.69 & - & -\
1995aq & 0 &135809 & 23.24 & 0.0000 & 0.33 & - & -17.95 & 42.38 & - & -\
1995ar & 0 &139407 & 23.36 & 0.0000 & 0.33 & - & -17.89 & 42.50 & - & -\
1995as & 0 &149300 & 23.66 & 0.0000 & 0.33 & - & -17.75 & 42.80 & - & -\
1995at & 0 &196369 & 23.21 & 0.0000 & 0.33 & - & -18.84 & 42.35 & - & -\
1995aw & 0 &119920 & 22.27 & 0.0000 & 0.33 & - & -18.63 & 41.41 & - & -\
1995ax & 0 &184377 & 23.10 & 0.0000 & 0.33 & - & -18.80 & 42.24 & - & -\
1995ay & 0 &143904 & 23.00 & 0.0000 & 0.33 & - & -18.32 & 42.14 & - & -\
1995az & 0 &134910 & 22.53 & 0.0000 & 0.33 & - & -18.64 & 41.67 & - & -\
1995ba & 0 &116322 & 22.66 & 0.0000 & 0.33 & - & -18.17 & 41.80 & - & -\
1996cf & 0 &170886 & 23.25 & 0.0000 & 0.33 & - & -18.48 & 42.39 & - & -\
1996cg & 0 &146902 & 23.06 & 0.0000 & 0.33 & - & -18.31 & 42.20 & - & -\
1996ci & 0 &148401 & 22.82 & 0.0000 & 0.33 & - & -18.58 & 41.96 & - & -\
1996ck & 0 &196669 & 23.62 & 0.0000 & 0.33 & - & -18.43 & 42.76 & - & -\
1996cl & 0 &248234 & 24.58 & 0.0000 & 0.33 & - & -18.03 & 43.72 & - & -\
1996cm & 0 &134910 & 23.22 & 0.0000 & 0.33 & - & -17.95 & 42.36 & - & -\
1996cn & 0 &128914 & 23.19 & 0.0000 & 0.33 & - & -17.88 & 42.33 & - & -\
1997F & 0 &173884 & 23.45 & 0.0000 & 0.33 & - & -18.32 & 42.59 & - & -\
1997G & 0 &228747 & 24.49 & 0.0000 & 0.33 & - & -17.92 & 43.63 & - & -\
1997H & 0 &157695 & 23.21 & 0.0000 & 0.33 & - & -18.33 & 42.35 & - & -\
1997I & 0 & 51566 & 20.20 & 0.0000 & 0.33 & - & -18.78 & 39.34 & - & -\
1997J & 0 &185576 & 23.80 & 0.0000 & 0.33 & - & -18.12 & 42.94 & - & -\
1997K & 0 &177482 & 24.33 & 0.0000 & 0.33 & - & -17.48 & 43.47 & - & -\
1997L & 0 &164890 & 23.53 & 0.0000 & 0.33 & - & -18.11 & 42.67 & - & -\
1997N & 0 & 53964 & 20.42 & 0.0000 & 0.33 & - & -18.66 & 39.56 & - & -\
1997O & 0 &112125 & 23.50 & 0.0000 & 0.33 & - & -17.24 & 42.64 & - & -\
1997P & 0 &141506 & 23.14 & 0.0000 & 0.33 & - & -18.14 & 42.28 & - & -\
1997Q & 0 &128914 & 22.60 & 0.0000 & 0.33 & - & -18.47 & 41.74 & - & -\
1997R & 0 &196969 & 23.83 & 0.0000 & 0.33 & - & -18.23 & 42.97 & - & -\
1997S & 0 &183478 & 23.59 & 0.0000 & 0.33 & - & -18.30 & 42.73 & - & -\
1997ac & 0 & 95936 & 21.83 & 0.0000 & 0.33 & - & -18.56 & 40.97 & - & -\
1997af & 0 &173584 & 23.54 & 0.0000 & 0.33 & - & -18.22 & 42.68 & - & -\
1997ai & 0 &134910 & 22.81 & 0.0000 & 0.33 & - & -18.36 & 41.95 & - & -\
1997aj & 0 &174184 & 23.12 & 0.0000 & 0.33 & - & -18.65 & 42.26 & - & -\
1997am & 0 &124717 & 22.52 & 0.0000 & 0.33 & - & -18.47 & 41.66 & - & -\
1997ap & 0 &248834 & 24.30 & 0.0000 & 0.33 & - & -18.31 & 43.44 & - & -\
----------------------------------------------------------------------------- ----- ------------ ----------- ---------------- -------------- -------------- ------- --------- ----------------- ----------- --
------------------------------------ ------- -------- ------------- ------------- --------- ---------- --------- ---------- -------------- ----------------- ------------------------
sample no.(low z) no.(high z) $<A_{int}>$(low z) $<A_{int}>$(high z) $<M_B>$(low z) $\sigma$(low z) $<M_B>$(high z) $\sigma$(high z) $\Delta M_B$ $\sigma_{diff}$ $\Delta/\sigma_{diff}$
\(1) all good sn, 53 52 0.327 0.329 -18.54 0.55 -18.30 0.33 0.25 0.089 2.8
no $\Delta M_{15}$ corrn
\(2) with $\Delta M_{15}$ corrn 31 52 0.353 0.329 -18.69 0.42 -18.30 0.38 0.39 0.093 4.25
\(3) with $\Delta M_{15}$ corrn 31 52 0.0 0.0 -18.35 0.48 -17.975 0.38 0.375 0.101 3.7
but no dust correctn
\(4) with Perlmutter corrn 31 52 0.353 0.329 -18.655 0.44 -18.30 0.33 0.36 0.092 3.9
\(5) with quadratic 31 52 0.353 0.329 -18.72 0.41 -18.31 0.38 0.41 0.091 4.6
$\Delta M_{15}$ corrn
\(6) with $\Delta M_{15}$ corrn 30 52 0.283 0.446 -18.69 0.42 -18.42 0.37 0.27 0.091 3.0
and dust evoln
\(7) with no $\Delta M_{15}$ corrn 53 52 0.331 0.446 -18.54 0.55 -18.41 0.33 0.13 0.089 1.5
and dust evoln
------------------------------------ ------- -------- ------------- ------------- --------- ---------- --------- ---------- -------------- ----------------- ------------------------
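The significance column in the table above follows from combining each sample's scatter in quadrature as a standard error of the mean. A minimal sketch, which reproduces row (1) to within rounding of the quoted means:

```python
import math

def significance(n_lo, sd_lo, mean_lo, n_hi, sd_hi, mean_hi):
    """Difference between the mean absolute magnitudes of the high-z and
    low-z samples, its standard error, and the significance in sigma.
    Assumes the quoted sigmas are sample standard deviations, so the
    standard error of each mean is sigma / sqrt(n)."""
    delta = mean_hi - mean_lo                          # > 0: high-z fainter
    sigma_diff = math.sqrt(sd_lo**2 / n_lo + sd_hi**2 / n_hi)
    return delta, sigma_diff, delta / sigma_diff

# Row (1): 53 low-z and 52 high-z supernovae, no Delta m_15 correction.
delta, sigma_diff, nsig = significance(53, 0.55, -18.54, 52, 0.33, -18.30)
```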
Discussion and conclusions
==========================
\(1) I have reanalyzed the evidence that high-z supernovae support a universe with positive $\Lambda$.
\(2) Both high-z supernova teams appear to have underestimated host galaxy extinction.
\(3) The evidence for an $M_B - \Delta m_{15}$ relation is weaker than previously stated (only 2.6 $\sigma$) if the analysis is restricted to supernovae observed before maximum. The rms deviation about the mean relation is significantly larger than previously claimed.
\(4) After consistent corrections for extinction are applied, the significance of the difference in absolute magnitude between high- and low-z supernovae, in an Einstein de Sitter ($\Omega_o$ = 1) universe, is 2.8-4.6 $\sigma$, depending on whether (and how) the $M_B - \Delta M_{15}$ correction is applied, so such a model cannot be rejected conclusively by the present data.
\(5) The Hubble diagram based on all high redshift estimates supports an Einstein de Sitter universe. The HDF-N supernova favours such a universe also, contrary to the published claims of Riess et al (2001).
\(6) The community may have been too hasty in its acceptance of a positive $\Lambda$ universe, for which no physical motivation exists, and needs to reconsider the astrophysical implications of the more natural Einstein de Sitter, $\Omega_o$ =1, model. For the supernova method, the need is to continue study of low z supernovae to improve understanding of extinction and of the absolute magnitude decay-time relation, and to consider shifting towards infrared wavelengths, as advocated by Meikle (2000), in order to reduce the effects of extinction.
Of course, the arguments presented here do not prove that $\Lambda$ = 0. The combination of the evidence from CMB fluctuations for a spatially flat universe with a variety of large-scale structure arguments for $\Omega_o$ = 0.3-0.5 may still make positive $\Lambda$ models worth pursuing. However, it would seem premature to abandon consideration of other alternatives.
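For reference, the Einstein de Sitter distance modulus against which the supernova magnitudes are judged has a closed form. The sketch below assumes $H_0$ = 58 km/s/Mpc purely for illustration; it is not necessarily the value adopted in this paper.

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def mu_eds(z, h0=58.0):
    """Distance modulus mu = 5 log10(d_L / 10 pc) in an Einstein de Sitter
    (Omega_o = 1, Lambda = 0) universe, with luminosity distance
    d_L = (2c/H0) (1 + z) (1 - 1/sqrt(1 + z)).  h0 (km/s/Mpc) is an
    illustrative assumption, not the paper's adopted value."""
    d_l_mpc = (2.0 * C_KMS / h0) * (1.0 + z) * (1.0 - 1.0 / math.sqrt(1.0 + z))
    return 5.0 * math.log10(d_l_mpc * 1.0e5)  # 1 Mpc / 10 pc = 1e5
```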
Acknowledgements
================
I thank Peter Meikle, Adam Riess, Steve Warren, Martin Haehnelt, Bruno Leibundgut, David Branch, Eddie Baron, Richard Ellis and an anonymous referee for helpful comments or discussions, and Mark Phillips and Bernhard Parodi for supplying machine-readable versions of their data.
[99]{}
Barbon R., Buondi V., Cappellaro E., Turatto M., 1999, AAS 139, 531
Branch D., 1998, ARAA 36, 17
Branch D., Romanishin W., and Baron E., 1996, ApJ 465, 73
Calzetti D. and Heckman T.M., 1999, ApJ 519, 27
Drell P.S., Loredo T.J., Wasserman I., 2000, ApJ 530, 593
Efstathiou G., Bridle S.L., Lasenby A.N., Hobson M.P., Ellis R.S., 1999, MN 303, L47
Ellis R. and Sullivan M., 2001, in IAU Symposium 201, New Cosmological Data and the Values of the Fundamental Parameters, August 2000 (eds. A. Lasenby and A. Wilkinson), astro-ph/0011369
Filippenko A.V. and Riess A.G., 2000, in Second Tropical Workshop on Particle Physics and Cosmology: Neutrino and Flavor Physics, ed. J. F. Nieves (New York: American Institute of Physics), astro-ph/0008057
Garnavich P.M. et al, 1998, ApJ 493, L53
Gibson B.K. et al, 2000, ApJ 529, 723
Goldhaber G. et al, 2001, ApJ 558, 359
Hamuy M., Phillips M., Maza J., Suntzeff N.B., Schommer R.A., Aviles R., 1995, AJ 109, 1
Hamuy M., Phillips M., Schommer R.A., Suntzeff N.B., Maza J., Aviles R., 1996, AJ 112, 2391
Hamuy M., Phillips M., Schommer R.A., Suntzeff N.B., Maza J., Aviles R., 1996, AJ 112, 2398
Hatano K., Branch D., Deaton J., 1998, ApJ 502, 177
Hillebrandt W. and Niemeyer J.C., 2000, ARAA 38 (in press), astro-ph/0006305
Leibundgut B., Tammann G.A., Cadonau R., Cerrito D.,1991, AAS 89, 537
Leibundgut B., 2000, Astronomy and Astrophysics Review, astro-ph/0003326
Leibundgut B., 2001, Ann.Rev.Astron.Astroph. 39, 67
Lilly S.J., Le Fevre O., Hammer F., Crampton D., 1996, ApJ 460, L1
Meikle W.P.S., 2000, MN 314, 782
Parodi B.R., Saha A., Sandage A., Tammann G.A., 2000, ApJ 540, 634
Pei Y.C., Fall M. and Hauser M.G., 1999, ApJ 522, 604
Perlmutter S. et al, 1999, ApJ 517, 565
Phillips M.M., 1993, ApJ 413, L105
Phillips M.M., Lira P., Suntzeff N.B., Schommer R.A., Hamuy M., Maza J., 1999, AJ 118, 1766
Riess A.G., Press W.H., Kirshner R.P., 1995, ApJ 438, L17
Riess A.G., Press W.H., Kirshner R.P., 1996, ApJ 473, 588
Riess A.G. et al, 1998, AJ 116, 1009
Riess A.G. et al, 1999, AJ 117, 707
Riess A.G. et al, 2000, ApJ 536, 62
Riess A.G. et al, 2001, ApJ 560, 49
Riess A.G., 2000, PASP 112, 1284
Rowan-Robinson M., et al, 2000, MN 314, 375
Rowan-Robinson M., 2001a, ApJ (in press)
Rowan-Robinson M., 2001b, ApJ (in press)
Rowan-Robinson M., 2001c, in 'IDM2000: 3rd International Workshop on Identification of Dark Matter', ed N. Spooner (World Scientific), astro-ph/0012026
Saha A., Sandage A., Tammann G.A., Labhardt L., Macchetto F.D., Panagia N., 1999, ApJ, 522, 802
Schlegel D.J., Finkbeiner D.P., Davis M., 1998, ApJ 500, 525
Schmidt B.P. et al, 1998, ApJ 507, 46
Suntzeff N.B., et al, 1999, AJ 117, 1175
Tammann G.A. and Sandage A., 1995, ApJ 452, 16
Tripp R. and Branch D., 1999, ApJ 525, 209
Turner M.S. and Riess A.G., 2001, ApJL (in press), astro-ph/0106051
de Vaucouleurs G., de Vaucouleurs A., Corwin H.G. Jr, 1976, Second Reference Catalogue of Bright Galaxies, University of Texas Press
---
abstract: 'We prove that a uniform algebra is weakly sequentially complete if and only if it is finite-dimensional.'
address:
- 'Department of Mathematics and Statistics, Bowling Green State University, Bowling Green, OH 43403'
- 'School of Mathematical Sciences, University of Nottingham, University Park, Nottingham NG7 2RD, UK '
author:
- 'Alexander J. Izzo'
- 'J. F. Feinstein'
title: |
Weak Sequential Completeness of\
Uniform Algebras
---
[^1]
The Result
==========
A uniform algebra is reflexive if and only if it is finite-dimensional. However, the only mention of this that we have found in the literature is at the very end of the paper [@Ellis] where the result is obtained as a consequence of the general theory developed in that paper concerning a representation due to Asimow [@Asimow] of a uniform algebra as a space of affine functions. The purpose of the present paper is to give a simple direct proof of the stronger fact that a uniform algebra is weakly sequentially complete if and only if it is finite-dimensional. Since it is trivial that every finite-dimensional Banach space is weakly sequentially complete, the substance of our result is the following.
\[main\] No infinite-dimensional uniform algebra is weakly sequentially complete.
As an immediate consequence we have the following.
For a uniform algebra $A$, the following are equivalent.
- $A$ is weakly sequentially complete.
- $A$ is reflexive.
- $A$ is finite-dimensional.
One can also consider what are sometimes called [*nonunital uniform algebras*]{}. These algebras are roughly the analogues on noncompact locally compact Hausdorff spaces of the uniform algebras on compact Hausdorff spaces. (The precise definition is given in the next section.) Every nonunital uniform algebra is, in fact, a maximal ideal in a uniform algebra, and hence is, in particular, a codimension 1 subspace of a uniform algebra. Since it is easily proven that the failure of weak sequential completeness is inherited by finite-codimensional subspaces, it follows at once that the above results hold also for nonunital uniform algebras.
It should be noted that the above results do *not* extend to general semisimple commutative Banach algebras. For instance, the Banach space $\ell^p$ of $p$th-power summable sequences of complex numbers is a Banach algebra under coordinatewise multiplication and is of course well known to be reflexive for $1<p<\infty$; for $p=1$ the space is nonreflexive but is weakly sequentially complete [@Wojtaszczyk p. 140]. Also, for $G$ a locally compact abelian group, the Banach space $L^1(G)$ is a Banach algebra with convolution as multiplication and is nonreflexive but is weakly sequentially complete [@Wojtaszczyk p. 140]. All of these Banach algebras are nonunital, with the exception of the algebras $L^1(G)$ for $G$ a discrete group. However, adjoining an identity in the usual way where necessary, one obtains from them unital Banach algebras with the same properties with regard to reflexivity and weak sequential completeness.
In the next section, which can be skipped by those well versed in basic uniform algebra and Banach space concepts, we recall some definitions. The proof of Theorem \[main\] is then presented in the concluding section.
Definitions
===========
\[prelim\]
For $X$ a compact Hausdorff space, we denote by $C(X)$ the algebra of all continuous complex-valued functions on $X$ with the supremum norm $ \|f\|_{X} = \sup\{ |f(x)| : x \in X \}$. A *uniform algebra* on $X$ is a closed subalgebra of $C(X)$ that contains the constant functions and separates the points of $X$. We say that the uniform algebra $A$ on $X$ is *natural* if $X$ is the maximal ideal space of $A$, that is, if the only (non-zero) multiplicative linear functionals on $A$ are the point evaluations at points of $X$. For $Y$ a noncompact locally compact Hausdorff space, we denote by $C_0(Y)$ the algebra of continuous complex-valued functions on $Y$ that vanish at infinity, again with the supremum norm. By a *nonunital uniform algebra* $B$ on $Y$ we mean a closed subalgebra of $C_0(Y)$ that strongly separates points in the sense that for every pair of distinct points $x$ and $y$ in $Y$ there is a function $f$ in $B$ such that $f(x)\neq f(y)$ and $f(x)\neq 0$. If $B$ is a nonunital uniform algebra on $Y$, then the linear span of $B$ and the constant functions on $Y$ forms a unital Banach algebra that can be identified with a uniform algebra $A$ on the one-point compactification of $Y$, and under this identification $B$ is the maximal ideal of $A$ consisting of the functions in $A$ that vanish at infinity.
Let $A$ be a uniform algebra on a compact Hausdorff space $X$. A closed subset $E$ of $X$ is a *peak set* for $A$ if there is a function $f\in A$ such that $f(x)=1$ for all $x\in E$ and $|f(y)|< 1$ for all $y\in X\setminus E$. Such a function $f$ is said to *peak on $E$*. A *generalized peak set* is an intersection of peak sets. A point $p$ in $X$ is a *peak point* if the singleton set $\{p\}$ is a peak set, and $p$ is a *generalized peak point* if $\{p\}$ is a generalized peak set. For $\Lambda$ a bounded linear functional on $A$, we say that a regular Borel measure $\mu$ on $X$ represents $\Lambda$ if $\Lambda(f)=\int f \, d\mu$ for every $f\in A$.
A Banach space $A$ is *reflexive* if the canonical embedding of $A$ in its double dual $A^{**}$ is a bijection. The Banach space $A$ is *weakly sequentially complete* if every weakly Cauchy sequence in $A$ is weakly convergent in $A$. More explicitly, the condition is this: for each sequence $(x_n)$ in $A$ such that $(\Lambda x_n)$ converges for every $\Lambda$ in the dual space $A^*$, there exists an element $x$ in $A$ such that $\Lambda x_n \rightarrow \Lambda x$ for every $\Lambda$ in $A^*$. It is trivial that every reflexive space is weakly sequentially complete.
The Proof
=========
Our proof of Theorem \[main\] hinges on the following lemma.
\[peak\] Every infinite-dimensional natural uniform algebra has a peak set that is not open.
Let $A$ be a natural uniform algebra on a compact Hausdorff space $X$.
Consider first the case when $A$ is a proper subalgebra of $C(X)$. In that case, by the Bishop antisymmetric decomposition [@Browder Theorem 2.7.5] there is a maximal set of antisymmetry $E$ for $A$ such that the restriction $A|E$ is a proper subset of $C(E)$. Then of course $E$ has more than one point, and because $A$ is natural, $E$ is connected [@Stout p. 119]. Since every maximal set of antisymmetry is a generalized peak set, and every generalized peak set contains a generalized peak point (see the proof of [@Browder Corollary 2.4.6]), $E$ contains a generalized peak point $p$. Choose a peak set $P$ for $A$ such that $p\in P$ but $P\nsupseteq E$. Then $P\cap E$ is a proper nonempty closed subset of the connected set $E$ and hence is not open in $E$. Thus $P$ is not open in $X$.
In case $A=C(X)$, it follows from Urysohn’s lemma that the peak sets of $A$ are exactly the closed $G_\delta$-sets in $X$ (see for instance [@Munkres Section 33, exercise 4]). Thus the proof is completed by invoking the following lemma.
\[G\] Every infinite compact Hausdorff space $X$ contains a closed $G_\delta$-set that is not open.
Let $\{x_n\}$ be a countably infinite subset of $X$. For each $n=1, 2, 3, \ldots$ choose by the Urysohn lemma a continuous function $f_n:X\rightarrow [0,1]$ such that $f_n(x_k)=0$ for $k<n$ and $f_n(x_n)=1$. Let $F:X\rightarrow [0,1]^\omega$ be given by $F(x)=\bigl(f_n(x)\bigr)_{n=1}^\infty$. Then $F(x_m)\neq F(x_n)$ for all $m\neq n$. Thus the collection $\{F^{-1}(t):t\in [0,1]^\omega\}$ is infinite. Each of the sets $F^{-1}(t)$ is a closed $G_\delta$-set because $F$ is continuous and $[0,1]^\omega$ is metrizable. Since these sets form an infinite family of disjoint sets that cover $X$, they cannot all be open, by the compactness of $X$.
\[Proof of Theorem \[main\]\] Let $A$ be an infinite-dimensional uniform algebra on a compact Hausdorff space $X$. Since $A$ is isometrically isomorphic to a natural uniform algebra via the Gelfand transform, we can assume without loss of generality that $A$ is natural. By Lemma \[peak\] there exists a peak set $P$ for $A$ that is not open. Choose a function $f\in A$ that peaks on $P$.
For a bounded linear functional $\Lambda$ on $A$, and a regular Borel measure $\mu$ on $X$ that represents $\Lambda$, we have by the Lebesgue dominated convergence theorem that $$\label{Cauchy}
\Lambda(f^n) = \int f^n\, d\mu \rightarrow \mu(P) \quad {\rm as\quad} n\rightarrow \infty.$$ Thus the sequence $(f^n)_{n=1}^\infty$ in $A$ is weakly Cauchy. Furthermore (\[Cauchy\]) shows that, regarded as a sequence in the double dual $A^{**}$, the sequence $(f^n)_{n=1}^\infty$ is weak\*-convergent to a functional $\Phi\in A^{**}$ that satisfies the equation $\Phi(\Lambda)=\mu(P)$ for every functional $\Lambda\in A^*$ and every regular Borel measure $\mu$ that represents $\Lambda$.
For $x\in X$, denote the point mass at $x$ by $\delta_x$. Denote the characteristic function of the set $P$ by $\chi_P$. Then $$\Phi(\delta_x)=\chi_P(x)$$ while for any function $h\in A$ we have $$\int h \, d\delta_x=h(x).$$ Since $P$ is not open in $X$, the characteristic function $\chi_P$ is not continuous and hence is not in $A$. Consequently, the two displayed equations above show that the functional $\Phi\in A^{**}$ is not induced by an element of $A$. We conclude that the weakly Cauchy sequence $(f^n)_{n=1}^\infty$ is not weakly convergent in $A$.
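The mechanism of the proof can be illustrated numerically with a choice of $X$, $f$ and $P$ that is ours, not the paper's: on $X=[0,1]$ the function $f(x)=(1+\cos\pi x)/2$ peaks on $P=\{0\}$. Against Lebesgue measure the integrals of $f^n$ tend to $\mu(P)=0$, while against the point mass $\delta_0$ they are identically $1=\chi_P(0)$, so the weak\* limit acts like the discontinuous function $\chi_P$.

```python
import math

def f(x):
    """Peaks on P = {0}: f(0) = 1 and |f(x)| < 1 for x in (0, 1]."""
    return 0.5 * (1.0 + math.cos(math.pi * x))

def integral_fn(n, steps=100000):
    """Midpoint-rule approximation of the Lebesgue integral of f^n on [0, 1]."""
    h = 1.0 / steps
    return sum(f((i + 0.5) * h) ** n for i in range(steps)) * h

# f(0)**n == 1 for every n (the delta_0 side), while integral_fn(n) -> 0
# (the Lebesgue side), mirroring Phi(delta_x) = chi_P(x).
```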
Acknowledgement: The question concerning reflexivity of uniform algebras was posed to us by Matthias Neufang, and this was the inspiration for the work presented here. It is a pleasure to thank Neufang for his question.
[BC87]{}
L. Asimow, [*Decomposable compact convex sets and peak sets for function spaces*]{}, Proc. Amer. Math. Soc. (1) [**25**]{} (1970), 75–79.
A. Browder, [*Introduction to Function Algebras*]{}, Benjamin, New York, 1969.
A. J. Ellis, [*Central Decompositions and the Essential set for the Space $A(K)$*]{}, Proc. London Math. Soc. (3) [**26**]{} (1973), 564–576.
J. R. Munkres, [*Topology*]{}, 2nd ed., Prentice-Hall, Upper Saddle River, New Jersey, 2000.
E. L. Stout, [*The Theory of Uniform Algebras*]{}, Bogden & Quigley, New York, 1971.
P. Wojtaszczyk, [*Banach Spaces for Analysts*]{}, Cambridge Studies in Advanced Mathematics, 25, Cambridge University Press, 1991.
[^1]: The first author was partially supported by a Simons collaboration grant and by NSF Grant DMS-1856010.
[**Precision measurements of sodium - sodium and sodium - noble gas molecular absorption**]{}
M. Shurgalin, W.H. Parkinson, K. Yoshino, C. Schoene\* and W.P. Lapatovich\*
Harvard-Smithsonian Center for Astrophysics, 60 Garden St, MS 14, Cambridge MA 02138 USA
$^{*}$OSRAM SYLVANIA Lighting Research, 71 Cherry Hill Dr, Beverly, MA 01915 USA
Precision measurements of molecular absorption
PACS numbers: 39.30.+w
07.60.Rd
33.20.Kf
Submitted to Measurement Science and Technology, January 2000
[**Abstract.**]{}
Experimental apparatus and measurement technique are described for precision absorption measurements in sodium - noble gas mixtures. The absolute absorption coefficient is measured in the wavelength range from 425 nm to 760 nm with $\pm $2% uncertainty and spectral resolution of 0.02 nm. The precision is achieved by using a specially designed absorption cell with an accurately defined absorption path length, low noise CCD detector and double-beam absorption measurement scheme. The experimental set-up and the cell design details are given. Measurements of sodium atomic number density with $\pm $5% uncertainty complement absorption coefficient measurements and allow derivation of the reduced absorption coefficients for certain spectral features. The sodium atomic number density is measured using the anomalous dispersion method. The accuracy of this method is improved by employing a least-squares fit to the interference image recorded with CCD detector and the details of this technique are given. The measurements are aimed at stringent testing of theoretical calculations and improving the values of molecular parameters used in calculations.
absorption cell, molecular absorption, anomalous dispersion method
[**1. Introduction**]{}
Atomic collision processes significantly influence the absorption and emission of light by atomic vapors at high pressures. As a result the absorption and emission spectra reveal not only atomic line broadening but also very broad, essentially molecular features with rich rotational-vibrational structure and satellite peaks due to formation of molecules and quasi-molecules. Since the pioneering work by Hedges [*et al.*]{} \[1\], such spectra have been a subject of extensive studies, both theoretical and experimental, and proved to be a rich source of information about the interaction potentials, collision dynamics and transition dipole moments \[2-12\]. The experimental approaches employed include absorption measurements \[4,5,8,9,12\], laser-induced fluorescence \[3,6,9\] and thermal emission spectra \[7\]. While laser-induced fluorescence and emission spectra provide the shapes and positions of many molecular bands, measurements of the absorption coefficient also cover a large spectral range. Absolute measurements of the absorption spectra may allow more comprehensive tests of theoretical calculations. As a result, better differentiation between different theoretical approaches and improved values for various molecular parameters and potentials can be obtained. However, in many cases absorption spectra are obtained on a relative scale, or only the absorption coefficient or optical depth is measured accurately. Extraction of absolute cross-sections (or reduced absorption coefficients) from traditional measurements of the optical depth, as well as any quantitative comparison of absorption spectra with theoretical calculations, requires accurate knowledge of the absorption path length and the absorbing species concentrations.
Most absorption spectroscopy experiments with hot and corrosive vapors such as sodium have been performed using heat pipes \[8,9,13\]. In a heat pipe the alkali vapor is contained in the hot middle of the furnace between cold zones where the windows are located. Buffer noble gas facilitates the alkali containment and protects the cold windows from alkali deposits. In this type of absorption cell the vapor density is not uniform at the ends of the absorption path and the path length is not accurately defined. In addition, the temperature of the vapor-gas mixture is not uniform, and at higher alkali vapor densities formation of fog at the alkali - buffer gas interface seriously affects the optical absorption coefficient measurements \[13,14\]. Absorption cells have been designed where heated windows, placed in the cell hot zone, define the absorption length with good precision \[14-16\]. The absorption cell described in \[15\] is suitable for hot sodium vapors up to 1000K, but it is difficult to make it with a long absorption path. The cell described in \[16\] is not suitable for corrosive vapors and may still have problems with window transmission due to metal deposits \[16\]. Schlejen [*et al.*]{} \[14\] designed a cell specifically for spectroscopy of sodium dimers at high temperatures. Their cell allowed a uniform concentration of absorbers and a uniform temperature up to 1450K, with an absorption length defined accurately by hot sapphire windows. However, that cell design is not suitable for spectroscopy of gas - vapor mixtures because it was completely sealed and did not easily enable changing the mixtures.
As well as defining the absorption length accurately, an equally important aspect is measuring the alkali vapor density. While the noble gas density can be calculated reasonably well from measurements of pressure and temperature using the ideal gas law, it is difficult to establish the density of alkali atoms. In the majority of reported experiments the alkali concentration was determined from the temperature and published saturated vapor pressure curves, but this approach can introduce significant uncertainties. For example, in measurements of oscillator strengths or [*f*]{}-values, significant discrepancies were often obtained between oscillator strengths measured by methods involving knowledge of the number density and by methods not requiring it \[17\]. Even if the vapor pressure curve is well known for pure saturated vapor, introducing buffer gas or using unsaturated vapors prohibits accurate knowledge of the vapor density along the absorption path. To achieve higher precision in the determination of absolute cross-sections or reduced absorption coefficients it is necessary to measure the alkali vapor density directly.
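The noble-gas side of this bookkeeping is just the ideal gas law; a minimal sketch (the 10 kPa / 800 K figures are illustrative, not operating conditions from this work):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def number_density(pressure_pa, temperature_k):
    """Noble-gas number density n = P / (k_B T), in m^-3."""
    return pressure_pa / (K_B * temperature_k)

n_gas = number_density(1.0e4, 800.0)  # ~9e23 m^-3 for 10 kPa at 800 K
```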
In this paper we describe the experimental apparatus and technique used for precision measurements, on an absolute scale, of molecular absorption coefficients in sodium vapor + noble gas mixtures. To overcome the above-mentioned difficulties with definition of the absorption length we have designed a special absorption cell. In our cell, heated sapphire windows, resistant to hot sodium vapor, are used to define the absorption path. A high temperature valve, kept at the same temperature as the cell itself, is utilized to introduce different noble gases. A separate sodium reservoir, maintained at a lower temperature, is used to control the sodium vapor pressure independently of the cell temperature. The cell can be operated at temperatures up to 900K. During the spectral measurements we measure and monitor the sodium number density at different temperatures and pressures using the anomalous dispersion or 'hook' method \[8,9,18,19\]. The 'hook' method allows accurate measurement of the [*nfl*]{} value, where [*n*]{} is the atomic number density, [*f*]{} is the atomic line oscillator strength and [*l*]{} is the absorption length. If the absorption length and the [*f*]{}-value for the sodium D lines are known, the sodium number density is accurately obtained. The next section concentrates on the details of the experiment and the absorption cell design.
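Once the hook separation is measured, the inversion to a number density is a one-line formula. The sketch below uses one common form of the hook relation, $Nfl = \pi K \Delta^2 / (r_e \lambda_0^4)$, in the convention where the hook constant $K = \lambda^2\,d(\mathrm{order})/d\lambda$ carries units of length; conventions for $K$ differ between authors, so the constant factors are an assumption to be checked against the specific interferometer, not a statement of this paper's reduction.

```python
import math

R_E = 2.8179403262e-15  # classical electron radius, m

def hook_number_density(hook_sep, hook_const, lam0, f_osc, path):
    """Atomic number density N (m^-3) from the anomalous-dispersion
    ('hook') method, using the relation
        N * f * l = pi * K * Delta**2 / (r_e * lam0**4),
    with Delta the wavelength separation of the two hooks, lam0 the line
    wavelength, and K = lam**2 * d(order)/d(lam) the hook constant in
    length units.  All lengths in metres.  K's definition varies between
    authors; verify the convention before trusting absolute numbers."""
    nfl = math.pi * hook_const * hook_sep**2 / (R_E * lam0**4)
    return nfl / (f_osc * path)
```

Note the expected scaling: the inferred density grows as the square of the hook separation.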
[**2. Experiment**]{}
[**2.1 Experimental set-up**]{}
Fig. 1 shows schematically the experimental set-up that is used for our absorption measurements. The light source is a 100W halogen lamp powered by a voltage-stabilized DC power supply. A well-collimated beam of white light is produced with the help of a condenser lens, an achromat lens, a 0.4 mm diameter pinhole aperture and another achromat lens of shorter focal length. The light beam is sent through a Mach-Zehnder interferometer and focused on the entrance slit of a 3m Czerny-Turner spectrograph (McPherson model 2163) with a combination of spherical and cylindrical lenses. An absorption cell is placed in the test-beam arm of the Mach-Zehnder interferometer. Beam blocks are used in both arms to switch the beams or block them altogether. The light beam through the reference arm of the Mach-Zehnder interferometer is used as a reference for the absorption in the usual manner of double-beam absorption spectroscopy \[9\].
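The double-beam reduction amounts to ratioing the test and reference spectra, normalizing by the same pair recorded with the cell empty, and applying Beer-Lambert over the known path. A minimal sketch; the variable names and the empty-cell normalization step are illustrative assumptions, not a description of the actual reduction code:

```python
import math

def absorption_coefficient(i_test, i_ref, i_test0, i_ref0, path_m):
    """Per-pixel absorption coefficient (m^-1) from double-beam spectra.
    i_test/i_ref: test- and reference-arm intensities with vapor present;
    i_test0/i_ref0: the same pair with the cell empty, which removes the
    fixed arm-to-arm throughput ratio.  Beer-Lambert: I = I0 exp(-k l)."""
    return [-math.log((t / r) / (t0 / r0)) / path_m
            for t, r, t0, r0 in zip(i_test, i_ref, i_test0, i_ref0)]
```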
The spectra are recorded with a two-dimensional CCD detector (Andor Technology model V420-0E) placed in the focal plane of the spectrograph. This detector has 1024 pixels horizontally and 256 pixels vertically, with a pixel size of 26 × 26 µm. For spectral measurements the detector is used in the vertical bin mode, that is, as a one-dimensional array detector. The stigmatic spectrograph has a plane diffraction grating with 2400 grooves/mm and a theoretical resolution of $\sim $ 0.005 nm at 500 nm wavelength. We used a 150 µm entrance slit width, which gives an actual resolution of 0.02 nm. At least 5 pixels of the array detector span each 0.02 nm wavelength interval, and as a result smoother spectral data are obtained from the array detector.
The overall spectral range determined by the diffraction grating and the detector sensitivity is 425 nm to 760 nm. To record spectra through this spectral range the diffraction grating is rotated through 160 different positions by a stepper motor. Backlash is avoided by rotating the grating in one direction from its calibration position at 425 nm, which in turn is set by rotating the grating beyond the calibration point. The calibration point is located by a rotation photosensor placed on the worm screw of the spectrograph sine-bar mechanism. The photosensor signal is sent to a programmable stepper motor controller (New England Affiliated Technology NEAT-310) which drives the stepper motor and allows the grating to be set at the calibration position automatically.
Each position of the grating permits recording consecutive spectral intervals, ranging from about 3 nm at 430 nm to 1.1 nm at 760 nm, as determined by the linear reciprocal dispersion of the spectrograph at a given wavelength and the overall length of the array detector. All grating positions are wavelength-calibrated using a large number of atomic lines from several different hollow cathode spectral lamps. The wavelength calibration enables identifying the wavelength of any pixel of the CCD detector within $\pm $ 0.05 nm in the range 425 to 760 nm.
To measure the sodium atomic density using the anomalous dispersion or 'hook' method, both beams through the Mach-Zehnder interferometer are unblocked and interfere to produce a spectrally dispersed two-dimensional interference pattern in the focal plane of the spectrograph. The Mach-Zehnder interferometer is adjusted to localize the interference fringes at infinity. This ensures that the integral sodium number density along the absorption path is measured. The CCD detector is used in its normal two-dimensional array mode to record the interference pattern. From the analysis of the interference pattern recorded in the vicinity of the sodium D lines, the sodium number density is derived. The general description of the 'hook' method is given in \[17,18,19\] and the details of analyzing the interference pattern recorded with the CCD detector are given in the next section. A glass plate and a stack of windows, identical to those used in the absorption cell, are placed in the reference arm of the Mach-Zehnder interferometer \[17\]. These compensating optics remain in the reference beam during spectral absorption measurements as well and have no effect on the spectral measurements due to the nature of the dual-beam absorption technique.
A simple vacuum system consisting of a turbomolecular pump (Sargent Welch model 3106S) backed by a rotary vane pump (Sargent Welch model 1402) is used to evacuate the absorption cell. The turbomolecular pump can handle short bursts of increased pressure and gas flow load and is therefore also used to pump gases from the cell. A liquid nitrogen trap is placed between the cell and the pump to trap sodium vapor. A precision pressure gauge (Omega Engineering model PGT-45) is used to measure the noble gas pressure accurately when filling the cell.
An experiment control and data acquisition computer (Pentium PC) controls the CCD detector, the spectral data acquisition and the spectrograph diffraction grating via the stepper motor controller connected to the serial port. The absorption cell temperature is monitored constantly through a number of thermocouples connected via a commercial plug-in data acquisition board (American Data Acquisition Corporation), and the cell heaters are controlled via output channels of the data acquisition board and solid state relays. Andor Technology CCD software and custom in-house software are used to perform all these tasks.
[**2.2 Absorption cell**]{}
Fig. 2 shows the schematic diagram of the absorption cell. The cell body is made of stainless steel (SS) 316 and is approximately 470 mm long with an 8 mm internal diameter. A vertical extension, 70 mm long with an 11 mm internal diameter, is welded to the middle of the cell body. A sodium reservoir, made of SS 316 with a 5.5 mm internal diameter and 70 mm length, is located at the end of this extension; it is connected using a Swagelok fitting which enables disconnection for loading sodium. The sodium reservoir is heated with a separate heater to introduce sodium vapor into the cell, or it can be cooled with a circulating water cooler to reduce the alkali number density. Both the heater and the cooler are made to slide on and off the sodium reservoir. A valve, through which the cell is evacuated and noble gases are admitted, is connected to the vertical extension. This is a special bellows-sealed high-temperature valve (Nupro Company) rated to work at temperatures up to 920 K. The valve is heated to the same temperature as the cell itself, or 5 to 10 K higher, to prevent sodium from condensing in the valve.
The major problem one faces when designing an absorption cell with heated windows is making good vacuum seals for the windows. In the case of sodium, sapphire proved to be the material of choice for the windows because of its excellent resistance to hot sodium \[14\]. However, it is difficult to make a reliable sapphire-to-metal seal that withstands repeated heating cycles up to 900 K. In our design (Fig. 2) the sapphire windows are sealed into polycrystalline alumina (PCA) tubes with special sealing ceramics (frit) used in the construction of commercial high-pressure sodium (HPS) lamps. The sealing technique used was similar to the one described by Schlejen [*et al.*]{} \[14\]. The PCA tubes have a 10.2 mm outside diameter and 110 mm length. They are made from commercial OSRAM SYLVANIA standard HPS lamps ULX880 by cutting off the slightly larger diameter portions. Since PCA and sapphire have similar thermal expansion coefficients, such a window-tube assembly retains its integrity over a wide range of temperatures. The tubes are inserted into the heated cell body so that the windows are located in the hot zone while the tube ends extend to the cooler cell ends, where Viton O-rings are used for vacuum seals. Additional external windows made of fused silica are used with O-ring seals to create an evacuated transition zone from the heated middle of the cell to the cooler ends. These silica windows are not heated.
Our cell design allows sodium to condense along the PCA tube towards the colder zone where the O-ring seals are located. To reduce the amount of sodium condensed there, the PCA tubes were carefully selected for outside diameter tolerance and straightness to closely match the internal diameter of the SS cell body at room temperature. When the cell is heated, the SS expands more than the PCA, creating some space for sodium to condense. Once the sodium build-up reaches the hot zone, no more sodium is lost into the void along the PCA tubes.
The windowed ends of the PCA tubes rest against the stepped profiles inside the SS cell body as shown in Fig. 2. These stepped profiles determine the positions of the heated windows and thus the absorption length. To ensure that the PCA tubes are always firmly pressed against these stepped profiles regardless of the thermal expansion differences between PCA and SS, compression springs are used to push the PCA tubes via stacks of spacers, made of SS 316, and the external windows. Cap nuts complete the assembly of the windows, PCA tubes, O-ring seals and spacers as Fig. 2 illustrates. These caps allow easy removal of all windows for cleaning if needed as well as adjustment of the spring compression. The compression springs are chosen to produce about 12 N force, equivalent to about 1.5 atm on the surface area of the heated windows.
The absorption length at room temperature is measured using a special tool made of two rods about 4 mm in diameter inserted into a tube of 6 mm outside diameter. One rod is permanently fixed while the other can slide in and out with friction, thus allowing a change in the overall length of the tool. The ends of the tool are rounded and polished. With one sapphire window completely in place at one end of the cell, the tool is inserted into the cell and the second PCA tube with sapphire window is put in place. The tool adjusts its length precisely to the distance between the two sapphire windows. Then the PCA tube and the tool are carefully taken out and the length of the tool is measured with a micrometer. In our cell the absorption length at room temperature was measured to be $190.03 \pm 0.025$ mm. The absorption length at operating temperature is calculated from the temperature of the cell and the thermal expansion coefficient for SS of $(18 \pm 2.2)\times 10^{-6}$ K$^{-1}$ \[20\]. Since the change in the absorption path length due to thermal expansion is a small percentage of the overall length, the large uncertainty in the thermal expansion coefficient does not lead to a large uncertainty in the resulting absorption length at a given temperature.
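The last point is easy to quantify. The sketch below applies the linear-expansion correction to the measured room-temperature length; the room temperature of 293 K is an assumption, as it is not stated in the text.

```python
# Linear thermal-expansion correction of the absorption length (values from the text).
ALPHA_SS = 18e-6     # expansion coefficient of SS 316, K^-1 (quoted +/- 2.2e-6)
L_ROOM_MM = 190.03   # absorption length measured at room temperature, mm
T_ROOM_K = 293.0     # assumed room temperature of the length measurement

def absorption_length_mm(t_kelvin: float) -> float:
    """Absorption length at the operating temperature, linear-expansion model."""
    return L_ROOM_MM * (1.0 + ALPHA_SS * (t_kelvin - T_ROOM_K))

# At 900 K the path grows by only ~1.1%, so even the ~12% uncertainty in
# ALPHA_SS changes the corrected length by little more than 0.1%.
l_hot = absorption_length_mm(900.0)   # ~192.1 mm
```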
The whole absorption cell including the valve is heated by sets of heaters made of Nickel-Chromium wire. Separate sets of heaters are used to heat the cell and the valve. Each heater set consists of two separate heaters: one is switched on constantly, while the other is operated in on-off mode, controlled from the experiment control computer, to keep the average cell and valve temperatures constant. Six type K thermocouples are used to measure the temperatures at different points. Three thermocouples are placed in contact with the main cell body, one of them in the middle and the other two at the locations of the heated windows. Another thermocouple measures the valve temperature. Two thermocouples measure the temperatures at the bottom and at the top of the sodium reservoir. The heaters are isolated from each other and from the SS parts of the cell by embedding them in insulation made of moldable Alumina-Silica blankets (Zircar Products). All heated parts are wrapped in thermal insulation made of Alumina blankets (Zircar Products). The positions of the heaters are chosen as shown schematically in Fig. 2. The middle part of the cell, of $\sim $60 mm length, does not have heaters but is nevertheless heated sufficiently by thermal conduction. Also, the thermal insulation is adjusted in such a way that the temperature, measured with the thermocouple in the middle of the cell, is 5 to 10 K lower than the temperature at the points where the heated windows are located. Heating the sapphire windows to a slightly higher temperature ensures they remain free from any deposits during the operation of the cell.
The cell body thermocouple and the valve thermocouple readings give an average temperature of the cell body. The gas mixture temperature is assumed to be equal to this average temperature. Since the thermocouples are located between the SS cell body and the heaters, they may read slightly higher than the actual temperature of the cell and the gas inside it. Given this fact and the temperature reading differences between the thermocouples, the uncertainty in the gas mixture temperature is estimated to be $+10$ K / $-50$ K.
[**2.3 Measurement technique**]{}
Absorption spectroscopy is based on Beer’s law, which describes the absorption of light in a homogeneous absorbing medium:
$$I_{1} \left( {\lambda} \right) = I_{0} \left( {\lambda} \right)\exp\left( - k\left( {\lambda} \right)l \right)$$
where [*I*]{}$_{1}$ is the transmitted intensity of light, [*I*]{}$_{0}$ is the incident intensity of light, [*k*]{} is the absorption coefficient and [*l*]{} is the absorption length. In real experimental measurements the transmission through the optics, absorption cell windows and spectrograph, the detector sensitivity and the light source spectral characteristics all have to be taken into account. In the dual beam arrangement for absorption spectroscopy the test $S_{t} \left( {\lambda} \right)$ and reference $S_{r} \left( {\lambda} \right)$ beam spectra, obtained from the detector, are
$$\label{eq1}
S_{t} \left( {\lambda} \right) = k_{t}^{0} \left( {\lambda} \right) I_{0} \left( {\lambda} \right) \exp\left( - \tau_{t} \left( {\lambda} \right) - k\left( {\lambda} \right)l \right)$$
$$\label{eq2}
S_{r} \left( {\lambda} \right) = k_{r}^{0} \left( {\lambda} \right) I_{0} \left( {\lambda} \right) \exp\left( - \tau_{r} \left( {\lambda} \right) \right)$$
where [*I*]{}$_{0}$ is the intensity of the light source, $k\left( {\lambda} \right)$ is the absorption coefficient to be measured, [*l*]{} is the absorption length and $k_{t}^{0} $, $\tau _{t} $ and $k_{r}^{0} $, $\tau _{r} $ are the coefficients that take into account the detector efficiency, the absorption of all optical elements such as windows and lenses, and the spectrograph transmission. To eliminate the unknown parameters represented in (\[eq1\]) and (\[eq2\]) by $k_{t}^{0} $, $\tau _{t} $, $k_{r}^{0} $ and $\tau _{r} $, we first measure the reference spectra (the spectra obtained without sodium in the absorption path and thus without the atomic and molecular absorption we are interested in). The sodium concentration in the absorption path is reduced to less than 10$^{14}$ cm$^{-3}$ by cooling the sodium reservoir down to between +5 and +10 °C using the circulating water cooler around it. At densities below 10$^{14}$ cm$^{-3}$ the molecular absorption of both sodium-sodium and sodium-noble gas is negligible and $k\left( {\lambda} \right) = 0$. Both test and reference beam spectra are taken at each grating position and their ratio
$$\label{eq3}
\frac{S_{t}^{0} \left( {\lambda} \right)}{S_{r}^{0} \left( {\lambda} \right)} = \frac{k_{t}^{0}}{k_{r}^{0}} \exp\left( \tau_{r} \left( {\lambda} \right) - \tau_{t} \left( {\lambda} \right) \right)$$
is calculated. The reference spectra thus obtained contain information about all unknown parameters. To reduce the statistical error, a number of measurements are performed and the average is calculated.
To measure the absorption spectra of sodium-sodium and sodium-noble gas molecules, the sodium vapor is introduced in the absorption path by heating the sodium reservoir. Both test and reference beam spectra are taken at each diffraction grating position and their ratio is calculated:
$$\label{eq4}
\frac{S_{t}^{Na} \left( {\lambda} \right)}{S_{r}^{Na} \left( {\lambda} \right)} = \frac{k_{t}^{0}}{k_{r}^{0}} \exp\left( \tau_{r} \left( {\lambda} \right) - \tau_{t} \left( {\lambda} \right) - k\left( {\lambda} \right)l \right)$$
Once again, a number of measurements are performed and averaged to reduce the statistical error. From (\[eq3\]) and (\[eq4\]) it follows that the absorption coefficient [*k*]{}($\lambda $) is obtained from measurements of the absorption and reference spectra with all unknown parameters eliminated:
$$k\left( {\lambda} \right) = - \frac{1}{l}\ln\left( \frac{S_{t}^{Na}/S_{r}^{Na}}{S_{t}^{0}/S_{r}^{0}} \right)$$
Using the procedure described above we are able to measure the absolute absorption coefficient with a statistical error as small as $\pm 1$ % in the range 425 - 760 nm with spectral resolution $\sim $0.02 nm.
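The double-ratio cancellation above is straightforward to implement numerically. The following sketch, with illustrative values that are not from the measurements, shows how every instrument factor drops out before the logarithm is taken:

```python
import numpy as np

def absorption_coefficient(s_t_na, s_r_na, s_t_0, s_r_0, length_cm):
    """Absolute absorption coefficient k(lambda) in cm^-1 from the four spectra:
    test/reference beams with sodium (Na) and without it (0).  The instrument
    factors k_t^0, k_r^0, tau_t and tau_r cancel in the double ratio."""
    double_ratio = (np.asarray(s_t_na) / np.asarray(s_r_na)) / \
                   (np.asarray(s_t_0) / np.asarray(s_r_0))
    return -np.log(double_ratio) / length_cm

# Synthetic check: arbitrary beam/optics factors drop out and k is recovered.
k_true, length = 0.05, 19.0          # cm^-1 and cm, illustrative numbers
inst_t, inst_r = 2.3, 1.7            # arbitrary instrument factors
k_meas = absorption_coefficient(inst_t * np.exp(-k_true * length), inst_r,
                                inst_t, inst_r, length)   # -> 0.05 cm^-1
```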
Derivation of the reduced absorption coefficient for sodium-sodium and sodium-noble gas quasi-molecules requires accurate knowledge of the atomic number densities. The atomic density of the noble gas is calculated from pressure and temperature using the ideal gas relationship, and the sodium density is measured by the ’hook’ method \[17-19\]. Fig. 3 shows the ’hook’ interference pattern recorded with the CCD detector in the focal plane of the spectrograph. The analysis of this pattern and the extraction of the sodium atomic number density are performed by a least-squares fit of the interference fringe model to the recorded pattern, using software specifically written for this purpose. The following equation describes the positions $ y_{k} $ of the interference fringes of maximum intensity in the focal plane of the spectrograph \[19\]:
$$y_{k} = a\left( k\lambda - \left( \frac{A_{1}}{\lambda _{1} - \lambda} + \frac{A_{2}}{\lambda _{2} - \lambda} \right)N + \Delta n\,l \right) \, , \qquad
A_{1} = \frac{r_{0} g_{1} f_{1} l\lambda _{1}^{3}}{4\pi} \, , \quad A_{2} = \frac{r_{0} g_{2} f_{2} l\lambda _{2}^{3}}{4\pi}$$
where $r_{0} $ is the classical electron radius, $g_{1}, f_{1}, \lambda _{1} $ are respectively the [*g*]{}-factor, the oscillator strength and the wavelength of the sodium D1 line, $g_{2}, f_{2}, \lambda _{2} $ are respectively the [*g*]{}-factor, the oscillator strength and the wavelength of the sodium D2 line, [*l*]{} is the absorption path length, $\Delta n$ is the coefficient accounting for the optical path length difference between the test and reference beams of the Mach-Zehnder interferometer, [*a*]{} is the scaling factor accounting for the imaging properties of the optical set-up, [*k*]{} is the fringe order and [*N*]{} is the sodium number density. The above equation is valid at wavelengths separated from the atomic line core by more than the FWHM of the broadened line, $\lambda - \lambda _{i} \gg \Delta \lambda $ \[17\]. Our model calculations of the ’hook’ interference pattern, which included the atomic line broadening, showed that the error introduced by neglecting the atomic line broadening in the above equation is negligible when the atomic number density of sodium is above $5\times10^{14}$ cm$^{-3}$ and the noble gas pressure is below 500 Torr. These conditions are always met in our measurements. After some algebraic manipulations the following fringe model equation is obtained, which gives the positions $y_{i} $ of a number of fringes in terms of 2D CCD detector coordinates and fit parameters:
$$y_i = a_3 + a_2 \lambda + a_1 i\lambda + a_4 \left( {\frac{{A_1
}}{{\lambda _1 - \lambda }} + \frac{{A_2 }}{{\lambda _2 - \lambda }}}
\right)$$
where $y_{i} $ is the vertical fringe coordinate at a given wavelength $\lambda $, [*i*]{} = 0,1,2,3,4,5 denotes adjacent fringes seen by the detector and $a_{1} $, $a_{2} $, $a_{3} $ and $a_{4} $ are the fit parameters. The sodium number density is calculated from fit parameters $a_{1} $ and $a_{4} $:
$$N = \frac{{a_{4}} }{{a_{1}} }$$
From the recorded interference pattern three to five interference fringes, defined at maximum intensity, are extracted on each side of the sodium doublet, and the ([*y, $\lambda $*]{}) coordinates for each fringe are calculated from the CCD pixel coordinates and wavelength calibration to provide the data set for the least-squares fit. Fig. 4a shows the interference fringes obtained from the image presented in Fig. 3 and Fig. 4b shows the fitted model curves. Since a large number of points are used to locate the fringe positions, higher accuracy can be achieved in comparison with traditional methods of extracting the atomic number density from measuring the location of the ’hook’ maxima of a single fringe \[17\]. The main limitation on the accuracy of this new technique is imposed by the wavelength calibration, especially at lower atomic densities. Using the system described above, the sodium density is routinely measured with $\pm 2$ % uncertainty, given the wavelength uncertainty of $\pm 0.03$ nm in the vicinity of the sodium D-lines and the uncertainty in the interference fringe position of $\pm 5$ pixels. During consecutive spectral measurements used in calculating the resultant average spectra, the sodium number density was measured at the beginning of each measurement and was found to remain constant within $\pm 4$ % to $\pm 5$ %.
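Because the fringe model is linear in the parameters $a_1 \ldots a_4$, the fit is an ordinary linear least-squares problem. The sketch below generates synthetic fringes for a known density and recovers it via $N = a_4/a_1$; the oscillator strengths, path length and imaging parameters are illustrative assumptions, not the values of the actual apparatus.

```python
import numpy as np

# Assumed constants in nm-based units (not the apparatus values).
R0 = 2.818e-6                  # classical electron radius, nm
LAM1, LAM2 = 589.592, 588.995  # Na D1, D2 wavelengths, nm
F1, F2 = 0.320, 0.641          # illustrative oscillator strengths (g-factors = 1)
L_NM = 1.92e8                  # absorption path length, nm (19.2 cm)

A1 = R0 * F1 * L_NM * LAM1**3 / (4.0 * np.pi)
A2 = R0 * F2 * L_NM * LAM2**3 / (4.0 * np.pi)

def hook_term(lam):
    """Anomalous-dispersion term H(lambda) entering the fringe model."""
    return A1 / (LAM1 - lam) + A2 / (LAM2 - lam)

def fit_sodium_density(lam, order, y):
    """Least-squares fit of y = a3 + a2*lam + a1*i*lam + a4*H(lam);
    the density follows as N = a4/a1 (in nm^-3), converted here to cm^-3."""
    M = np.column_stack([np.ones_like(lam), lam, order * lam, hook_term(lam)])
    a3, a2, a1, a4 = np.linalg.lstsq(M, y, rcond=None)[0]
    return (a4 / a1) * 1e21

# Synthetic fringes on both sides of the doublet, away from the line cores.
N_TRUE = 2.0e16                                   # cm^-3
a1_t, a2_t, a3_t = 0.01, 0.001, 50.0              # arbitrary imaging parameters
a4_t = a1_t * N_TRUE * 1e-21                      # density expressed in nm^-3
lam_side = np.concatenate([np.linspace(586.0, 588.2, 40),
                           np.linspace(590.4, 592.6, 40)])
lam = np.tile(lam_side, 6)
order = np.repeat(np.arange(6.0), lam_side.size)  # six adjacent fringes
y = a3_t + a2_t * lam + a1_t * order * lam + a4_t * hook_term(lam)

n_fit = fit_sodium_density(lam, order, y)         # ~2.0e16 cm^-3
```

Because all fringes share the dispersion term, fitting several fringes simultaneously constrains $a_4/a_1$ far better than locating the hook maxima of a single fringe.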
[**3. Measurement results**]{}
Fig. 5 presents a spectrum of the absolute absorption coefficient of a sodium - xenon mixture measured at $900K^{+10K}_{-50K} $ cell temperature. The Xe pressure is 400 $\pm $ 0.5 Torr, which gives a xenon density of $4.29\times10^{18}$ cm$^{-3}$ at 900 K. The sodium density is measured as $(2.05 \pm 0.06)\times 10^{16}$ cm$^{-3}$. The absorption coefficient in the 425 nm to 760 nm range consists of contributions from the broadened sodium atomic lines around 589 nm, the sodium - noble gas and the sodium - sodium molecular spectra. From 460 nm to about 550 nm the blue wing of the sodium dimer absorption is apparent \[5\]. At 560 nm there appears a sodium-xenon blue wing satellite feature \[6\], and towards longer wavelengths of the significantly broadened sodium D-lines there are the red wings of the sodium dimer \[5\] and the sodium-xenon molecules \[6\]. Fig. 6 shows a spectrum of the absolute absorption coefficient of a sodium - argon mixture measured at $900K^{+10K}_{-50K} $ cell temperature. The Ar pressure is 401 $\pm $ 0.5 Torr, which gives an argon density of $4.3\times10^{18}$ cm$^{-3}$ at 900 K. The sodium density is measured as $(1.00 \pm 0.04)\times 10^{16}$ cm$^{-3}$. This spectrum is similar to the sodium - xenon spectrum shown in Fig. 5 except that the sodium - argon blue wing satellite is located at a slightly shorter wavelength of 554.5 nm and the sodium - argon red wing extends further from the sodium atomic line core. The magnitude of the absorption coefficient is lower in proportion to the lower sodium density.
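The quoted noble-gas densities follow directly from the ideal gas law; a quick sketch (standard constants, nothing specific to the apparatus) reproduces the xenon value:

```python
K_B = 1.380649e-23     # Boltzmann constant, J/K
TORR_TO_PA = 133.322   # 1 Torr in pascal

def number_density_cm3(pressure_torr: float, temperature_k: float) -> float:
    """Ideal-gas number density n = P/(k_B T), returned in cm^-3."""
    n_per_m3 = pressure_torr * TORR_TO_PA / (K_B * temperature_k)
    return n_per_m3 * 1e-6   # m^-3 -> cm^-3

# 400 Torr of Xe at the 900 K cell temperature:
n_xe = number_density_cm3(400.0, 900.0)   # -> ~4.29e18 cm^-3, as quoted above
```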
Fig. 7 illustrates the rotational-vibrational features of the sodium dimer absorption at 0.02 nm resolution in the vicinity of the 520 nm band. The features are a complicated superposition of many rotational-vibrational bands of the sodium dimer, and identification of these bands has not yet been attempted. The statistical uncertainty in the absorption coefficient magnitude is indicated. This uncertainty includes both the detector statistical errors and the uncertainty in the absorption path length. At any wavelength in the 425 nm to 760 nm range the uncertainty in the absorption coefficient does not exceed $\pm 2$ % where the absorption coefficient values are larger than 0.008 cm$^{-1}$. The measured spectra can serve as stringent quantitative tests of theoretical calculations \[21\]. Preliminary comparisons showed good overall agreement between the measurements and theoretical calculations at a temperature of 870 K, which is within our experimental temperature uncertainty \[22\]. Full details of the calculations and comparisons with experiment will be presented in a forthcoming publication \[23\].
A reduced absorption coefficient is calculated for the blue wing of the sodium dimer absorption, which is well separated from the rest of the spectrum, using the measured absorption coefficient and sodium atomic number density; it is presented in Fig. 8. Apart from the broad and strong molecular absorption arising mostly from bound-bound transitions between the $X\,{}^{1}\Sigma _{g}^{+} $ and $B\,{}^{1}\Pi _{u} $ singlet states of the sodium dimer, there are also features from the triplet transitions $2\,{}^{3}\Pi _{g} \leftarrow a\,{}^{3}\Sigma _{u}^{+} $ and $c\,{}^{3}\Pi _{g} \leftarrow a\,{}^{3}\Sigma _{u}^{+} $ \[5,24\].
Since the sodium-xenon molecular absorption bands are very close to the sodium D-lines, it is difficult to separate them completely from the atomic lines and to derive the reduced absorption coefficient. Fig. 9 presents the absolute absorption coefficient in the vicinity of the sodium-xenon blue satellite features at different xenon densities, at a sodium density of $7.7\times10^{15}$ cm$^{-3}$ and a temperature of 900 K. There are two satellite features present, at 560 nm and 564 nm. The positions and relative magnitudes of these features can provide some insights into the potentials of the sodium-xenon molecule as well as the transition dipole moments \[21,23\].
[**4. Conclusions**]{}

Details of precision absorption measurements in sodium - noble gas mixtures at high spectral resolution have been presented. To perform more stringent tests of theoretical calculations and of the molecular parameters used in the calculations, the goal was to obtain the absorption coefficient spectra on an absolute scale with better than $\pm 2$ % uncertainty at near-UV and visible wavelengths. To achieve such precision, an absorption cell for the containment of sodium vapor with an accurately defined absorption path was constructed. The measurements were performed using a dual-beam absorption scheme eliminating all unknown parameters such as the detector sensitivity and optics transmission. A low noise CCD detector was used to record the spectra and a number of separate measurements were averaged to reduce the statistical error. To measure the alkali number density accurately, the anomalous dispersion or ’hook’ method was employed. The accuracy of the ’hook’ method was improved by means of a least-squares fit to the interference fringe image recorded with the 2D CCD detector in the focal plane of the spectrograph. The measurements obtained with the apparatus and technique described extend the available data on sodium - sodium and sodium - noble gas absorption to different temperatures and higher precision and spectral resolution.
This work is supported in part by the National Science Foundation under grant No. PHY97-24713. The authors would like to acknowledge useful discussions with J.F. Babb, H. Adler and G. Lister and generous equipment and materials support from OSRAM SYLVANIA.
[**References**]{}
\[1\] R.E. Hedges, D.L. Drummond and A. Gallagher 1972 [*Phys. Rev. A*]{} [**6**]{}, 1519
\[2\] D.L. Drummond and A. Gallagher 1974 [*J. Chem. Phys.*]{} [**60**]{}, 3246
\[3\] W. Demtröder and M. Stock 1975 [*J. Mol. Spectr.*]{} [**55**]{}, 476
\[4\] J.F. Kielkopf, and N.F. Allard 1980 [*J. Phys. B*]{} [**13**]{}, 709
\[5\] J. Schlejen, C.J. Jalink, J. Korving, J.P. Woerdman and W. Müller 1987 [*J. Phys. B*]{} [**20**]{}, 2691
\[6\] K.J. Nieuwesteeg, Tj. Hollander and C.Th. J. Alkemade 1987 [*J. Phys. B*]{} [**20**]{}, 515
\[7\] J. Schlejen, J.P. Woerdman and G. Pichler 1988 [*J. Mol. Spectr.*]{} [**128**]{}, 1
\[8\] K. Ueda, H. Sotome and Y. Sato 1990 [*J. Chem. Phys.*]{} [**94**]{}, 1903
\[9\] K. Ueda, O. Sonobe, H. Chiba and Y. Sato 1991 [*J. Chem. Phys.*]{} [**95**]{}, 8083
\[10\] D. Gruber, U. Domiaty, X. Li, L. Windholz, M. Gleichmann and B. A. Heß 1994 [*J. Chem. Phys.*]{} [**102**]{}, 5174
\[11\] J. Szudy and W.E. Baylis 1996 [*Phys. Rep.*]{} [**266**]{}, 127
\[12\] P.S. Erdman, K.M. Sando, W.C. Stwalley, C.W. Larson, M.E. Fajardo 1996 [*Chem. Phys. Lett.*]{} [**252**]{}, 248
\[13\] A. Vasilakis, N.D. Bhaskar and W. Happer 1980 [*J. Chem. Phys.*]{} [**73**]{}, 1490
\[14\] J. Schlejen, J. Post, J. Korving and J.P. Woerdman 1987 [*Rev. Sci. Instrum.*]{} [**58**]{}, 768
\[15\] A.G. Zajonc 1980 [*Rev. Sci. Instrum.*]{} [**51**]{}, 1682
\[16\] Y. Tamir and R. Shuker 1992 [*Rev. Sci. Instrum.*]{} [**63**]{}, 1834
\[17\] W.H. Parkinson 1987 [*Spectroscopy of Astrophysical Plasmas (Cambridge University Press)*]{}
\[18\] D. Roschestwensky 1912 [*Ann. Physik,*]{}[**39**]{}, 307
\[19\] M.C.E. Huber and R.J. Sandeman 1986 [*Rep. Prog. Phys.*]{} [**49**]{} 397
\[20\] American Institute of Physics Handbook 1972 4-138 [*(McGraw Hill Book Company)*]{}
\[21\] H-K. Chung, M. Shurgalin and J.F. Babb 1999 [*52*]{}$^{nd}$[*GEC, Bull. APS*]{}, [**44**]{}, 31
\[22\] H-K. Chung and J.F. Babb, [*private communication*]{}
\[23\] H-K. Chung, K. Kirby, J.F. Babb and M. Shurgalin, 2000, [*in preparation*]{}
\[24\] D. Veza, J. Rukavina, M. Movre, V. Vujnovic and G. Pichler 1980 [*Optics Comm.*]{} [**34**]{} 77
[**Figure 1.**]{} Schematic diagram of the experimental set-up.
[**Figure 2.**]{} Schematic diagram of the absorption cell.
[**Figure 3.**]{} Image of a ’hook’ pattern obtained with a two-dimensional CCD detector.
[**Figure 4.**]{} Interference fringes extracted from ’hook’ pattern image (a) and fitted model curves (b).
[**Figure 5.**]{} Absorption coefficient of sodium - xenon mixture at 900 K.
[**Figure 6.**]{} Absorption coefficient of sodium - argon mixture at 900 K.
[**Figure 7.**]{} Rotational-vibrational features of sodium dimer absorption spectra at 0.02 nm resolution.
[**Figure 8.**]{} Reduced absorption coefficient for the blue wing of sodium dimer molecular absorption.
[**Figure 9.**]{} Blue wing of sodium-xenon molecular absorption.
---
abstract: |
The interplay between crystal symmetry and charge stripe order in [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} has been studied by means of single crystal x-ray diffraction. In contrast to tetragonal [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{}, these crystals are orthorhombic. The corresponding distortion of the $\rm NiO_2$ planes is found to dictate the direction of the charge stripes, similar to the case of diagonal spin stripes in the insulating phase of [$\rm La_{2- \it x} Sr_{\it x} Cu O_{4}$]{}. In particular, diagonal stripes seem to always run along the short $a$-axis, which is the direction of the octahedral tilt axis. In contrast, no influence of the crystal symmetry on the charge stripe ordering temperature itself was observed, with $T_{\rm CO}\sim$240 K for La, Pr, and Nd. The coupling between lattice and stripe degrees of freedom allows one to produce macroscopic samples with unidirectional stripe order. In samples with stoichiometric oxygen content and a hole concentration of exactly 1/3, charge stripes exhibit a staggered stacking order with a period of three $\rm NiO_2$ layers, previously only observed with electron microscopy in domains of mesoscopic dimensions. Remarkably, this stacking order starts to melt about 40 K below $T_{\rm CO}$. The melting process can be described by mixing the ground state, which has a 3-layer stacking period, with an increasing volume fraction with a 2-layer stacking period.
author:
- 'M. Hücker$^{1}$, M. v. Zimmermann$^{2}$, R. Klingeler$^{3}$, S. Kiele$^{3}$, J. Geck$^{3}$, S. N. Bakehe$^{4}$, J. Z. Zhang$^{1,5}$, J. P. Hill$^{1}$, A. Revcolevschi$^{6}$, D. J. Buttrey$^{7}$, B. Büchner$^{3}$, and J. M. Tranquada$^{1}$'
title: 'Unidirectional Diagonal Order and 3D Stacking of Charge Stripes in Orthorhombic $\bf Pr_{1.67}Sr_{0.33}NiO_4$ and $\bf Nd_{1.67}Sr_{0.33}NiO_4$.'
---
Introduction
============
In recent years the nickelate [$\rm La_{2- \it x}Sr_{\it x}NiO_{4+\delta}$]{} has been studied intensively because of its similarity to the high temperature superconductor [$\rm La_{2- \it x} Sr_{\it x} Cu O_{4}$]{}. [@Tranquada98e; @Chen93aN; @Kajimoto03aN; @Ishizaka03aN; @Boothroyd03aN; @Homes03aN] Of particular interest is that, in both materials, incommensurate spin correlations and lattice modulations are observed which are consistent with the concept of charge and spin stripes. [@Tranquada95a] Undoped, both systems are insulating antiferromagnets (AF). Doping with Sr ($x$) or O ($\delta$) introduces hole-like charge carriers, at a concentration $p=x+2\delta$, into the $\rm NiO_2$ and $\rm CuO_2$ planes, which leads to a suppression of the commensurate AF order. However, with increasing hole concentration both systems show a strong tendency towards a frustrated electronic phase separation. [@Emery93] For certain compositions the holes segregate into one-dimensional charge stripes, separating intermediate spin stripes with low hole concentration. [@Tranquada95a] In [$\rm La_{2- \it x}Sr_{\it x}NiO_{4}$]{}, stripe order results in an insulating ground state. [@Cheong94a] Stripes run diagonally to the $\rm NiO_2$ square lattice and are most stable for $x=0.33$, with the charges ordering at $T_{\rm CO}\sim$240 K and the spins at $T_{\rm SO}\sim$190 K. [@Cheong94a; @Tranquada95a; @Yoshizawa00aN; @Kajimoto03aN] Stripes can be identified by various techniques. [@Chen93aN; @Tranquada95a; @Vigliante97a; @Yoshinari99aN; @Lee02aN; @Li03aN; @Blumberg98aN; @Abbamonte05a; @Langeheine05aN] Here we focus on the characterization of the charge stripes with x-ray diffraction by probing the lattice modulation associated with the spatial modulation of the charge density. [@Vigliante97a]
We are particularly interested in the response of the charge stripe order to changes of the crystal lattice symmetry. For certain cuprates it was shown that lattice distortions can pin stripes or influence their orientation with respect to the lattice. The most prominent example is observed in [$\rm La_{2- \it x} Ba_{\it x} Cu O_{4}$]{} and Nd or Eu-doped [$\rm La_{2- \it x} Sr_{\it x} Cu O_{4}$]{} around a hole doping of $x=1/8$, where stripes are pinned parallel to the $\rm CuO_2$ square lattice as a result of a structural transition from orthorhombic to tetragonal symmetry. [@Fujita04a; @Tranquada95a; @Niemoeller99a; @Klauss00a] In samples with a fully developed static stripe order, superconductivity is strongly suppressed. [@Wagener97a; @Tranquada97a; @Klauss00a; @Fujita04a] Another example is observed in orthorhombic [$\rm La_{2- \it x} Sr_{\it x} Cu O_{4}$]{} below the metal insulator (MI) transition ($x<0.06$), i.e., in the short-range ordered spin-stripe phase. The ground state is insulating, with spin stripes running diagonally to the square lattice. [@Wakimoto99a; @Fujita02a] In this case the coupling to the orthorhombic distortions is evident from the finding that the spin stripes always order parallel to the orthorhombic $a$-axis. [@Wakimoto00a] Note that, below the MI transition, no diffraction evidence for charge stripes has been found. Therefore, one has to consider the possibility that here the incommensurate magnetism might emerge from ground states different from stripe order, such as helical spin structures or disorder-induced spin topologies. [@Shraiman92a; @Gooding97; @Berciu04a; @Sushkov04a; @Luescher05a; @Lindgard05a]
To gain insight into the mechanisms of the charge-lattice coupling in the cuprates, we have carried out a study of the diagonal stripes in the isostructural nickelates. The particular question which motivated this work is: do lattice distortions in the nickelates, which are very similar to those in the cuprates, have an impact on the stripe orientation? In this context, we consider the crystal structure of [$\rm Ln_{2- \it x}Sr_{\it x}NiO_{4+\delta}$]{} with Ln=La, Pr, and Nd.
As summarized in Fig. \[fig1\](a), undoped [$\rm La_{2}NiO_{4}$]{} (open symbols) undergoes two structural transitions: From the high-temperature-tetragonal (HTT) phase to the low-temperature-orthorhombic (LTO) phase at $\rm T_{HT}\simeq 750$ K, and from the LTO phase to the low-temperature less-orthorhombic (LTLO) phase at $\rm T_{LT}\simeq 80$ K. (See Sec. \[structure\] for definition of phases.) With increasing Sr content, both transitions decrease in temperature and, above a critical Sr content of $x_c\simeq 0.2$, the HTT phase is stable for all temperatures. [@Rodriguez88aN; @Medarde97aN; @Sachan95aN; @FriedtDipl; @Huecker04a] Hence, in [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{}, stripe order takes place in a tetragonal lattice, which results in an equal population of domains with stripes running along the orthorhombic \[100\] and \[010\] directions. [@Boothroyd03aN; @Woo05aN] Obviously, such stripe twinning complicates any anisotropy study of stripes, such as electronic transport parallel and perpendicular to the charge stripes or magnetic excitations parallel and perpendicular to the spin stripes.
The substitution of trivalent La with the smaller, isovalent Pr or Nd causes a significant increase of the chemical pressure on the $\rm NiO_2$ planes. [@Huecker04a; @Ishizaka03aN] As a result, the HTT/LTO phase boundary shifts to much higher temperatures and Sr concentrations, so that, even for $x=0.33$, the transition takes place above room temperature (see Fig. \[fig1\]). Therefore, the Pr- and Nd-based systems allow us to investigate the formation of stripes under the influence of orthorhombic strain. The effect of the Pr and Nd substitution on both the high-temperature and the low-temperature structural transitions has been characterized by several groups, although in some studies samples with non-stoichiometric oxygen content were investigated. [@Martinez91aN; @Buttrey90aN; @Medarde94aN] In Ref. , the charge order in [$\rm Nd_{2- \it x}Sr_{\it x}NiO_{4}$]{} with $x\geqslant 0.33$ was studied with the focus on the evolution from stripe to checkerboard order.
In the present paper we focus on the coupling between charge stripes and lattice distortions in the LTO phase of [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}. In particular, we find that stripes always order parallel to the short $a$-axis of the LTO unit cell. This will allow one to produce macroscopic samples with unidirectional stripe order by detwinning the crystals under unidirectional strain. The identified stripe orientation is the same as for the short range spin stripe order in lightly doped [$\rm La_{2- \it x} Sr_{\it x} Cu O_{4}$]{} with LTO structure, as will be discussed. The critical temperature, [$T_{\rm CO}$]{}, turns out to be the same as for [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{}, i.e., no significant dependence on the orthorhombic strain is observed. Furthermore, we find a strong dependence of the stacking order of stripes along the $c$-axis on the oxygen and hole concentration. Notably, crystals containing excess oxygen exhibit strong stacking disorder. Crystals with stoichiometric oxygen content show a tendency towards a 3-layer stacking period, resulting in a unit cell enlarged by a factor of 1.5 along the $c$-direction. However, only for a hole concentration of exactly 1/3 does this 3-layer period lead to a well defined superstructure.
Crystal structure {#structure}
=================
[$\rm Ln_{2- \it x}Sr_{\it x}NiO_{4+\delta}$]{}, with $\rm Ln=La$, Pr, or Nd, crystallizes in the $\rm K_2NiF_4$ structure and at high temperature is expected to assume tetragonal symmetry, space group $I4/mmm$, regardless of the Sr and O contents. [@Tamura96aN; @Sullivan91aN] For convenience, it is common to index all phases on the basis of the orthorhombic $\sqrt{2}a \times
\sqrt{2}b \times c$ supercell, in which case the space group of the HTT phase is $F4/mmm$. [@Huecker04a] As a function of temperature and doping, a total of four structural phases are observed, which can be described by different buckling patterns of the $\rm NiO_6$ octahedral network. [@Axe89; @Crawford91; @Huecker04a] The HTT phase is the only phase with untilted octahedra, i.e., the $\rm NiO_2$ planes are flat \[cf. Fig. \[fig1\](e)\]. In the LTO phase, space group $Bmab$, octahedra rotate about the $a$-axis, which runs diagonally to the $\rm NiO_2$ square lattice. In the LTT phase, space group $P4_2/ncm$, the octahedral tilt axis is parallel to the Ni-O bonds, which means that it has rotated by $\alpha = 45^\circ$ with respect to the LTO phase. Moreover, its direction alternates between \[110\] and \[1-10\] from plane to plane. The LTLO phase, space group $Pccn$, is an intermediate between LTO and LTT with $\alpha < 45^\circ$.
Experiment
==========
Two single crystals, with compositions [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}, each 4.5 mm in diameter and several centimeters in length, were grown at the Laboratoire de Physico-Chimie des Solides at Orsay by the travelling-solvent floating-zone method. In both cases, crystal growth was performed in 1 atm of air. The polycrystals used to map the phase diagram in Fig. \[fig1\] were synthesized by a standard solid-state reaction. [@BakeheDiss; @FriedtDipl] To remove non-stoichiometric oxygen interstitials (excess oxygen), the polycrystals were annealed under reducing conditions that depended on the Sr content. Powder x-ray diffraction patterns of the polycrystals for temperatures up to 800 K were taken with a standard laboratory diffractometer. [@BakeheDiss; @FriedtDipl]
Single crystal x-ray diffraction experiments were performed at beamline X22C of the National Synchrotron Light Source (NSLS) at Brookhaven and at beamline BW5 of the Hamburg Synchrotron Laboratory (HASYLAB). At BW5, the photon energy was set to 100 keV. [@Bouchard98] At this energy, a sample several millimeters thick can be studied in transmission geometry, allowing one to probe its bulk. In contrast, at Brookhaven 8.1 keV photons, which have a penetration depth on the order of $10~\mu$m, were used, so that, here, samples were studied in reflection geometry. The [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} crystal, with a polished and twinned \[100\]/\[010\] surface, was first studied in the as-grown-in-air state at X22C and BW5. Subsequently, it was studied again at X22C, after removing the excess oxygen ($\delta =0$) by Ar-annealing at 900$^\circ$C for 24 h. The [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal was studied at BW5 after being subjected to an identical Ar-annealing. For the hard x-ray diffraction experiments no polished surface is required.
The specific heat of the Ar-annealed [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} crystal was measured with a Physical Property Measurement System from Quantum Design. The $ab$-plane resistivity of a bar-shaped, Ar-annealed [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal was measured using the four-probe technique.
Results
=======
Phase diagram
-------------
To put the results for the two single crystals ($x=0.33$) into context with [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{}, we refer back to Fig. \[fig1\](a), the Sr-doping phase diagram for Ln=La, Pr, and Nd. [@BakeheDiss; @FriedtDipl] The shift of the HTT/LTO phase boundary to higher temperatures $T_{\rm HT}$ and $x$ with decreasing rare earth ionic radius is obvious ($\rm La^{\rm
3+}>Pr^{\rm 3+}>Nd^{\rm 3+}$). In those cases where $T_{\rm HT}>800$ K, the transition temperature was determined by linearly extrapolating the orthorhombic strain to higher temperatures. The low-temperature structural transition was observed in all orthorhombic samples. However, it changes qualitatively as a function of the Sr content. At low $x$, the transition is of the discontinuous LTO/LTLO type whereas, at higher $x$, it becomes continuous. No reliable information was obtained on the crossover concentrations, or on whether these differ between the La-, Pr-, and Nd-based systems. However, with $x$ approaching $x_c(T_{\rm HT}=0)$, the transition becomes very broad and the orthorhombicity barely decreases below [$T_{\rm LT}$]{}. For Nd-based samples, similar results were reported in Ref. . Note that while in the Pr-based polycrystal with $x=0.35$ a weak LTO/LTLO transition is still visible, it is not observed in the Pr-based single crystal with $x=0.33$ (see next section). In Figs. \[fig1\](b) and (d) we show the Sr-doping dependence of the lattice parameters, and in Fig. \[fig1\](c) the orthorhombic strain at room temperature.
Corresponding data for the single crystals (triangular symbols) are in fair agreement with the polycrystal data (cf. Fig. \[fig1\]). The largest deviations concern the absolute values of the lattice parameters $a$ and $b$. It seems that the single crystal data are systematically too low by about 0.5%. Differences of this magnitude are within the error of our single crystal diffraction experiment, where $a$, $b$, and $c$ were determined from essentially one reflection each. The temperatures of the HTT/LTO transition are 370 K (315 K) for the surface (bulk) in [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} (details in Sec. \[unidirect\]), and an estimated 1100 K in [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}. In addition, the Nd-based crystal undergoes the LTO/LTLO transition at $\sim$125 K.
The approximate melting temperatures in Fig. \[fig1\](a) are based on measurements of the surface temperature of the melt using an optical pyrometer. These results were obtained in earlier crystal growth experiments on [$\rm Ln_{2- \it x}Sr_{\it x}NiO_{4+\delta}$]{} with $x=0$, 0.25, and 1.5 in which the skull melting method was applied. We mention that our results in Fig. \[fig1\] are consistent with the available data from other groups, in particular when taking the effect of oxygen doping into account. [@Martinez91aN; @Medarde94aN; @Sullivan91aN; @Ishizaka03aN; @Huecker04a] However, Fig. \[fig1\] may be the most coherent account of [$\rm Ln_{2- \it x}Sr_{\it x}NiO_{4+\delta}$]{} with stoichiometric oxygen content for $x \leq 0.35$ and Ln=La, Pr, and Nd.
Structural transitions in single crystals
-----------------------------------------
In this and the next section, we focus on the temperature dependence of the crystal symmetry and its influence on charge stripe order. As mentioned earlier, the Pr and Nd-based compounds undergo the HTT/LTO transition above room temperature. It is well known that below this transition, crystals tend to form twin boundaries in the $ab$-plane. In particular, in space group $Bmab$, up to four twin domains can form, resulting in a corresponding manifold of fundamental reflections. [@Wakimoto00a] In both crystals studied, only two out of these four domains are present, which we call domains $A$ and $B$.
Figure \[fig2\] shows a projection of the $(hk)$-zone along the $l$-direction in the case of the LTLO phase with charge stripe order, as we find it in the [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal at low temperatures. The legend shows the $l$-conditions for the different types of reflections. The fundamental reflections of the domains $A$ and $B$ are indicated by black symbols. The angle between the two domains amounts to $\sim$0.7$^\circ$ at 200 K (exaggerated in the figure). Superstructure reflections in the LTO phase, such as (032), are indicated by grey symbols. In the LTLO phase, additional reflections appear, such as (110) and (302), which are indicated by dotted (green) and open symbols, respectively. Furthermore, the angle between the domains is smaller than in the LTO phase. In the LTT phase, the reflections of the two domains are merged into single peaks. In our [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal, the LTT phase is not observed down to $T=7$ K.
Unidirectional Stripes in the LTO phase {#unidirect}
---------------------------------------
Now let us turn to the weak reflections which appear below $T_{\rm CO}\sim
240$ K due to the charge stripe order, indicated in Fig. \[fig2\] by symbols with a cross for domain $A$ (red) and domain $B$ (blue). In both crystals, as well as in each of their two domains, these peaks were always found along $k$. This conclusion is based on scans through possible charge-peak positions in both domains, similar to those presented in Fig. \[fig3\]. [@NSNOpeaks] Note that all scans were performed in the orientation matrix of domain $A$, so that the positions of the peaks of domain $B$ appear slightly shifted. The results indicate a lattice modulation along the $b$-axis, which means that in the LTO phase stripes are parallel to the short $a$-axis. Obviously, the orthorhombic strain in the LTO phase dictates the direction of the stripes. No change of the stripe pattern was observed in the LTLO phase of [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}, which is not surprising because in the LTLO phase one still has $a<b$.
For the [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal, the temperature dependence of various superstructure peaks is shown in Fig. \[fig4\]. Below room temperature, the (032) peak intensity first increases slightly, but starts to decrease below $\sim$125 K. The decrease of the (032) peak coincides with a strong increase of the (330) peak, indicating the LTO/LTLO transition. [@LTLOpeak] Above $\sim$125 K, the (330) peak is very weak and broad but remains visible up to room temperature. [@newXTAL] The decrease of the (032) peak in the LTLO phase is due to a shift of intensity to the (302) peak (white symbols in Fig. \[fig2\]). Below the charge stripe transition at $T_{\rm CO} \simeq 240$ K, the intensity of the $(0,4-2\epsilon,3)$ peak first increases steeply, but tends to saturate below 200 K. There is no obvious change of the intensity of this peak due to the LTO/LTLO transition.
In Fig. \[fig5\] we show corresponding data for [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{}. As mentioned before, no LTO/LTLO transition was observed in this crystal. The figures on the left side show 100 keV x-ray data collected in transmission geometry on the crystal in the as-prepared-in-air state ($\delta \gtrsim
0$). The (032) reflection strongly decreases with increasing temperature, indicating an LTO/HTT transition temperature of $\sim$315 K. Charge stripe order sets in at the same temperature as in [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}, although the transition is somewhat broader \[Fig. \[fig5\](b)\]. The latter effect is due to the non-stoichiometric oxygen and its inhomogeneous distribution in the crystal. The influence of excess oxygen will be discussed in more detail in Secs. \[stripedistance\] and \[stripestacking\].
The figures on the right side of Fig. \[fig5\] show data collected at 8.1 keV in reflection geometry after Ar-annealing ($\delta \simeq 0$). The charge stripe transition is now much sharper, though the transition temperature remains the same. On the other hand, the LTO/HTT transition is observed at a significantly higher temperature. \[Note that in this experiment we have studied the equivalent (452) reflection.\] In addition to the data in Fig. \[fig5\], we have collected another set of data (not shown) in reflection geometry on the as-prepared crystal. Also in this case the LTO/HTT transition occurs at a much higher temperature (370 K) than in transmission geometry. Therefore, we think that [$T_{\rm HT}$]{} not only depends on the hole concentration \[cf. Fig. \[fig1\](a)\], but generally seems to be somewhat higher in the surface layer than in the bulk.
In-plane Charge Stripe Distance {#stripedistance}
-------------------------------
Excess oxygen $\delta$ increases the total hole concentration $p=x+2\delta$ in the $\rm NiO_2$ planes. One way to determine $p$ is to measure the in-plane distance between the charge stripes via the incommensurability $\epsilon$ (see Fig. \[fig2\]). The property $\epsilon(p)$ was studied intensively in recent years, and it is well known that it is not precisely linear. [@Yoshizawa00aN] However, around $p=1/3$, one finds that $\epsilon=p$ is approximately satisfied. Hence, deviations from $p=1/3$ can be probed quite accurately. A precise way to determine $\epsilon$ is to take one fourth of the distance between the simultaneously measured reflections at $k=2n-2\epsilon$ and $k=2n+2\epsilon$.
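As a minimal numerical illustration (the helper function and the example numbers below are ours, not from the measurement), the extraction of $\epsilon$ from such a satellite pair can be sketched as:

```python
def incommensurability(k_minus, k_plus):
    """Incommensurability epsilon from a pair of charge-order satellites
    measured at k = 2n - 2*eps and k = 2n + 2*eps (n integer):
    one fourth of their separation along k."""
    return (k_plus - k_minus) / 4.0

# commensurate stripe order at p = 1/3: the satellites around k = 4
# sit at 4 - 2/3 and 4 + 2/3
eps = incommensurability(4 - 2 / 3, 4 + 2 / 3)  # -> 1/3
```

Using the separation of the pair, rather than a single peak position, has the practical advantage that any common zero-point offset of the $k$-scale cancels.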
In Fig. \[fig6\] we show representative scans for as-prepared-in-air and Ar-annealed crystals taken at different photon energies, as indicated in the figure. The larger the distance between the peaks, the larger $p$. The dashed lines are for $p=1/3$. By comparing the 8.1 keV and 100 keV data in Figs. \[fig6\](a) and (b), we find that in the as-prepared [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} crystal the oxygen content in the surface is larger than in the bulk. Only after the Ar-annealing is the hole concentration in this crystal very close to the nominal value $p=x=1/3$, suggesting that $\delta\simeq 0$ \[Fig. \[fig6\](c)\]. In the Ar-annealed [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal we observe $p<1/3$, which we attribute to a Sr content slightly below $x=1/3$, assuming that $\delta\simeq 0$ \[Fig. \[fig6\](d)\]. Note that for [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} we have measured only the peak at $k=4-2\epsilon$. Another feature is that, in the as-prepared [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} crystal, the peaks are asymmetric and obviously broader in $k$ than in both Ar-annealed crystals, indicating that excess oxygen leads to a broadened distribution of $\epsilon$.
In Fig. \[fig7\](b) we show the corresponding temperature dependencies of $\epsilon$. The symbols are the same as in Fig. \[fig6\]. At low temperatures one can clearly see the deviations from $\epsilon=1/3$ for as-prepared [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and Ar-annealed [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}. It is believed that the low temperature values of $\epsilon$ represent the true hole concentration $p$. However, close to [$T_{\rm CO}$]{}, in all measurements $\epsilon$ gravitates towards a value of $1/3$. This lock-in effect is well known and indicates that the in-plane stripe distance prefers to be three Ni sites (1.5$\times
b$), i.e., commensurate with the lattice. [@Tranquada96aN; @Vigliante97a; @Wochner98aN; @Kajimoto01aN; @Ishizaka04aN] Note that in the case of the Ar-annealed [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} crystal, $\epsilon\simeq
1/3$ for all temperatures, which indicates that this is in fact the only experiment where we were truly looking at a sample with $p = 1/3$.
Three Dimensional Charge Stripe Order {#stripestacking}
-------------------------------------
The most dramatic effect the Ar-annealing had on the [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} crystal was a drastic change of the correlations between charge stripes along the $c$-axis. Corresponding $l$-scans through the charge stripe peak at $\sim
10 $ K are shown in Fig. \[fig7\](a). In the as-prepared crystal, a single narrow peak appears at $l=3$, which is in accordance with results by other groups on [$\rm La_{2- \it x}Sr_{\it x}NiO_{4+\delta}$]{}. [@Vigliante97a; @Lee01aN; @Du00aN] After the Ar-annealing, however, the peak is split with two pronounced maxima appearing at $l\simeq 2.7$ and $l\simeq 3.3$. Since these values are close to $l=2+ 2/3$ and $l=4-2/3$, the split most likely indicates that a long range stacking order of stripes has developed along the $c$-axis with a period of $3/2\times c$, corresponding to a period of three $\rm NiO_2$ layers. Figure \[fig8\] shows the locations of the charge stripe peaks in the $(0kl)$-zone as observed for the Ar-annealed (left) and the as-prepared [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} crystal (right). The corresponding ordering wave vectors are ${\bf g}_{\rm CO}=(0,2\epsilon,2/3)$ and ${\bf g^{\bf '}}_{\rm
CO}=(0,2\epsilon,-2/3)$ as well as ${\bf g}_{\rm CO}=(0,2\epsilon,1)$, respectively. Details will be discussed in Sec. \[discussion\]. In the [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal, a similarly clear splitting in $l$ was not observed, but, as one can see in Fig. \[fig7\](a), the scan is extremely broad and shows two shoulders at comparable $l$-positions, plus a central maximum, indicating the presence of incommensurate peaks obscured by disorder.
In Fig. \[fig9\](a) we show the temperature dependence of $l$-scans through $\rm (4,4-2\epsilon,3)$ for [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{}. To extract the peak positions we have fit the data with three lines, keeping the position of the central line fixed at $l=3$. As one can see in Fig. \[fig9\](b), the splitting in $l$ is about constant up to 200 K and then starts to decrease. Eventually, for $T\gtrsim 228$ K the three peaks have merged into a single reflection at $l=3$, indicating the loss, or a qualitative change, of the long-range correlations along the $c$-axis.
In Fig. \[fig9\], the peaks appear at slightly asymmetric positions with respect to $l=3$. We have performed additional scans through charge and nearby Bragg peaks, which confirm that the peak positions are symmetric. The deviations in Fig. \[fig9\] are due to a small misalignment of the crystal, as well as to the temperature dependence of the lattice parameter $c$. In our data analysis in Sec. \[discussion\], this deviation is compensated by a small offset in $l$.
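The three-line fit described above can be sketched as follows; the profile shape (Lorentzian), the shared width, and all numerical values are illustrative assumptions of this sketch, not the actual analysis of Fig. \[fig9\]:

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_peak(l, a_lo, a_mid, a_hi, split, w):
    """Central Lorentzian fixed at l = 3 plus side peaks at l = 3 -/+ split,
    all sharing one width w (a simplifying assumption of this sketch)."""
    def lor(x0, a):
        return a * w ** 2 / ((l - x0) ** 2 + w ** 2)
    return lor(3.0 - split, a_lo) + lor(3.0, a_mid) + lor(3.0 + split, a_hi)

# synthetic l-scan standing in for the measured data (all numbers invented)
rng = np.random.default_rng(1)
l = np.linspace(2.0, 4.0, 201)
truth = (1.0, 0.3, 2.0, 0.60, 0.08)
data = triple_peak(l, *truth) + 0.01 * rng.standard_normal(l.size)

# fit with the central position hard-coded at l = 3, as in the text
popt, _ = curve_fit(triple_peak, l, data, p0=(1.0, 1.0, 1.0, 0.5, 0.1))
split_fit = popt[3]
```

Fixing the central line at $l=3$ removes the degeneracy between the overall $l$-offset and the splitting, which is why the fitted `split` is robust even when the side peaks have unequal amplitudes.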
Discussion
==========
Modelling the stacking of stripes
---------------------------------
Previous studies of [$\rm La_{2- \it x}Sr_{\it x}NiO_{4+\delta}$]{} have shown that the stacking of stripes along the $c$-axis is mainly controlled by two mechanisms [@Tranquada96aN; @Wochner98aN]: the minimization of the Coulomb repulsion between the charge stripes, and the interaction of the charge stripes with the underlying lattice. The Coulomb interaction favors a body-centered-type stacking to maximize the distance between nearest-neighbor stripes in adjacent planes. On the other hand, the interaction with the lattice favors shifts by increments of half of the in-plane lattice parameter. This means that nearest-neighbor stripes do not stack on top of each other. For a hole content of $p=0.25$ and 0.5, a body-centered stacking also satisfies commensurability with the lattice. This is not the case for $p=1/3$. Here, a body-centered stacking is achieved only in the case of an alternation of layers with site-centered and bond-centered stripes, similar to the stacking shown in Fig. \[fig10\](c). However, our results for Ar-annealed [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} strongly suggest a stripe ground state with a 3-layer period, consistent with a stacking similar to Fig. \[fig10\](a). Stripes successively shift by increments of $b/2$, i.e., there are three different types of planes with stripes at in-plane positions 0, $b/2$, and $b$. A stacking pattern such as that in Fig. \[fig10\](a) but with all stripes centered on Ni-sites is possible, too.
This problem can be mapped onto that of the stacking of close-packed layers of atoms, where layers can be shifted by 1/3 of the primitive translation, again resulting in three types of planes: A, B, and C. [@Hendricks42a] The two simplest arrangements of these planes are the hexagonal packing ...ABAB..., [@BCBC] and the cubic packing ...ABCABC..., corresponding to stripe stacking patterns with 2-layer and 3-layer periods, respectively \[cf. Fig. \[fig10\](a,b)\]. Energetic differences between these two stacking types appear only if second-nearest-neighbor interactions are included.
In Fig. \[fig11\](a) we show the $l$-dependence of the scattered x-ray intensity for close-packed structures with second-neighbor interaction, following the calculations by Hendricks and Teller. [@Hendricks42a; @Kakinoki54a] Stacking faults can be introduced by tuning the probability for second-neighbor layers to be alike or unlike. For details, see section 6 of Ref. . The curves in Fig. \[fig11\](a) were calculated for different degrees of stacking disorder $z$, ranging from a predominant 2-layer period for $z\ll 1$ (central peaks at integer $l$) to a predominant 3-layer period for $z\gg 1$ (split peaks at $l \pm 2/3$ with $l$ even). The dashed (red) line is for complete disorder ($z=1$), which is characterized by a central peak for odd $l$ and no obvious remains of the peaks at even $l$. [@evenpeaks] In Fig. \[fig11\](b) we show that, as the volume fraction with a 3-layer period decreases, the splitting in $l$ decreases and eventually disappears close to the point where complete disorder is reached.
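The essence of this stacking statistics can also be illustrated with a small Monte-Carlo sketch (ours, not the Hendricks–Teller matrix method; the layer count, seed, and the mixing parameter `p_cubic` are illustrative choices): each new close-packed layer either repeats the previous in-plane shift of 1/3 (cubic continuation, 3-layer period) or reverses it (hexagonal continuation, 2-layer period), and the diffracted intensity along $l$ follows from the kinematic phase sum over layers.

```python
import numpy as np

def stack_layers(n_layers, p_cubic, rng):
    """Close-packed stacking sequence of in-plane layer phases 0, 1/3, 2/3
    (layers A, B, C).  With probability p_cubic a layer repeats the previous
    shift (cubic, ...ABC...), otherwise it reverses it (hexagonal, ...ABA...).
    p_cubic = 1: pure 3-layer period; p_cubic = 0: pure 2-layer period;
    intermediate values: stacking disorder."""
    phi = np.empty(n_layers)
    phi[0], phi[1] = 0.0, 1.0 / 3.0
    step = 1.0 / 3.0
    for n in range(2, n_layers):
        if rng.random() >= p_cubic:   # hexagonal fault: undo the last shift
            step = -step
        phi[n] = (phi[n - 1] + step) % 1.0
    return phi

def intensity(phi, l_values):
    """Kinematic intensity |sum_n exp(2 pi i (n*l + phi_n))|^2 / N along l,
    with l in units of the inverse layer spacing."""
    n = np.arange(len(phi))
    amp = np.exp(2j * np.pi * (np.outer(l_values, n) + phi)).sum(axis=1)
    return np.abs(amp) ** 2 / len(phi)

rng = np.random.default_rng(0)
l = np.linspace(0.0, 1.0, 301)
i_3layer = intensity(stack_layers(3000, 1.0, rng), l)  # sharp peak at l = 2/3
i_2layer = intensity(stack_layers(3000, 0.0, rng), l)  # peak at half-integer l
i_mixed = intensity(stack_layers(3000, 0.8, rng), l)   # broadened peak
```

In this toy model the split-peak versus central-peak dichotomy of Fig. \[fig11\] appears directly: pure cubic stacking concentrates intensity at third-integer positions, pure hexagonal stacking at half-integer positions, and partial disorder broadens and shifts the maxima.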
The experimental results for Ar-annealed [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{}, in Fig. \[fig9\], clearly show that for $p=1/3$ a ground state with a 3-layer stacking period is favored over a 2-layer period. First, this indicates that stripes prefer to be all of the same type, i.e., either bond- or site-centered, but not mixed as in Fig. \[fig10\](c). Second, it shows that the Coulomb repulsion between stripes in second-neighbor planes is significant. The fact that in Fig. \[fig9\](a) at 10 K the splitting in $l$ is somewhat smaller than $2/3$ indicates a certain degree of disorder \[cf. Fig. \[fig11\](b)\]. Furthermore, it is obvious that the peak intensity at approximately $(4,4-2\epsilon,2+2/3)$ is significantly lower than at $(4,4-2\epsilon,4-2/3)$. We assume that this difference is largely due to the structure factor, since the Bragg intensity at (442) is about a factor of forty smaller than at (444). [@ABCimbalance] From the width of the larger of the two peaks we have estimated a correlation length of $\xi_c=17.5$ Å, which corresponds to approximately three $\rm NiO_2$ layers. [@correlationlength]
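For reference, the conversion from peak width to correlation length can be sketched as follows; we assume here one common convention, $\xi = 1/\mathrm{HWHM}$ with the width converted from reciprocal-lattice units to inverse Ångström via $2\pi/c$, and the input numbers are illustrative, not the measured ones:

```python
import math

def xi_from_width(fwhm_rlu, lattice_const):
    """Correlation length (Angstrom) from a Lorentzian peak width.

    fwhm_rlu      : full width at half maximum in reciprocal-lattice units
    lattice_const : lattice parameter (Angstrom) along the scan direction
    Convention assumed here: xi = 1 / HWHM, with
    HWHM [1/Angstrom] = (fwhm_rlu / 2) * (2 * pi / lattice_const).
    """
    hwhm_inv_angstrom = math.pi * fwhm_rlu / lattice_const
    return 1.0 / hwhm_inv_angstrom

# illustrative: c ~ 12.7 Angstrom and a FWHM of ~0.23 r.l.u. along l
# give xi_c of order 17-18 Angstrom, i.e. about three NiO2 layers
xi_c = xi_from_width(0.23, 12.7)
```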
In Fig. \[fig12\] we present fits to the data obtained by the matrix method used by Hendricks and Teller, [@Hendricks42a] multiplied by a slowly varying structure factor $F^2$ for which we assume the phenomenological, Gaussian $l$-dependence indicated in Fig. \[fig12\](a). The only variables are the stacking parameter $z$ and the amplitude. A finite background estimated from $k$-scans, a small offset in $l$ of -0.04 (explanation in Sec. \[stripestacking\]), and the structure factor were kept constant for all temperatures. Obviously, the model provides a good description of the data. Moreover, the parameter $z$ gives us access to the temperature dependence of the stacking disorder. Note that fits with a different $F^2$ than the one shown in Fig. \[fig12\](a), e.g., with the maximum centered at $l=4$, had virtually no effect on the $T$-dependence of $z$.
The charge stripe melting process
---------------------------------
In Fig. \[fig13\] we compare the fitted $z(T)$ with various properties of the Ar-annealed [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystals, such as the $l$-integrated charge-peak intensity, the inverse in-plane correlation length $\xi_b^{-1}$ of the charge stripe order, the incommensurability $\epsilon$, the specific heat $c_p$, and the in-plane resistivity $\rho_{ab}$. A pronounced maximum is observed in $c_p$ at 238 K, which we identify with $T_{\rm CO}$, the onset of static charge stripe order. Above this temperature, charge peaks become very small, and $\xi_b^{-1}$ starts to diverge, i.e., the in-plane stripe order melts. [@Du00aN] In contrast, the parameter $z$ already starts to decrease at $\sim$200 K and reaches complete disorder ($z=1$) at about 228 K. From this we conclude that the melting of the 3D charge stripe lattice sets in with a melting of the interlayer correlations, leaving the 2D correlations intact, until these eventually melt at $T_{\rm CO}$ as well.
In Fig. \[fig13\](b) one can see that $z$ goes through a minimum and then increases for $T\gtrsim 235$ K. This renewed increase accompanies the increase of $\xi_b^{-1}$ and reflects the drastic decrease of the stripe correlations above [$T_{\rm CO}$]{}. Note that, in [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{}, a clear minimum in $\xi_c^{-1}$ was observed at about the same temperature. [@Du00aN] We believe that there the increase of $\xi_c^{-1}$ at temperatures below the minimum is a precursor of the formation of the 3-layer-period stacking order. According to Fig. \[fig13\](b), $z<1$ at the temperature of the minimum, which, if taken seriously, implies a slight tendency towards a 2-layer stacking period. It is certainly possible that, once the stacking order starts to melt, stripes can arrange more freely, and may assume a body-centered stacking order, for which the ordering wave vector ${\bf g}_{\rm CO}$ is the same as in the case of disorder \[cf. Figs. \[fig8\](b) and \[fig10\](c)\]. However, further work is necessary to verify whether our analysis, which is based on the model by Hendricks and Teller, correctly describes even such small details.
The spin stripe ordering temperature $T_{\rm SO}$ of our crystals has not yet been determined. While in [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{} both [$T_{\rm CO}$]{} and [$T_{\rm SO}$]{} can be identified by weak anomalies in the static magnetic susceptibility of the $\rm NiO_2$ planes, this is impossible in the case of our crystals because of the large paramagnetic contribution of the $\rm Pr^{3+}$ and $\rm Nd^{3+}$ ions. [@Klingeler05aN] However, results from neutron diffraction in Ref. show that, in [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}, spin stripes order at essentially the same temperature as in [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{}, with $T_{\rm SO}\simeq
190$ K. This means that the spin stripe order disappears at just about the temperature where the $c$-axis stacking correlations of the charge stripes start to melt. In this context it is worth noting that the low-temperature side of the anomaly in $c_p$ has a shoulder which extends down to $T_{\rm
SO}$. Measurements of $c_p$ in Ref. for a [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{} single crystal show this even more clearly. We believe that the shoulder indicates that the melting of the stripe stacking order is associated with a significant entropy change. [@entropy]
Fingerprints of the melting of the interlayer stripe order well below $T_{\rm CO}$ are also observed in the electronic transport. In [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}, in Fig. \[fig1\](f), the onset of the charge stripe order at $T_{\rm CO}$ is announced by a kink in log($\rho_{ab}$), as is also evident from a peak in $d^2{\rm log}(\rho_{ab})/dT^2$. However, the entire transition in log($\rho_{ab}$) is not sharp, but extends down to about 200 K, similar to the behavior of $z$, $\Delta c_p$, and the charge-peak intensity. One might think of this as a manifestation of the intimate connection between charge and spin stripe order, since it implies that the charge order has to be fairly well developed before spin stripe order can occur.
Another interesting feature of the anomaly in $c_p$ is its extension towards high temperatures. In this region charge peaks become extremely weak and broad. It is reasonable to assume that both effects are related to slowly fluctuating charge stripe correlations. [@Du00aN] Intensity from these correlations is picked up because of the poor energy resolution of the x-ray diffraction experiment. An inelastic neutron scattering study of the incommensurate magnetic fluctuations in [$\rm La_{1.725}Sr_{0.275}NiO_{4}$]{} gives indirect evidence for a liquid phase of charge stripes above $T_{\rm CO}$. [@Lee02aN]
The present observations are consistent with a stripe smectic or nematic phase at $T>T_{\rm CO}$. [@Kivelson98] (Note that this assignment is different from that originally made by Lee and Cheong. [@Lee97aN])
Comparison with TEM data
------------------------
Our results concerning the stacking of stripes are in qualitative agreement with a recent transmission-electron-microscopy (TEM) study on tetragonal [$\rm La_{1.725}Sr_{0.275}NiO_{4}$]{} ($x=0.275$). [@Li03aN] In this study, it was shown that, in domains of mesoscopic dimension, charge stripes are indeed one dimensional. However, the Sr content of $x=0.275$ gave rise to a mixing of features expected either for $x=0.25$ or $x=0.33$, as well as features unique to this intermediate doping. In particular, the obtained average in-plane stripe distance ($1.75\times b$) results in a mixture of site- and bond-centered stripes within the same $\rm NiO_2$ plane. As a consequence, the stacking of stripes along $c$ is strongly disordered, with both simple body-centered arrangements and staggered shifts by $0.5 \times b$ prevalent. [@Li03aN]
Here, we have shown that the orthorhombic symmetry of [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{}can be used as a tool to obtain large single crystals with unidirectional stripe order, suited to the study of macroscopic properties. For $x=0.33$ and $\delta = 0$, stripes in a single $\rm NiO_2$ plane are likely to be either all bond-centered or all site-centered, since the stripe distance of $1.5\times b$ is commensurate with the lattice. In the case of long range in-plane order, the Coulomb interaction between stripes in first and second neighbor planes becomes crucial and stabilizes a stacking such as depicted in Fig. \[fig10\](a). Thus, for the first time, a stripe order with a 3-layer period has been observed in the x-ray diffraction pattern of a macroscopic crystal.
It is worthwhile mentioning that there are several studies of [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{} with $\epsilon$ matching the critical value of 1/3 quite well. [@Lee97aN; @Kajimoto01aN; @Du00aN; @Ishizaka04aN] However, a stripe stacking order of the kind identified here in [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} has not been reported. These observations suggest that the stabilization of the 3-layer stacking period may benefit from the fact that, in orthorhombic crystals, the in-plane stripe order is unidirectional.
Comparison with [$\bf La_{2- \it x} Sr_{\it x} Cu O_{4}$]{}
------------------------------------------------------------------------------------------------------
Besides the obvious differences, the electronic phase diagrams of nickelates and cuprates show some interesting similarities. In the present context it is particularly remarkable that, in [$\rm La_{2- \it x} Sr_{\it x} Cu O_{4}$]{}, as long as $x$ is below the MI transition, a static diagonal spin stripe order forms. [@Wakimoto99a; @Wakimoto00a; @Fujita02a] It remains an open experimental question, whether corresponding samples also exhibit diagonal charge stripes. In this respect, it is an interesting finding that, in nickelates with the LTO structure, $charge$ stripes prefer to be parallel to the octahedral tilt axis, similar to the $spin$ stripes in the cuprates. These findings thus provide motivation for further experimental searches for charge stripes in the insulating cuprates.
There are certainly alternative models for the lightly-doped cuprates that do not involve stripes [@Berciu04a; @Sushkov04a; @Luescher05a; @Lindgard05a; @Gooding97]; however, there is also additional evidence for the cuprates suggesting a common coupling of charge and spin stripes to an orthorhombic lattice distortion. In [$\rm La_{2} Cu O_{4+\delta}$]{} and [$\rm La_{1.88} Sr_{0.12} Cu O_{4}$]{}, both systems with parallel stripes and LTO structure, it was observed that the direction of the spin stripes is not perfectly parallel to the Cu-O bonds, but slightly inclined towards the diagonally running octahedral tilt axis ($\parallel a$), by an angle too large to be explained with the orthorhombic lattice distortion. [@Lee99; @Kimura00a] Unfortunately, in these compounds, no charge peaks were detected. However, in Ref. and Ref. it was shown that in orthorhombic [$\rm La_{1.875} Ba_{0.125- \it x}Sr_{\it x} Cu O_{4}$]{} with $x=0.075$, both spin stripes and charge stripes show the same inclination towards the octahedral tilt axis. This strongly suggests that, in both classes of materials, nickelates and cuprates, the response of charge stripes to orthorhombic lattice distortions is similar.
Conclusion
==========
We have presented a detailed x-ray diffraction study which sheds light on the coupling between the charge stripes and structural distortions in [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} single crystals. In contrast to the sister compound [$\rm La_{1.67}Sr_{0.33}NiO_{4}$]{}, both crystals undergo a HTT$\rightarrow$LTO transition well above room temperature, so that stripe order at [$T_{\rm CO}$]{} takes place in an anisotropic environment. We find that the orthorhombic strain causes the stripes to align parallel to the short $a$-axis, which is also the direction of the $\rm NiO_6$ octahedral tilt axis. In addition to these in-plane correlations, we have observed correlations between the planes, which are consistent with a stacking period of three $\rm NiO_2$ layers. This stacking order is extremely vulnerable to interstitial oxygen impurities and deviations of the total hole concentration from $p=1/3$; as a result, it was observed only in Ar-annealed samples. Further, we find that the melting of the static charge stripe order is a two-step process, which starts at 200 K with the melting of the interlayer correlations, and is completed at [$T_{\rm CO}$]{} with the melting of the in-plane correlations. Implications for the stripe order in insulating [$\rm La_{2- \it x} Sr_{\it x} Cu O_{4}$]{} have been discussed. The observation of unidirectional stripes in [$\rm Pr_{1.67}Sr_{0.33}NiO_{4}$]{} and [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} opens up the unique possibility of characterizing their anisotropic properties.\
The work at Brookhaven was supported by the Office of Science, U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
[10]{}
J.M. Tranquada. In A. Furrer, editor, [*Neutron Scattering in Layered Copper-Oxide Superconductors*]{}, page 225. Kluwer, Dordrecht, 1998.
R. Kajimoto, K. Ishizaka, H. Yoshizawa and Y. Tokura, , 14511 (2003).
C. H. Chen, S-W. Cheong and A. S. Cooper, , 2461 (1993).
A. T. Boothroyd, P. G. Freeman, D. Prabhakaran, A. Hiess, M. Enderle, J. Kulda and F. Altorfer, , 257201 (2003).
K. Ishizaka, Y. Taguchi, R. Kajimoto, H. Yoshizawa and Y. Tokura, , 184418 (2003).
C. C. Homes, J. M. Tranquada, Q. Li, A. R. Moodenbaugh and D. J.Buttrey, , 184516 (2003).
J. M. Tranquada, B. J. Sternlieb, J. D. Axe, Y. Nakamura and S.Uchida, , 561 (1995).
V. J. Emery and S. A. Kivelson, , 597 (1993).
S-W. Cheong, H. Y. Hwang, C. H. Chen, B. Batlogg, L. W. [Rupp Jr.]{} and S. A. Carter, , 7088 (1994).
H. Yoshizawa, T. Kakeshita, R. Kajimoto, T. Tanabe, T. Katsufuji and Y. Tokura, , R854 (2000).
A. Vigliante, M. von Zimmermann, J. R. Schneider, T. Frello, N. H. Andersen, J. Madsen, D. J. Buttrey, D. Gibbs and J. M. Tranquada, , 8248 (1997).
Y. Yoshinari, P. C. Hammel and S-W. Cheong, , 3536 (1999).
S.-H. Lee, J. M. Tranquada, K. Yamada, D. J. Buttrey, Q. Li and S-W. Cheong, , 126401 (2002).
J. Li, Y. Zhu, J. M. Tranquada, K. Yamada and D. J. Buttrey, , 12404 (2003).
G. Blumberg, M. V. Klein and S-W. Cheong, , 564 (1998).
P. Abbamonte, A. Rusydi, S. Smadici, G. D. Gu, G. A. Sawatzky and D. L. Feng, , 155 (2005).
C. Schü[ß]{}ler-Langeheine, J. Schlappa, A. Tanaka, Z. Hu, C. F.Chang, E. Schierle, M. Benomar, H. Ott, E. Weschke, G. Kaindl, O. Friedt, G. A. Sawatzky, H.-J. Lin, C. T. Chen, M. Braden, and L. H. Tjeng, , 156402 (2005).
M. Fujita, H. Goka, K. Yamada, J. M. Tranquada and L. P. Regnault, , 104517 (2004).
T. Niemöller, N. Ichikawa, T. Frello, H. Hünnefeld, N. H.Andersen, S. Uchida, J. R. Schneider and J. M. Tranquada, , 509 (1999).
H.-H. Klauss, W. Wagener, M. Hillberg, W. Kopmann, H. Walf, F. J. Litterst, M. Hücker and B. Büchner, , 4590 (2000).
W. Wagener, H.-H. Klauss, M. Hillberg M. A. C. de Melo, M. Birke, F. J. Litterst, B. Büchner and H. Micklitz, , R14761 (1997).
J. M. Tranquada, J. D. Axe, N. Ichikawa, A. R. Moodenbaugh, Y.Nakamura and S. Uchida, , 338 (1997).
S. Wakimoto, G. Shirane Y. Endoh, K. Hirota, S. Ueki, K. Yamada, R. J. Birgeneau, M. A. Kastner, Y. S. Lee, P. M. Gehring and S. H. Lee, , R769 (1999).
M. Fujita, K. Yamada, H. Hiraka, P. M. Gehring, S. H. Lee, S.Wakimoto and G. Shirane, , 64505 (2002).
S. Wakimoto, R. J. Birgeneau, M. A. Kastner, Y. S. Lee R. Erwin, P. M. Gehring, S. H. Lee, M. Fujita, K. Yamada, Y. Endoh, K. Hirota and G. Shirane, , 3699 (2000).
B. I. Shraiman and E. D. Siggia. , 8305 (1992).
R. J. Gooding, N. M. Salem, R. J. Birgeneau and F. C. Chou, , 6360 (1997).
M. Berciu and S. John, , 224515 (2004).
O. P. Sushkov and V. N. Kotov, , 24503 (2004).
A. Lüscher, G. Misguich, A. I. Milstein and O. P. Sushkov, .
P.-A. Lindg[å]{}rd, , 217001 (2005).
J. Rodríguez-Carvajal, J. L. Martínez, J. Pannetier and R. Saez-Puche, , 7148 (1988).
M. Medarde and J. Rodríguez-Carvajal, , 307 (1997).
V. Sachan, D. J. Buttrey, J. M. Tranquada, J. E. Lorenzo and G.Shirane, , 12742 (1995).
O. Friedt, (1998).
M. Hücker, K. Chung, M. Chand, T. Vogt, J. M. Tranquada and D. J. Buttrey, , 64105 (2004).
H. Woo, A. T. Boothroyd, K. Nakajima, T. G. Perring, C. D. Frost, P. G. Freeman, D. Prabhakaran, K. Yamada and J. M. Tranquada, , 64437 (2005).
J. L. Martínez, M. T. Fernández-Díaz, J. Rodríguez-Carvajal and P. Odier, , 13766 (1991).
D. J. Buttrey, J. D. Sullivan, G. Shirane and K. Yamada, , 3944 (1990).
M. Medarde, J. Rodríguez-Carvajal, M. Vallet-Regí, J. M. González-Calbet and J. Alonso, , 8591 (1994).
S. Bakehe, (2002).
H. Tamura, A. Hayashi and Y. Ueda, , 61 (1996).
J. D. Sullivan, D. J. Buttrey, D. E. Cox and J. Hriljac, , 337 (1991).
J. D. Axe, A. H. Moudden, D. Hohlwein, D. E. Cox, K. M. Mohanty, A. R. Moodenbaugh and Y. Xu, , 2751 (1989).
M. K. Crawford, R. L. Harlow, E. M. McCarron, W. E. Farneth, J. D.Axe, H. Chou and Q. Huang, , 7749 (1991).
R. Bouchard, D. Hupfeld, T. Lippmann, J. Neuefeind, H.-B. Neumann, H.F. Poulsen, U. Rütt, T. Schmidt, J.R. Schneider, J. Süssenbach and M. von Zimmermann, , 90 (1998).
These reflections were measured after the [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal was exposed to air for about one year. Therefore, the crystal contains some excess oxygen, resulting in a larger and more inhomogeneous hole concentration than immediately after the Ar-annealing. This is apparent from the larger value of $\epsilon$, and an approximately twice as large peak width \[cf. Figs. \[fig6\](d) and \[fig7\](b)\].
Note that we have integrated the intensity of the (330) peaks in both domains, i.e., $A$ and $B$.
We mention that a newly grown [$\rm Nd_{1.67}Sr_{0.33}NiO_{4}$]{} crystal shows a sharp LTO/LTLO transition at $\sim$100 K and no intensity above that temperature. Results will be published elsewhere.
Note that the Hendricks-Teller model allows us to simulate the mixing of 3-layer and 2-layer period stacking segments at any ratio. In the case of a 2-layer stacking period like the one depicted in Fig. \[fig10\](b), peaks for $l$ even are allowed. However, it is very unlikely that in our samples the mixing of the 3-layer ground state with this 2-layer stacking type goes beyond the point of complete disorder. On the other hand, a ground state with a 2-layer period of the kind depicted in Fig. \[fig10\](c) is possible. For this body centered stacking type, charge peaks for $l$ even should be absent.
J. M. Tranquada, D. J. Buttrey and V. Sachan, , 12318 (1996).
P. Wochner, J. M. Tranquada, D. J. Buttrey and V. Sachan, , 1066 (1998).
R. Kajimoto, T. Kakeshita, H. Yoshizawa, T. Tanabe, T. Katsufuji and Y. Tokura, , 144432 (2001).
K. Ishizaka, T. Arima, Y. Murakami, R. Kajimoto, H. Yoshizawa, N.Nagaosa and Y. Tokura, , 196404 (2004).
S.-H. Lee, S-W. Cheong, K. Yamada and C. F. Majkrzak, , 60405 (2001).
C-H. Du, M. E. Ghazi, Y. Su, I. Pape, P. D. Hatton, S. D. Brown, W. G. Stirling, M. J. Cooper and S-W. Cheong, , 3911 (2000).
S. Hendricks and E. Teller, , 147 (1942).
Equivalent to this are the sequences ...BCBC... and ...CACA... .
J. Kakinoki and Y. Komura, , 169 (1954).
In Ref. it is mentioned that an imbalance in the population of domains with stacking sequence ...ABCABC... and ...CBACBA... can cause a similar effect. However, this effect should vary between different crystals. In contrast, in about half a dozen samples with a pronounced 3-layer stacking period, we have always observed a similar intensity ratio.
The correlation lengths were calculated from the peak width using $\xi_b=({\rm
HWHM} \times b^*)^{-1}$ and $\xi_c=({\rm HWHM} \times c^*)^{-1}$ where HWHM is the half width at half maximum in reciprocal lattice units of $b^*=1.162~{\rm \AA} ^{-1}$ and $c^*=0.504~{\rm \AA} ^{-1}$, respectively.
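For concreteness, the conversion from a measured peak width to a correlation length can be sketched as follows (the HWHM value used here is hypothetical, purely for illustration; $b^*$ and $c^*$ are the values quoted above):

```python
B_STAR = 1.162  # b* in inverse Angstroms
C_STAR = 0.504  # c* in inverse Angstroms

def corr_length(hwhm_rlu, recip_unit):
    """Correlation length xi = (HWHM * q*)^-1, with HWHM in reciprocal-lattice units."""
    return 1.0 / (hwhm_rlu * recip_unit)

# A hypothetical HWHM of 0.01 r.l.u. along b corresponds to roughly 86 Angstroms
xi_b = corr_length(0.01, B_STAR)
```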
R. Klingeler, B. Büchner, S-W. Cheong and M. Hücker, , 104424 (2005).
As was discussed in Ref. , it is likely that both spin as well as charge degrees of freedom contribute to the entropy.
S. A. Kivelson, E. Fradkin and V. J. Emery, , 550 (1998).
S.-H. Lee and S-W. Cheong, , 2514 (1997).
Y. S. Lee, R. J. Birgeneau, M. A. Kastner, Y. Endoh, S. Wakimoto, K. Yamada R. W. Erwin, S.-H. Lee and G. Shirane, , 3643 (1999).
H. Kimura, H. Matsushita, K. Hirota, Y. Endoh, K. Yamada, G.Shirane, Y. S. Lee, M. A. Kastner and R. J. Birgeneau, , 14366 (2000).
H. Kimura, Y. Noda, H. Goka, M. Fujita, K. Yamada, M. Mizumaki, N.Ikeda and H. Ohsumi, , 134512 (2004).
M. Fujita, H. Goka, K. Yamada and M. Matsuda, , 184503 (2002).
---
author:
- |
Oliver F. Piattella\
Dipartimento di Scienze Fisiche e Matematiche, Università dell’Insubria, Via Valleggio 11, 22100 Como, Italy\
and\
INFN, sezione di Milano, Via Celoria 16, 20133 Milano, Italy\
and\
Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth PO1 3FX, United Kingdom\
E-mail:
title: The extreme limit of the generalised Chaplygin gas
---
Introduction
============
The generalised Chaplygin gas [@Kamenshchik:2001cp] (hereafter gCg) is a cosmological model which has attracted interest thanks to its ability to describe, on its own, the dynamics of the entire dark sector of the Universe, i.e. Dark Matter and Dark Energy. The gCg is a perfect fluid characterised by the following equation of state: $$\label{gcgeos}
p = - \frac{A}{\rho^{\alpha}}\;,$$ where $p$ is the pressure, $\rho$ is the density and $A$ and $\alpha$ are positive parameters.
In the setting of the Friedmann-Lemaître-Robertson-Walker cosmological theory, one easily integrates the energy conservation equation and finds that the gCg density evolves as a function of the redshift $z$ as follows: $$\label{gcgrhoevo}
\rho = \left[A + B\left(1 + z\right)^{3(1 + \alpha)}\right]^{\frac{1}{1 + \alpha}}\;,$$ where $B$ is a positive integration constant. Equation (\[gcgrhoevo\]) interpolates between a past ($z \gg 1$) dust-like phase of evolution, i.e. $\rho \sim \left(1 + z\right)^{3}$, and a recent ($z \lesssim 1$) one in which $\rho$ tends asymptotically to a cosmological constant, i.e. $\rho \sim A^{\frac{1}{1 + \alpha}}$.
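The two limiting behaviours can be checked with a quick numerical sketch (the parameter values below are arbitrary, chosen only for illustration):

```python
def gcg_density(z, A=0.7, B=0.3, alpha=0.5):
    """gCg energy density as a function of redshift (arbitrary units)."""
    return (A + B * (1.0 + z) ** (3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))

# Dust-like past: rho scales as (1 + z)^3, so doubling (1 + z) multiplies rho by 8
dust_ratio = gcg_density(2000.0) / gcg_density(999.5)

# Far-future limit (z -> -1): rho tends to the constant A^(1/(1 + alpha))
late_density = gcg_density(-0.999)
```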
As a cosmological model, the gCg was first investigated in [@Kamenshchik:2001cp], since it had already attracted attention, e.g. in connection with string theory. In particular, the equation of state (\[gcgeos\]) can be extracted from the Nambu-Goto action for $d$-branes moving in a $(d + 2)$-dimensional spacetime in the lightcone parametrisation, see [@Bordemann:1993ep]. Also, the gCg is the only fluid which, up to now, admits a supersymmetric generalisation, see [@Hoppe:1993gz; @Jackiw:2000cc].
From the point of view of cosmology, a possible unification picture between Dark Matter and Dark Energy is particularly appealing, especially in connection with the so-called cosmic coincidence problem (see [@Zlatev:1998tr] for much detail about the latter). This motivation prompted an intensive study of the gCg and of those models that have the property of unifying (dynamically) Dark Matter and Dark Energy. Such models are called Unified Dark Matter (UDM).
It must be pointed out that in a conventional cosmological model (i.e. a model in which Dark Matter and Dark Energy are separate entities) Dark Matter has the primary role of driving structure formation, whereas Dark Energy has to account for the recently observed accelerated expansion of the Universe [@Riess:1998cb; @Perlmutter:1998np]. The fundamental task for the gCg is then to play both of these roles.
To understand if this is the case, the gCg parameters space $(A,\alpha)$ has been intensively analysed in relation with observation of Large Scale Structures (LSS), Cosmic Microwave Background (CMB), type Ia Supernovae (SNIa), X-ray cluster gas fraction and Baryon Acoustic Oscillations (BAO), see for example [@Wu:2006pe; @Davis:2007na; @Lu:2008hp; @Lu:2008zzb; @Barreiro:2008pn; @Sandvik:2002jz; @Carturan:2002si; @Bean:2003ae; @Amendola:2003bz; @Amendola:2005rk; @Giannantonio:2006ij].
The most instructive constraints concern the parameter $\alpha$ and come from analyses of the LSS matter power spectrum and the CMB angular power spectrum. From the former, the resulting constraint is $\alpha \lesssim 10^{-5}$, [@Sandvik:2002jz]. One might regard this narrow constraint as a flaw of the gCg model, because if we take the limit $\alpha \to 0$ in the equation of state [Eq. ]{}, then we reproduce exactly the $\Lambda$CDM model (if $A$ assumes the value of the cosmological constant energy density).
The authors of [@Beca:2003an] have found a loophole in this degeneracy problem. Indeed, they have shown that if we add a baryon component to the gCg and compute the matter power spectrum for the baryonic part alone, it turns out that the latter is only weakly affected (in the linear perturbative regime) by the presence of the gCg. The model constituted by the gCg plus baryons is therefore in good agreement with LSS observations for all $\alpha \in (0,1)$.
On the other hand, even if the “baryon loophole” allows one to circumvent the very narrow constraint on $\alpha$ coming from LSS analysis, it cannot evade the tight one coming from CMB analysis, due in particular to the Integrated Sachs-Wolfe (ISW) effect, [@Carturan:2002si; @Bean:2003ae; @Amendola:2003bz; @Amendola:2005rk; @Giannantonio:2006ij; @Bertacca:2007cv]. In more detail, if we take into account an ordinary Cold Dark Matter (CDM) component and a baryonic one together with the gCg, we find that $\alpha < 0.2$, [@Amendola:2003bz]. Removing the CDM component lowers the bound by an order of magnitude: $\alpha < 10^{-2}$, [@Amendola:2005rk]. Finally, for the pure gCg we find an even tighter constraint: $\alpha < 10^{-4}$, [@Bertacca:2007cv].
Taking into account all these results, we may conclude that the gCg model is viable only when it is very similar, if not degenerate, to the $\Lambda$CDM. However, it must be pointed out that in the major part of the literature on the subject, the preliminary constraint $\alpha < 1$ is assumed. The reason for this assumption resides in the form of the gCg adiabatic speed of sound, which is the following: $$\label{gCgsos}
c_{\rm s}^{2} \equiv \frac{{{\rm d}}p}{{{\rm d}}\rho} = \frac{A\alpha}{A + B\left(1 + z\right)^{3\left(\alpha + 1\right)}}\;.$$ If $z \to -1$ then $c_{\rm s}^{2} \to \alpha$. Therefore, causality would require $\alpha < 1$. However, the causality problem for UDM models should rather be addressed in terms of a microscopic theory, see [@Babichev:2007dw]. In the particular case of the gCg, for $\alpha > 1$ the authors of [@Gorini:2007ta] examine in some detail the causality issues and develop a suitable microscopic theory in which the signal velocity never exceeds the speed of light.
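The limiting values quoted above can be verified directly with a short numerical sketch (parameter values arbitrary, for illustration only): $c_{\rm s}^{2} \to \alpha$ as $z \to -1$, while it vanishes in the dust-like past.

```python
def gcg_cs2(z, A=0.7, B=0.3, alpha=0.5):
    """Adiabatic sound speed squared of the gCg (units with c = 1)."""
    return A * alpha / (A + B * (1.0 + z) ** (3.0 * (alpha + 1.0)))

cs2_future = gcg_cs2(-0.9999)  # z -> -1: tends to alpha
cs2_past = gcg_cs2(1000.0)     # dust-like past: tends to zero
```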
Given this, it is natural to wonder how the gCg model behaves in the “forbidden” range $\alpha > 1$. Few papers in the literature carry out such an analysis, for example [@Amendola:2005rk; @Gorini:2007ta; @Fabris:2008hy; @Fabris:2008mi; @Urakawa:2009jb]. An interesting result is, for instance, that in the framework of the “baryon loophole” mentioned above, i.e. considering the model constituted by the gCg plus baryons, the agreement of the baryon power spectrum with observations improves for $\alpha \gtrsim 3$, see [@Gorini:2007ta]. In [@Fabris:2008hy] and [@Fabris:2008mi] the authors have recently confirmed this behaviour. On the other hand, CMB constraints do not change significantly from the case $\alpha < 1$, in the sense that the parameter is again constrained to be very small, see [@Amendola:2005rk] and [@Urakawa:2009jb].
Even in those papers which analyse the gCg for $\alpha > 1$, the parameter space is probed up to a maximum finite value. For example, $\alpha < 6$ in [@Amendola:2005rk] or $\alpha < 10$ in [@Urakawa:2009jb]. Since the $\alpha > 1$ case has proved promising, at least in relation to LSS analysis, in this paper we investigate the extreme limit of the gCg, i.e. the behaviour of the model for very large values of $\alpha$. Our analysis is principally based on the ISW effect because, as discussed above and as the authors of [@Bertacca:2007cv] have shown, it provides the strongest constraints on a UDM model.
To this end, in Section II we briefly outline the basic equations describing the ISW effect and, inspired by [@Bertacca:2007cv], we present a simple method based on the Mészáros effect which we employ in Section III to find a qualitative constraint for large values of $\alpha$. Indeed, in Section III we analyse the Jeans wavenumber of gCg perturbations and find that if $\alpha \gtrsim 350$ the ISW effect can potentially be small. In Section IV we then confirm the results of Sec. III by directly calculating the ISW effect in the limit $\alpha \to \infty$ and showing that it is smaller than the one for $\alpha = 0$, i.e. for the $\Lambda$CDM (the calculation of the ISW effect for the $\Lambda$CDM was first performed by Kofman and Starobinsky in 1985, see [@Kofman:1985fp]). In Section V we address the behaviour of the background expansion for large values of $\alpha$ and find that the evolution is characterised by an early dust-like phase abruptly interrupted by a de Sitter (dS) one. We then analyse the 157 nearby SNIa of the Constitution set and find that the transition between the two phases takes place at a redshift of about $z_{\rm tr} = 0.22$, which is much more recent than the transition at $z_{\rm tr} = 0.79$ to the accelerated phase of expansion in the $\Lambda$CDM model. The last section is devoted to discussion and conclusions.
The ISW effect and the constraining method
==========================================
In this paper we discuss only adiabatic perturbations and we assume a flat spatial geometry. The ISW effect contribution to the CMB angular power spectrum is given by the following formula [@Sachs:1967er]: $$\label{ClISW}
\frac{2l + 1}{4\pi}C_{l}^{\rm ISW} = \frac{1}{2\pi^{2}}\int_{0}^{\infty}\frac{{{\rm d}}k}{k}k^{3}\frac{\left|\Theta_{l}\left(\eta_{0},k\right)\right|^{2}}{2l + 1}\;,$$ where $l$ is the multipole moment, $k$ is the wavenumber, $\eta_{0}$ is the present conformal time and $$\label{ThetalISW}
\frac{\Theta^{\rm ISW}_{l}\left(\eta_{0},k\right)}{2l + 1} = 2\int_{\eta_{*}}^{\eta_{0}}\Phi'\left(\eta,k\right){\rm j}_{l}\left[k\left(\eta_{0} - \eta\right)\right]{{\rm d}}\eta\;$$ is the fractional temperature perturbation generated by the ISW effect, where $\eta_{*}$ is the last scattering conformal time, $\Phi\left(\eta,k\right)$ is the Fourier transformed gravitational potential and ${\rm j}_{l}$ is the spherical Bessel function of order $l$. The prime denotes derivation with respect to the conformal time $\eta$. For a more detailed description of the perturbations equations and of the integrals (\[ClISW\]-\[ThetalISW\]) we refer the reader to [@Sachs:1967er; @Bardeen:1980kt; @Mukhanov:1990me; @Hu:1995em].
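As a toy illustration of the line-of-sight integral (\[ThetalISW\]) — a numerical sketch, not the paper's actual computation — the fractional temperature perturbation can be evaluated with a hand-rolled spherical Bessel function and the trapezoidal rule; the potential history $\Phi'(\eta)$ below is an arbitrary placeholder:

```python
import math

def sph_jl(l, x):
    """Spherical Bessel function j_l(x) for l = 0, 1, 2."""
    if x < 1e-8:
        return 1.0 if l == 0 else 0.0
    j0 = math.sin(x) / x
    if l == 0:
        return j0
    j1 = math.sin(x) / x ** 2 - math.cos(x) / x
    if l == 1:
        return j1
    return (3.0 / x) * j1 - j0  # upward recurrence, adequate for l = 2

def theta_isw(l, k, phi_prime, eta_star, eta0, n=2000):
    """2 * integral of Phi'(eta) j_l[k (eta0 - eta)] d eta, by the trapezoidal rule."""
    h = (eta0 - eta_star) / n
    total = 0.0
    for i in range(n + 1):
        eta = eta_star + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * phi_prime(eta) * sph_jl(l, k * (eta0 - eta))
    return 2.0 * h * total

# A constant potential (Phi' = 0), as in a CDM-dominated era, produces no ISW signal
no_isw = theta_isw(2, 0.1, lambda eta: 0.0, 0.0, 10.0)
```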
Following [@Mukhanov:1990me], consider the Fourier transformed evolution equation for the gravitational potential: $$\label{equ}
u'' + k^{2}c_{\rm s}^{2}u - \frac{\theta''}{\theta}u = 0\;,$$ where: $$u \equiv \frac{2\Phi}{\sqrt{\rho + p}}\;, \qquad\mbox{ and }\qquad \theta \equiv \sqrt{\frac{\rho}{3(\rho + p)}}(1 + z)\;,$$ and $c_{\rm s}$, $\rho$ and $p$ are, respectively, the adiabatic speed of sound, the energy density and the pressure defined for a generic cosmological model. We have chosen here units such that $8\pi G = c = 1$. For simplicity, here and in the following we do not write explicitly the $\left(\eta,k\right)$ dependence for $u$ and $\Phi$ and the $\eta$ dependence for $c_{\rm s}^{2}$, $\theta$, $\rho$, $p$ and $z$. They will therefore always be implied, unless otherwise stated.
In [Eq. ]{} define $$\label{kJ2def}
k^{2}_{\rm J} \equiv \frac{\theta''}{c_{\rm s}^{2}\theta}$$ as the square Jeans wavenumber. In general, when wavelengths smaller than the Jeans scale enter the Hubble horizon, they start to oscillate, affecting the CMB power spectrum and the matter one in ways not compatible with observation. For UDM models in particular, the authors of [@Bertacca:2007cv] have found a contribution to the ISW effect proportional to the fourth power of the speed of sound which generates in the CMB angular power spectrum a growth proportional to $l^{3}$ until $l \approx 25$ (the value $l \approx 25$ is related to the equivalence wave number $k_{\rm eq}$ which we discuss later). See [@Bertacca:2007cv] and also [@Hu:1995em] for more detail.
Consequently, if we take into account only those scales for which $$\label{cond}
k^{2} < k_{\rm J}^{2}\;,$$ then the gravitational potential does not oscillate.
Of course if $c_{\rm s}^{2} = 0$, as for the $\Lambda$CDM model, the Jeans scale vanishes and condition (\[cond\]) is satisfied for every $k$ at any time (remember that $k_{\rm J}^{2}$ is time dependent). On the other hand, this is not the only possible scenario. In fact, the cosmological scales which are important for the CMB and structure formation are those which entered the Hubble horizon after the matter-radiation equivalence epoch. Those which entered the horizon earlier had been damped by the dominating presence of radiation. This effect is known as the Mészáros effect [@Hu:1995em; @Meszaros:1974tb; @Weinberg:2002kg; @Coles:1995bd].
If we require that the relevant scales, i.e. $k < k_{\rm eq}$, must satisfy condition (\[cond\]) we obtain: $$\label{cond2}
k^{2}_{\rm eq} < \frac{\theta''}{c_{\rm s}^{2}\theta}\;.$$ This relation can be used to infer qualitative constraints upon a generic cosmological model. In the next section we will make use of it to find constraints on $\alpha$. As a remark, a scenario for which [Eq. ]{} holds true without demanding a vanishing speed of sound is the so-called [*fast transition*]{}, introduced and investigated in great detail in [@Piattella:2009kt].
Constraints on the generalised Chaplygin gas
============================================
The equivalence wavenumber $k_{\rm eq}$ has the following form: $$k_{\rm eq}^{2} = \frac{H_{\rm eq}^{2}}{c^2\left(1 + z_{\rm eq}\right)^{2}}\;,$$ where $H_{\rm eq}$ is the Hubble parameter evaluated at the equivalence redshift $z_{\rm eq}$. From the 5-year WMAP observation[^1], the best fit values are $z_{\rm eq} = 3176^{+151}_{-150}$ and $k_{\rm eq} = 0.00968 \pm 0.00046$ $h$ Mpc$^{-1}$, where $h$ is the Hubble constant in 100 km s$^{-1}$ Mpc$^{-1}$ units. See also [@Komatsu:2008hk; @Dunkley:2008ie].
The Hubble parameter is related to the Universe energy content by Friedmann equation, which for the pure gCg model has the following form: $$\label{gcgH}
\frac{H^{2}}{H_{0}^{2}} = \left[\bar{A} + \left(1 - \bar{A}\right)\left(1 + z\right)^{3(\alpha + 1)}\right]^{\frac{1}{\alpha + 1}}\;,$$ where $\bar{A} \equiv A/(A + B)$ and $H_0$ is the Hubble constant. Notice that $\bar{A} = -w_0$, where $w_0$ is the present time equation of state parameter of the Universe.
Let $z_{\rm tr}$ be the redshift at which the accelerated phase of expansion begins. From [Eq. ]{}, we calculate the following relation between $\bar{A}$ and $z_{\rm tr}$: $$\label{Abaralpharel}
\bar{A} = \frac{\left(1 + z_{\rm tr}\right)^{3\left(\alpha + 1\right)}}{2 + \left(1 + z_{\rm tr}\right)^{3\left(\alpha + 1\right)}}\;.$$ From now on we make use of $\left(z_{\rm tr},\alpha\right)$ as independent parameters.
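A quick consistency check of Eq. (\[Abaralpharel\]) (a numerical sketch, not from the paper): with $\bar{A}$ fixed this way, the gCg equation-of-state parameter $w = p/\rho = -\bar{A}/[\bar{A} + (1-\bar{A})(1+z)^{3(\alpha+1)}]$ crosses $-1/3$, the threshold for accelerated expansion, exactly at $z = z_{\rm tr}$:

```python
def a_bar(z_tr, alpha):
    """A_bar as a function of the transition redshift, Eq. (Abaralpharel)."""
    y = (1.0 + z_tr) ** (3.0 * (alpha + 1.0))
    return y / (2.0 + y)

def w_gcg(z, z_tr, alpha):
    """gCg equation-of-state parameter w = p / rho."""
    ab = a_bar(z_tr, alpha)
    return -ab / (ab + (1.0 - ab) * (1.0 + z) ** (3.0 * (alpha + 1.0)))

# At z = z_tr the equation of state equals -1/3, for any alpha
w_at_transition = w_gcg(0.79, z_tr=0.79, alpha=2.0)
```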
Plugging Eqs. (\[gcgeos\]), (\[gcgrhoevo\]), (\[gCgsos\]), (\[gcgH\]) and (\[Abaralpharel\]) into the definition (\[kJ2def\]), we plot in Fig. \[Fig1\] the Jeans wavenumber as function of $\alpha$ for fixed $z = 0$ and $z_{\rm tr} = 0.79$ and the equivalence wavenumber, computed from [Eq. ]{}, as function of $\alpha$ for a fixed $z_{\rm eq} = 3176$. The value we have chosen for the transition redshift $z_{\rm tr}$ is the WMAP5 best fit, see [@Komatsu:2008hk; @Dunkley:2008ie].
![$k_{\rm J}$ (black curve) and $k_{\rm eq}$ (red “quasi-horizontal” line) as functions of $\alpha$. $k_{\rm J}$ is evaluated at $z = 0$ and $z_{\rm tr} = 0.79$, while $k_{\rm eq}$ is evaluated at $z_{\rm eq} = 3176$. The wavenumbers are in units $h$ Mpc$^{-1}$.[]{data-label="Fig1"}](Figures/fig.eps){width="0.7\columnwidth"}
The most intriguing feature of Fig. \[Fig1\] is that the Jeans wavenumber has a minimum value for $\alpha \approx 1$. For sufficiently small or sufficiently large values of $\alpha$ it grows and equals $k_{\rm eq}$. From our numerical computation we have inferred the following constraints: $\alpha \lesssim 10^{-3}$ and $\alpha \gtrsim 250$.
It is also possible to obtain these results analytically, by means of approximations. Indeed, when $\alpha \ll 1$ we expand $k_{\rm eq}^{2}$ and $\alpha k_{\rm J}^{2}$ in Taylor series and find: $$\begin{aligned}
k^{2}_{\rm eq} &=& \frac{\left(1 + z_{\rm tr}\right)^{3} + 2\left(1 + z_{\rm eq}\right)^{3}}{\left(1 + z_{\rm eq}\right)^{2}\left[2 + \left(1 + z_{\rm tr}\right)^{3}\right]} + O(\alpha)\;,\\ \nonumber\\
\alpha k_{\rm J}^{2} &=& \frac{3\left[4 + \left(1 + z_{\rm tr}\right)^{3}\right]}{4\left(1 + z_{\rm tr}\right)^{3}} + O(\alpha)\;.\end{aligned}$$ To leading order in $\alpha$, equating the above expressions gives the upper bound for small values of $\alpha$. For $z_{\rm eq} = 3176$ and $z_{\rm tr} = 0.79$: $\alpha \lesssim 10^{-3}$.
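Evaluating the two leading-order expressions numerically (a sketch reproducing the quoted numbers, in units where $H_0 = c = 1$) gives the small-$\alpha$ bound directly:

```python
def small_alpha_bound(z_eq=3176.0, z_tr=0.79):
    """Solve alpha * k_J^2 = k_eq^2 at leading order in alpha (H_0 = c = 1 units)."""
    y = (1.0 + z_tr) ** 3
    keq2 = (y + 2.0 * (1.0 + z_eq) ** 3) / ((1.0 + z_eq) ** 2 * (2.0 + y))
    alpha_kJ2 = 3.0 * (4.0 + y) / (4.0 * y)
    return alpha_kJ2 / keq2

# Roughly 1.6e-3, consistent with the quoted alpha <~ 10^-3
bound = small_alpha_bound()
```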
Now expand $k_{\rm eq}^{2}$ in Taylor series for $\alpha \gg 1$: $$\label{keqalphainf}
k^{2}_{\rm eq} = \frac{1 + z_{\rm eq}}{\left(1 + z_{\rm tr}\right)^{3}} + O\left(\frac{1}{\alpha}\right)\;.$$ The corresponding expansion for $k_{\rm J}^{2}$ is less immediate. Making use of the asymptotic forms of the speed of sound and of the Hubble parameter, which we will give in [Eq. ]{} and in [Eq. ]{}, we find: $$\label{kJ2alphainf}
k_{\rm J}^{2} = \left\{
\begin{array}{cl}
x^{3\alpha}\left[\dfrac{6}{\alpha}\dfrac{x^4}{\left(1 + z_{\rm tr}\right)^2} + O\left(\dfrac{1}{\alpha^2}\right)\right] & \mbox{ for } x > 1\\ \\
\dfrac{9\alpha}{4}\dfrac{1}{\left(1 + z\right)^2}\left[1 + O\left(x^{3\alpha}\right)\right] & \mbox{ for } x < 1
\end{array}
\right.\;,$$ where we have defined $$x \equiv \frac{1 + z}{1 + z_{\rm tr}}\;.$$ From [Eq. ]{}, to leading order in $1/\alpha$ the Jeans wavenumber is an exponential function of $\alpha$ for $x > 1$; for $x < 1$, the expansion can be performed only with respect to $x^{3\alpha}$ and, to leading order, $k_{\rm J}^{2}$ grows linearly with $\alpha$. Considering the latter case, we equate (\[kJ2alphainf\]) to (\[keqalphainf\]), each to leading order, and find $\alpha \gtrsim 250$ (for $z = 0$, $z_{\rm eq} = 3176$ and $z_{\rm tr} = 0.79$).
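The same comparison can be sketched numerically for $\alpha \gg 1$: solving $(9\alpha/4)/(1+z)^{2} = (1+z_{\rm eq})/(1+z_{\rm tr})^{3}$ for $\alpha$ at $z = 0$ reproduces the quoted bound (units with $H_0 = c = 1$):

```python
def large_alpha_bound(z_eq=3176.0, z_tr=0.79, z=0.0):
    """Solve (9 alpha / 4) / (1 + z)^2 = (1 + z_eq) / (1 + z_tr)^3 for alpha."""
    keq2 = (1.0 + z_eq) / (1.0 + z_tr) ** 3
    return 4.0 * keq2 * (1.0 + z) ** 2 / 9.0

# Roughly 250, consistent with the quoted alpha >~ 250
alpha_min = large_alpha_bound()
```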
Note that we have neglected the CDM, the baryon and the radiation components. Neglecting the CDM component is reasonable, because the gCg model aims at a unified description of Dark Matter and Dark Energy. Adding a CDM component would then spoil its purpose. Moreover, neglecting radiation is also reasonable, because the minimum value of the Jeans wavenumber is at late times ($z \approx 0$), where radiation is subdominant. As for baryons, their presence would have the effect of lowering the average speed of sound, thus increasing the Jeans wavenumber and loosening the constraints we have found. Nonetheless, at late times the baryon component is also subdominant with respect to the gCg one, so we can reasonably neglect it.
On the other hand, in the calculation of $k_{\rm eq}^{2}$ we cannot neglect radiation and baryons, because they are important at $z_{\rm eq} = 3176$. Therefore, since at early times the gCg and the $\Lambda$CDM model are indistinguishable, we use the WMAP5 result $k_{\rm eq}~=~0.00968$ $h$ Mpc$^{-1}$ and we find that $\alpha \lesssim 10^{-3}$ and $\alpha \gtrsim 350$. As we expected, taking into account the radiation contribution has the effect of increasing the lower bound for large values of $\alpha$.
Calculation of the ISW effect for the extreme limit of the generalised Chaplygin gas
====================================================================================
In this section we calculate the integrals (\[ClISW\]) and (\[ThetalISW\]) in the limit $\alpha \to \infty$. To this purpose, in place of [Eq. ]{}, we employ the evolution equation for the gravitational potential $\Phi$, namely $$\label{eqPhi}
\Phi'' + 3\mathcal{H}\left(1 + c_{\rm s}^{2}\right)\Phi' + \left(2\mathcal{H}' +
\mathcal{H}^{2} + 3\mathcal{H}^{2}c_{\rm s}^{2} + k^{2}c_{\rm s}^{2}\right)\Phi = 0\;,$$ where $c_{\rm s}^2$ is the gCg adiabatic speed of sound defined in (\[gCgsos\]) and $\mathcal{H} = a'/a$ is the Hubble parameter written in the conformal time (see [@Mukhanov:1990me] for more detail). In the limit of very large values of $\alpha$, from [Eq. ]{} together with [Eq. ]{}, the square speed of sound has the following asymptotic behaviour $$\label{cs2alphainf}
c_{\rm s}^{2} = \left\{
\begin{array}{cl}
\dfrac{\alpha}{2}\left[x^{-3\alpha} + O\left(x^{-6\alpha}\right)\right] & \mbox{ for } x > 1\\ \\
\alpha\left[1 - 2x^{3\alpha} + O\left(x^{6\alpha}\right)\right] & \mbox{ for } x < 1
\end{array}
\right.\;,$$ with $c_{\rm s}^{2} = \alpha/3$ for $x = 1$. Friedmann equation (\[gcgH\]) becomes: $$\label{gcgH2alphainf}
\frac{H^{2}}{H_0^2} = \left\{
\begin{array}{cl}
x^{3}\left[1 + \dfrac{\ln2}{\alpha} + O\left(\dfrac{1}{\alpha^2}\right)\right] & \mbox{ for } x > 1\\ \\
1 + \dfrac{2x^{3\alpha}}{\alpha} + O\left(\dfrac{x^{6\alpha}}{\alpha^2}\right) & \mbox{ for } x < 1
\end{array}
\right.\;,$$ with $\tfrac{H^{2}}{H_0^2} = 1 + \tfrac{\ln3}{\alpha} + O\left(\tfrac{1}{\alpha^2}\right)$ for $x = 1$. To leading order in $1/\alpha$, the solution of [Eq. ]{} for $x > 1$ as a function of the conformal time has the following form: $$\label{gcgH2alphainfsol1}
a = \frac{1}{1 + z_{\rm tr}}\left(\frac{\eta}{\eta_{\rm tr}}\right)^2\;,$$ where $\eta_{\rm tr}$ is the conformal time corresponding to the transition redshift $z_{\rm tr}$. For $x < 1$, let $\eta_0$ be the present epoch conformal time and normalise the scale factor as $a(\eta_0) = 1$; the corresponding solution of [Eq. ]{} is: $$\label{gcgH2alphainfsol2}
a = \frac{1}{1 + \eta_0 - \eta}\;.$$ Matching solutions (\[gcgH2alphainfsol1\]) and (\[gcgH2alphainfsol2\]) at $\eta = \eta_{\rm tr}$, we can link the transition and present-epoch conformal times to the transition redshift as follows: $$\eta_0 - \eta_{\rm tr} = z_{\rm tr}\;.$$ Note that the relevant contribution to the ISW effect comes only from solution (\[gcgH2alphainfsol2\]). In fact: $i)$ the background solution (\[gcgH2alphainfsol1\]) corresponds to a CDM-dominated Universe and $ii)$ from (\[cs2alphainf\]) for $x > 1$ the speed of sound is exponentially vanishing for $\alpha \to \infty$, so that we can reasonably assume $c_{\rm s}^{2} \approx 0$. Therefore, if we substitute [Eq. ]{} and $c_{\rm s}^{2} = 0$ into [Eq. ]{} we obtain the same evolution equation for the gravitational potential as the one in a CDM-dominated Universe, see [@Mukhanov:1990me]. In this instance, neglecting the decaying mode, $\Phi' = 0$ and no ISW effect is produced.
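The matching condition above is simple to verify numerically. A minimal sketch in Python (the values of $z_{\rm tr}$ and $\eta_{\rm tr}$ are arbitrary illustrative choices, not fitted quantities):

```python
# Minimal numerical check of the matching of the two conformal-time
# solutions; z_tr and eta_tr below are arbitrary illustrative values.

def a_matter(eta, eta_tr, z_tr):
    # CDM-like branch: a = (eta/eta_tr)^2 / (1 + z_tr)
    return (eta / eta_tr) ** 2 / (1.0 + z_tr)

def a_ds(eta, eta_0):
    # de Sitter-like branch, normalised so that a(eta_0) = 1
    return 1.0 / (1.0 + eta_0 - eta)

z_tr, eta_tr = 0.22, 3.0          # illustrative values
eta_0 = eta_tr + z_tr             # matching condition: eta_0 - eta_tr = z_tr

# Continuity of the scale factor at eta = eta_tr:
assert abs(a_matter(eta_tr, eta_tr, z_tr) - a_ds(eta_tr, eta_0)) < 1e-12
# Present-epoch normalisation a(eta_0) = 1:
assert abs(a_ds(eta_0, eta_0) - 1.0) < 1e-12
```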
We then combine [Eq. ]{} with [Eq. ]{} and use, to leading order in $x^{3\alpha}$ in (\[cs2alphainf\]) for $x < 1$, $c_{\rm s}^{2} = \alpha$: $$\label{eqPhialphainf}
\Phi'' + \frac{3\left(1 + \alpha\right)}{1 + \eta_0 - \eta}\Phi' + \left[\frac{3(1 + \alpha)}{(1 + \eta_0 - \eta)^2} + k^{2}\alpha\right]\Phi = 0\;.$$ Defining $y \equiv 1 + \eta_0 - \eta$, we recast [Eq. ]{} in the following form: $$\label{eqPhialphainfrecast}
\ddot{\Phi} - \frac{3\left(1 + \alpha\right)}{y}\dot{\Phi} + \left[\frac{3\left(1 + \alpha\right)}{y^2} + k^{2}\alpha\right]\Phi = 0\;,$$ where the dot denotes differentiation with respect to $y$. Equation (\[eqPhialphainfrecast\]) can be solved exactly in terms of Bessel functions: $$\label{solbessel}
\Phi = y^{\frac{3\alpha}{2} + 2}\left[C_1{\rm J}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right) + C_2{\rm Y}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right)\right]\;,$$ where $C_1$ and $C_2$ are arbitrary integration constants. For large values of the order, the Bessel functions can be asymptotically expanded as follows [@AS]: $${\rm J}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right) = \left(\frac{{\rm e}k\sqrt{\alpha}y}{3\alpha + 2}\right)^{3\alpha/2 + 1}\frac{1}{\sqrt{3\pi\alpha + 2\pi}}\left[1 - \frac{8}{\alpha} + O\left(\frac{1}{\alpha^2}\right)\right]$$ and $${\rm Y}_{\frac{3\alpha}{2} + 1}\left(k\sqrt{\alpha}y\right) = -\left(\frac{{\rm e}k\sqrt{\alpha}y}{3\alpha + 2}\right)^{-3\alpha/2 - 1}\sqrt{\frac{4}{3\pi\alpha + 2\pi}}\left[1 + \frac{8}{\alpha} + O\left(\frac{1}{\alpha^2}\right)\right]\;.$$ To leading order in $1/\alpha$, we plug the above asymptotic expansions into [Eq. ]{} and find: $$\label{solbesselalphainf}
\Phi = C_1\left(\frac{{\rm e}k}{3\sqrt{\alpha}}\right)^{3\alpha/2}\frac{y^{3\alpha}}{\sqrt{3\pi\alpha}} - 2C_2\left(\frac{{\rm e}k}{3\sqrt{\alpha}}\right)^{-3\alpha/2}\frac{y}{\sqrt{3\pi\alpha}}\;.$$ We assume the following initial conditions on the potential $\Phi$ in $\eta = \eta_{\rm tr}$: $\Phi\left(\eta_{\rm tr},k\right) = \Phi_{\rm tr}(k)$ and $\Phi'\left(\eta_{\rm tr},k\right) = 0$. The reason for this choice is that up to $\eta = \eta_{\rm tr}$ the potential behaves like in a CDM dominated Universe, i.e. it is constant. Solution (\[solbesselalphainf\]) then becomes: $$\label{solbesselalphainf2}
\frac{\Phi}{\Phi_{\rm tr}(k)} = \frac{1}{1 - 3\alpha}\left(\frac{y}{1 + z_{\rm tr}}\right)^{3\alpha} - \frac{3\alpha}{1 - 3\alpha}\frac{y}{1 + z_{\rm tr}}\;.$$ For the calculation of the integrals (\[ClISW\]) and (\[ThetalISW\]) consider only the contribution proportional to $y$, i.e. $$\label{solbesselalphainf2approx}
\frac{\Phi}{\Phi_{\rm tr}(k)} \approx \frac{y}{1 + z_{\rm tr}}\;,$$ since it is the dominant one for $\alpha \to \infty$. Moreover, assume that the primordial power spectrum $\Delta_{\rm R}^2$ is the Harrison-Zel’dovich scale invariant one and that it propagates invariated up to $\eta_{\rm tr}$. For convenience, define $$D \equiv \frac{k^3\left|\Phi_{\rm tr}(k)\right|^2}{2\pi^2} = \frac{9}{25}\Delta_{\rm R}^2\;;$$ combining the integrals (\[ClISW\]) and (\[ThetalISW\]) with the approximated solution (\[solbesselalphainf2approx\]) and changing the integration variable from the conformal time to the redshift we write: $$\label{ClISWalphainf}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{8l(l + 1)}{\left(1 + z_{\rm tr}\right)^2}\int_{0}^{\infty}\frac{{{\rm d}}k}{k}\left[\int_{0}^{z_{\rm tr}}{{\rm d}}z\;{\rm j}_{l}(kz)\right]^2\;.$$ Taking into account that ${\rm j}_{l}(kz) = \sqrt{\tfrac{\pi}{2kz}}\;{\rm J}_{l + 1/2}(kz)$, we write [Eq. ]{} as follows: $$\label{ClISWalphainf2}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{4\pi l(l + 1)}{\left(1 + z_{\rm tr}\right)^2}\int_{0}^{\infty}\frac{{{\rm d}}k}{k^2}\int_{0}^{z_{\rm tr}}\frac{{{\rm d}}u}{\sqrt{u}}\int_{0}^{z_{\rm tr}}\frac{{{\rm d}}v}{\sqrt{v}}\;{\rm J}_{l + 1/2}(ku){\rm J}_{l + 1/2}(kv)\;.$$ Consider now the following case of the Weber-Schafheitlin type integrals [@AS]: $$\label{WSformula}
\int_{0}^{\infty}{{\rm d}}t\;\frac{{\rm J}_{l + 1/2}(at){\rm J}_{l + 1/2}(bt)}{t^2} = \frac{1}{4}\frac{b^{l + 1/2}}{a^{l - 1/2}}\frac{\Gamma(l)}{\Gamma(l + 3/2)\Gamma(3/2)}{\rm F}\left(l,-\frac{1}{2};l + \frac{3}{2};\frac{b^2}{a^2}\right)\;,$$ where ${\rm F}$ is the Gauss hypergeometric function. Note that formula (\[WSformula\]) holds true only if $b < a$. We perform the $k$ integration in [Eq. ]{} according to [Eq. ]{} and find the following expression: $$\label{ClISWalphainf3}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{4\sqrt{\pi}\;l(l + 1)}{\left(1 + z_{\rm tr}\right)^2}\int_{0}^{z_{\rm tr}}\frac{{{\rm d}}u}{u^l}\int_{0}^{u}{{\rm d}}v\;v^l\frac{\Gamma(l)}{\Gamma(l + 3/2)}{\rm F}\left(l,-\frac{1}{2};l + \frac{3}{2};\frac{v^2}{u^2}\right)\;,$$ where we have modified the integration range of $v$ in order to satisfy the condition $v < u$ and thus be allowed to apply [Eq. ]{}. Since the integrand of [Eq. ]{} is symmetric with respect to the line $v = u$, in [Eq. ]{} we recover the correct value of the integral by multiplying by a factor of 2.
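The Bessel-function solution (\[solbessel\]) can be verified by substituting it back into Eq. (\[eqPhialphainfrecast\]). A minimal sketch in Python with SciPy ($\alpha$, $k$, $C_1$, $C_2$ are arbitrary test values):

```python
# Check that Phi = y^(3a/2+2) [C1 J_nu(k sqrt(a) y) + C2 Y_nu(k sqrt(a) y)],
# with nu = 3a/2 + 1, solves the recast potential equation.
# alpha, k, C1, C2 are arbitrary test values.
import numpy as np
from scipy.special import jvp, yvp  # n-th derivatives of J_nu, Y_nu

alpha, k, C1, C2 = 4.0, 0.3, 1.0, 0.5
p, nu, b = 1.5 * alpha + 2.0, 1.5 * alpha + 1.0, k * np.sqrt(alpha)

def phi_derivs(y):
    """Return (Phi, Phi', Phi'') at y via Bessel-function derivatives."""
    Z  = C1 * jvp(nu, b * y, 0) + C2 * yvp(nu, b * y, 0)
    Z1 = C1 * jvp(nu, b * y, 1) + C2 * yvp(nu, b * y, 1)
    Z2 = C1 * jvp(nu, b * y, 2) + C2 * yvp(nu, b * y, 2)
    f  = y**p * Z
    f1 = p * y**(p - 1) * Z + y**p * b * Z1
    f2 = (p * (p - 1) * y**(p - 2) * Z
          + 2 * p * y**(p - 1) * b * Z1 + y**p * b**2 * Z2)
    return f, f1, f2

for y in (0.8, 1.0, 1.5):
    f, f1, f2 = phi_derivs(y)
    residual = f2 - 3 * (1 + alpha) / y * f1 \
               + (3 * (1 + alpha) / y**2 + k**2 * alpha) * f
    assert abs(residual) < 1e-8 * max(abs(f), 1.0)
```

The residual vanishes to numerical precision because the exponent $p = 3\alpha/2 + 2$ and order $\nu = 3\alpha/2 + 1$ satisfy $1 - 2p = -3(1+\alpha)$ and $p^2 - \nu^2 = 3(1+\alpha)$, the standard conditions for a Bessel-type solution of this equation.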
Expanding ${\rm F}$ in a hypergeometric series and performing the $u$ and $v$ integrations, we find the following series expansion for the ISW contribution: $$\label{ClISWalphainf4}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = -\frac{l(l + 1)z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\sum_{n=0}^{\infty}\frac{\Gamma(l + n)\Gamma(n - 1/2)}{(l + 2n + 1)\Gamma(n + 1)\Gamma(l + n + 3/2)}\;,$$ which can be recast in the following more compact form: $$\label{ClISWalphainf5}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{2\sqrt{\pi}\;z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\frac{\Gamma(l + 1)}{\Gamma(l + 3/2)}\;{}_{3}{\rm F}_{2}\left(l,-\frac{1}{2},\frac{l + 1}{2};l + \frac{3}{2}, \frac{l + 3}{2}; 1\right)\;,$$ where ${}_{3}{\rm F}_{2}$ is a generalised hypergeometric function [@Erdelyi].
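The equivalence of the series (\[ClISWalphainf4\]) and the compact form (\[ClISWalphainf5\]) can be checked numerically. A minimal pure-Python sketch (the common prefactor $z_{\rm tr}^2/(1+z_{\rm tr})^2$ is divided out; term recurrences are used to avoid overflowing Gamma functions):

```python
# Numerical check that the series form and the 3F2 compact form of
# l(l+1)C_l^ISW/(2 pi D) agree (common factor z_tr^2/(1+z_tr)^2 dropped).
from math import gamma, pi, sqrt

def series_form(l, nmax=4000):
    # term_0 = Gamma(l) Gamma(-1/2) / [(l+1) Gamma(l+3/2)], then recurrence
    term = gamma(l) * gamma(-0.5) / ((l + 1) * gamma(l + 1.5))
    total = 0.0
    for n in range(nmax):
        total += term
        term *= (l + n) * (n - 0.5) * (l + 2 * n + 1) / \
                ((l + 2 * n + 3) * (n + 1) * (l + n + 1.5))
    return -l * (l + 1) * total

def compact_form(l, nmax=4000):
    # 3F2(l, -1/2, (l+1)/2; l+3/2, (l+3)/2; 1) summed by term recurrence
    a1, a2, a3, b1, b2 = l, -0.5, (l + 1) / 2, l + 1.5, (l + 3) / 2
    term, F = 1.0, 0.0
    for n in range(nmax):
        F += term
        term *= (a1 + n) * (a2 + n) * (a3 + n) / ((b1 + n) * (b2 + n) * (1 + n))
    return 2 * sqrt(pi) * gamma(l + 1) / gamma(l + 1.5) * F

for l in (1, 2, 5):
    assert abs(series_form(l) - compact_form(l)) < 1e-6 * abs(compact_form(l))
```

Both series converge (terms decay as $n^{-4}$, consistent with the convergence parameter $s = 3$ of the $_3{\rm F}_2$ at unit argument), so a few thousand terms suffice.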
The asymptotic behaviour of [Eq. ]{} for large values of $l$ has the following form: $$\label{ClISWalphainfasympt}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} \sim \frac{2\pi z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\frac{1}{l}\;,$$ which is computed in Appendix \[App\]. It can be also directly obtained from the calculations of Kofman and Starobinsky (see Eq. (12) of [@Kofman:1985fp]).
In Fig. \[FigISW\] we plot [Eq. ]{} and its asymptotic expansion [Eq. ]{} as functions of $l$ for $z_{\rm tr} = 0.22$ and $z_{\rm tr} = 0.79$. The former value is the best fit from the SNIa data analysis performed in the next section.
![Plot of the ISW contribution to the angular power spectrum $l(l + 1)C_l/2\pi$ normalised to $D$ (solid lines) and of its asymptotic form for large $l$’s (dashed lines). Upper panel: $z_{\rm tr} = 0.22$. Lower panel: $z_{\rm tr} = 0.79$.[]{data-label="FigISW"}](Figures/Iswztr022.eps "fig:"){width="0.7\columnwidth"}\
![Plot of the ISW contribution to the angular power spectrum $l(l + 1)C_l/2\pi$ normalised to $D$ (solid lines) and of its asymptotic form for large $l$’s (dashed lines). Upper panel: $z_{\rm tr} = 0.22$. Lower panel: $z_{\rm tr} = 0.79$.[]{data-label="FigISW"}](Figures/ISWztr079.eps "fig:"){width="0.7\columnwidth"}
The black line in the lower panel of Fig. \[FigISW\] can be compared with those drawn in Fig. 5 of [@Urakawa:2009jb], where the ISW effect contribution is computed for the gCg up to $\alpha = 10$. Note that the $\alpha\to\infty$ contribution is smaller than the $\alpha = 0$ one, which corresponds to the $\Lambda$CDM case and was computed for the first time by Kofman and Starobinsky [@Kofman:1985fp]. We expected this result for the following reason. When $\alpha \to \infty$ the Jeans wavenumber diverges. Therefore, according to [@Bertacca:2007cv], the contribution to [Eq. ]{} proportional to the fourth power of the speed of sound is absent and the behaviour of $l(l + 1)C_l^{\rm ISW}/2\pi$ depends principally on the background evolution. The latter is very similar to the $\Lambda$CDM one. The relevant difference is that for the gCg $\alpha\to\infty$ model the transition from the CDM-like phase to the dS one is very sharp, whereas for $\Lambda$CDM it is much smoother. Since in the former case the CDM-like phase lasts longer, the ISW effect is weaker.
Finally, note also how in Fig. 5 of [@Urakawa:2009jb] the trend of a decreasing ISW effect contribution can already be discerned for $\alpha = 10$ and large $l$'s.
Background evolution for large $\alpha$ and SNIa data analysis
==============================================================
In this section we address more quantitatively the behaviour of the background evolution for large values of $\alpha$. Let us consider the Friedmann equation (\[gcgH2alphainf\]) to leading order in $1/\alpha$: $$\label{Hparam}
\frac{H^{2}}{H_{0}^{2}} \sim \left\{
\begin{array}{cl}
\left(\dfrac{1 + z}{1 + z_{\rm tr}}\right)^{3} & \mbox{ for } z > z_{\rm tr}\\ \\
1 & \mbox{ for } z \leq z_{\rm tr}
\end{array}
\right.\;.$$ As we pointed out in the previous section, [Eq. ]{} mimics the expansion of a pure CDM Universe for $z > z_{\rm tr}$ and the one of a dS Universe for $z \leqslant z_{\rm tr}$. Note that, within this scenario, the present equation of state parameter is $w_0 = -1$, in contrast with the $w_0 \approx -0.7$ of the $\Lambda$CDM model.
We now analyse the background evolution given by [Eq. ]{} on the basis of the 157 nearby SNIa of the Constitution set [@Hicken:2009dk]. The supernova data consist of an array of distance moduli $\mu$ defined as: $$\label{mu}
\mu = m - M = 5\log\left(\frac{D_{\rm L}}{{\rm Mpc}}\right) + 25\;,$$ where $m$ and $M$ are, respectively, the apparent and the absolute magnitudes and $D_{\rm L}$ is the luminosity distance expressed in Mpc units: $$\label{d}
D_{\rm L}(z) = c(1 + z)\int_{0}^{z}\frac{{{\rm d}}z'}{H(z')} = \frac{c(1 +
z)}{H_0}\int_{0}^{z}\frac{{{\rm d}}z'}{E(z')}\;,$$ where $E(z)$ is the Hubble parameter normalised to $H_0$.
The integral in [Eq. ]{} can be exactly solved for the Hubble parameter given in [Eq. ]{}: $$\label{ldist}
\int_{0}^{z}\frac{{{\rm d}}z'}{E(z')} = \left\{
\begin{array}{cl}
\left(3z_{\rm tr} + 2\right) - 2\left(1 + z\right)^{-1/2}\left(1 + z_{\rm tr}\right)^{3/2} & \mbox{ for } z > z_{\rm tr}\\ \\
z & \mbox{ for } z \leq z_{\rm tr}
\end{array}
\right.\;.$$ Following [@Riess:1998cb], we assume flat priors for $z_{\rm tr}$ and $h$ and assume that the distance moduli are normally distributed. The probability density function (PDF) of the parameters has then the following form [@Lupton1993]: $$\label{pdf}
p\left(z_{\rm tr},h|\mu_o\right) = Ce^{-\chi^2\left(h,z_{\rm tr}\right)/2}\;,$$ where $\mu_o$ is the set of the observed distance moduli, $$\label{chisquared}
\chi^2\left(h,z_{\rm tr}\right) = \sum_{i=1}^n \left[\frac{\mu_{o,i} - 5\log\left(D_{\rm L}/{\rm Mpc}\right) - 25}{\sigma_{\mu_{o,i}}}\right]^2$$ and the normalisation constant $C$ has the following form: $$\frac{1}{C} = \int\;{{\rm d}}z_{\rm tr}\int\;{{\rm d}}h\;e^{-\chi^2\left(h,z_{\rm tr}\right)/2}\;,$$ where the integration ranges over the parameters are, in principle, $h \in (-\infty,\infty)$ and $z_{\rm tr} \in (-1,\infty)$. However, we choose the more reasonable ranges $z_{\rm tr} \in (0,2)$ and $h \in (0.5,0.9)$.
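As a cross-check of the closed-form integral (\[ldist\]), one can compare it with direct numerical quadrature of $1/E(z')$. A minimal sketch in Python with SciPy ($z_{\rm tr} = 0.22$ is used for illustration):

```python
# Cross-check of the closed-form integral of dz'/E(z') against
# numerical quadrature; z_tr = 0.22 is used for illustration.
from scipy.integrate import quad

z_tr = 0.22

def inv_E(z):
    # E(z) = [(1+z)/(1+z_tr)]^(3/2) for z > z_tr (CDM-like), 1 otherwise (dS)
    return ((1 + z_tr) / (1 + z)) ** 1.5 if z > z_tr else 1.0

def closed_form(z):
    if z <= z_tr:
        return z
    return (3 * z_tr + 2) - 2 * (1 + z) ** -0.5 * (1 + z_tr) ** 1.5

for z in (0.1, 0.22, 0.5, 1.0, 1.5):
    pts = [z_tr] if z > z_tr else None   # breakpoint at the kink in E(z)
    numeric, _ = quad(inv_E, 0, z, points=pts)
    assert abs(numeric - closed_form(z)) < 1e-8
```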
In [Eq. ]{} $\sigma_{\mu_{o,i}}$ are the estimated errors in the individual distance moduli, including uncertainties in galaxy redshifts and also taking into account the dispersion in supernova redshifts due to peculiar velocities.
After marginalization over $h$, i.e. integrating [Eq. ]{} in $h \in (0.5,0.9)$, we show in Fig. \[Fig2\] the PDF for the parameter $z_{\rm tr}$.
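The grid-based fit and marginalisation described above can be sketched as follows. This is an illustrative toy reconstruction with synthetic, noiseless moduli generated from the model itself (not the Constitution data); the grids and the error $\sigma = 0.1$ mag are arbitrary choices:

```python
# Sketch of the grid-based chi^2 fit and marginalisation over h.
# The "data" are synthetic, noiseless moduli generated from the model
# itself with z_tr = 0.22, h = 0.7; sigma = 0.1 mag is illustrative.
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def comoving_integral(z, z_tr):
    # closed-form integral of dz'/E(z') for the step CDM-dS background
    return np.where(z <= z_tr,
                    z,
                    (3 * z_tr + 2) - 2 * (1 + z) ** -0.5 * (1 + z_tr) ** 1.5)

def mu_model(z, z_tr, h):
    d_L = C_KM_S * (1 + z) / (100.0 * h) * comoving_integral(z, z_tr)  # Mpc
    return 5 * np.log10(d_L) + 25

z_obs = np.linspace(0.02, 1.0, 157)
mu_obs = mu_model(z_obs, 0.22, 0.70)
sigma = 0.1

z_grid = np.linspace(0.05, 0.6, 221)
h_grid = np.linspace(0.5, 0.9, 81)
chi2 = np.array([[np.sum(((mu_obs - mu_model(z_obs, zt, h)) / sigma) ** 2)
                  for h in h_grid] for zt in z_grid])

# Marginalise over h, then locate the most probable z_tr
pdf = np.exp(-0.5 * (chi2 - chi2.min())).sum(axis=1)
z_best = z_grid[np.argmax(pdf)]
assert abs(z_best - 0.22) < 0.01
```

With noiseless synthetic input the marginalised PDF peaks at the true $z_{\rm tr}$; with real data the same machinery yields the confidence intervals quoted below.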
![Plot of the PDF [*vs*]{} the parameter $z_{\rm tr}$ after marginalization over $h$.[]{data-label="Fig2"}](Figures/pdfplot.eps){width="0.7\columnwidth"}
The most probable value is $z_{\rm tr} = 0.222$. At the 68% confidence level $z_{\rm tr} \in \left(0.198,0.246\right)$, at the 95% confidence level $z_{\rm tr} \in \left(0.174,0.270\right)$ and at the 99% confidence level $z_{\rm tr} \in \left(0.154,0.290\right)$.
Summary and Conclusions
=======================
In the present paper we have investigated the production of the ISW effect within the generalised Chaplygin gas cosmological model. Through an argument based on the Mészáros effect, we have found the new constraint $\alpha \gtrsim 350$. For this range of values, the Jeans wavenumber is sufficiently large that the resulting ISW effect is not strong. Indeed, a direct calculation confirms this qualitative constraint: in the limit $\alpha \to \infty$ the ISW effect contribution to the CMB angular power spectrum is very similar to the one computed for $\alpha = 0$, i.e. for the $\Lambda$CDM model.
We have then addressed the background evolution of the Universe for $\alpha \to \infty$ and we have found that the model behaves like CDM at early times and then abruptly passes to a dS phase. Taking advantage of the SNIa Constitution set analysis, we have placed the transition at a redshift $z_{\rm tr} = 0.22$.
In conclusion, it seems that the gCg model may be viable not only for very small values of $\alpha$ but also for very large ones (we have here limited our discussion to the ISW effect only). However, it must be pointed out that in both cases a degeneracy problem appears: $i)$ for $\alpha \to 0$, it is well known that the gCg model degenerates into $\Lambda$CDM; $ii)$ for $\alpha \to \infty$ the degeneracy is with a “step-transition” CDM-dS model. Note that in the second case the degeneracy is not complete. In fact, the “original” CDM-dS model has a vanishing speed of sound, whereas in the corresponding limit of the gCg the speed of sound diverges, so the scenario is completely different. A more complete analysis of this picture would therefore be interesting and could perhaps be performed in the framework of the [*Cuscuton*]{} model introduced in [@Afshordi:2006ad; @Afshordi:2007yx].
Calculation of the asymptotic behaviour of the ISW effect contribution for large $l$’s {#App}
======================================================================================
Consider [Eq. ]{}, recast in the following way: $$\label{ClISWalphainf5new}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{2\sqrt{\pi}z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\frac{\Gamma(l + 1)}{\Gamma(l + 3/2)}\;{}_{3}{\rm F}_{2}\left(\frac{l + 1}{2}, l,-\frac{1}{2};l + \frac{3}{2}, \frac{l + 3}{2}; 1\right)\;.$$ Adopt the following transformation for the ${}_{3}{\rm F}_{2}$ function [@Bailey]: $$\label{3F2trans}
{}_{3}{\rm F}_{2}\left(a, b, c; e, f; 1\right) = \dfrac{\Gamma(e)\Gamma(f)\Gamma(s)}{\Gamma(a)\Gamma(s + b)\Gamma(s + c)}\;{}_{3}{\rm F}_{2}\left(e - a, f - a, s; s + b, s + c; 1\right)\;,$$ where $s = e + f - a - b - c$ and $\{a, b, c, e, f\}$ are arbitrary parameters. Employ [Eq. ]{} for the generalised hypergeometric function in [Eq. ]{} and find: $$\label{asympt}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \dfrac{2\sqrt{\pi}z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\dfrac{\Gamma\left(\frac{l + 3}{2}\right)\Gamma\left(l + 1\right)\Gamma(3)}{\Gamma\left(\frac{l + 1}{2}\right)\Gamma(5/2)\Gamma(l + 3)}\;{}_{3}{\rm F}_{2}\left(1, \frac{l}{2} + 1, 3; \frac{5}{2}, l + 3; 1\right)\;.$$ Making explicit the series expansion of the generalised hypergeometric function we obtain $$\label{asympt2}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} = \frac{2\sqrt{\pi}z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\dfrac{\Gamma\left(\frac{l + 3}{2}\right)\Gamma(l + 1)}{\Gamma\left(\frac{l + 1}{2}\right)\Gamma(\frac{l}{2} + 1)}\sum_{n=0}^{\infty}\dfrac{\Gamma\left(\frac{l}{2} + 1 + n\right)\Gamma(n + 3)}{\Gamma(n + 5/2)\Gamma(l + 3 + n)}\;.$$ We approximate the Gamma functions for large $l$’s by means of Stirling formula [@AS] and find $$\label{asympt3}
\frac{l(l + 1)C_l^{\rm ISW}}{2\pi D} \sim \frac{\sqrt{\pi}z_{\rm tr}^2}{\left(1 + z_{\rm tr}\right)^2}\;\dfrac{1}{l}\;\sum_{n=0}^{\infty}\dfrac{\Gamma(n + 3)}{2^{n}\Gamma(n + 5/2)}\;.$$ The series in [Eq. ]{} can be summed: $$\label{seriesPi}
\sum_{n=0}^{\infty}\dfrac{\Gamma(n + 3)}{2^{n}\Gamma(n + 5/2)} = 2\sqrt{\pi}\;,$$ and from [Eq. ]{} we obtain the result of [Eq. ]{}.
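The numerical value of the series (\[seriesPi\]) is easily confirmed with a short pure-Python sketch:

```python
# Numerical confirmation that sum_n Gamma(n+3) / (2^n Gamma(n+5/2)) = 2 sqrt(pi)
from math import gamma, pi, sqrt

term, total = gamma(3) / gamma(2.5), 0.0   # n = 0 term
for n in range(200):
    total += term
    term *= (n + 3) / (2 * (n + 2.5))      # ratio of consecutive terms
assert abs(total - 2 * sqrt(pi)) < 1e-9
```

The term ratio tends to $1/2$, so the series converges geometrically and 200 terms give far more than the requested accuracy.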
[99]{}
A.Y. Kamenshchik, U. Moschella and V. Pasquier, [*An alternative to quintessence*]{}, , . M. Bordemann and J. Hoppe, [*The Dynamics of relativistic membranes. 1. Reduction to two-dimensional fluid dynamics*]{}, . J. Hoppe, [*Supermembranes in four-dimensions*]{}, . R. Jackiw and A. P. Polychronakos, [*Supersymmetric Fluid Mechanics*]{}, . I. Zlatev, L. M. Wang and P. J. Steinhardt, [*Quintessence, Cosmic Coincidence, and the Cosmological Constant*]{}, . A.G. Riess [*et al.*]{} \[Supernova Search Team Collaboration\], [*Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant*]{}, , . S. Perlmutter [*et al.*]{} \[Supernova Cosmology Project Collaboration\], [*Measurements of Omega and Lambda from 42 High-Redshift Supernovae*]{}, , . P. Wu and H. Yu, [*Generalized Chaplygin gas model: constraints from Hubble parameter versus redshift data*]{}, , . T. Davis [*et al.*]{}, [*Scrutinizing exotic cosmological models using ESSENCE supernova data combined with other cosmological probes*]{}, , . J. Lu, L. Xu, B. Chang and Y. Gui, [*Observational constraints and geometrical diagnostics for generalized Chaplygin gas*]{}, . J. Lu, L. Xu, J. Li, B. Chang, Y. Gui and H. Liu, [*Constraints on modified Chaplygin gas from recent observations and a comparison of its status with other models*]{}, . T. Barreiro, O. Bertolami and P. Torres, [*WMAP5 constraints on the unified model of dark energy and dark matter*]{}, , . H.B. Sandvik, M. Tegmark, M. Zaldarriaga and I. Waga, [*The end of unified dark matter?*]{}, , . D. Carturan and F. Finelli, [*Cosmological Effects of a Class of Fluid Dark Energy Models*]{}, , . R. Bean and O. Dore, [*Are Chaplygin gases serious contenders to the dark energy throne?*]{}, , . L. Amendola, F. Finelli, C. Burigana and D. Carturan, [*WMAP and the generalized Chaplygin gas*]{}, , . L. Amendola, I. Waga and F. Finelli, [*Observational constraints on silent quartessence*]{}, , . T. Giannantonio and A. 
Melchiorri, [*Chaplygin gas in light of recent integrated Sachs-Wolfe effect data*]{}, 2006 , . L.M.G. Beca, P.P. Avelino, J.P.M. de Carvalho and C.J.A.P. Martins, [*Role of baryons in unified dark matter models*]{}, , . D. Bertacca and N. Bartolo, [*ISW effect in the unified dark matter scalar field cosmologies: an analytical approach*]{}, . E. Babichev, V. Mukhanov and A. Vikman, [*k-Essence, superluminal propagation, causality and emergent geometry*]{}, . V. Gorini, A.Y. Kamenshchik, U. Moschella, O.F. Piattella and A.A. Starobinsky, [*Gauge-invariant analysis of perturbations in Chaplygin gas unified models of dark matter and dark energy*]{}, , . J.C. Fabris, S.V.B. Goncalves, H.E.S. Velten and W. Zimdahl, [*Matter Power Spectrum for the Generalized Chaplygin Gas Model: The Newtonian Approach*]{}, , . J.C. Fabris, S.V.B. Goncalves, H.E.S. Velten and W. Zimdahl, [*Newtonian Approach to the Matter Power Spectrum of the Generalized Chaplygin Gas*]{}, . Y. Urakawa and T. Kobayashi, [*A note on observational signatures in superluminal unified dark matter models*]{}, . L. Kofman and A.A. Starobinsky, [*Effect of the cosmological constant on large scale anisotropies in the microwave background*]{}, \[\]. R.K. Sachs and A.M. Wolfe, [*Perturbations of a cosmological model and angular variations of the microwave background*]{}, . J.M. Bardeen, [*Gauge–invariant cosmological perturbation*]{}, . V.F. Mukhanov, H.A. Feldman and R.H. Brandenberger, [*Theory of cosmological perturbation*]{}, . W.T. Hu, [*Wandering in the background: A Cosmic microwave background explorer*]{}, . P. Meszaros, [*The behaviour of point masses in an expanding cosmological substratum*]{}, . S. Weinberg, [*Cosmological fluctuations of short wavelength*]{}, , . P. Coles and F. Lucchin, [*Cosmology: The Origin and evolution of cosmic structure*]{}, 1995 [*Chichester, UK: Wiley (1995) 449 p*]{} O. F. Piattella, D. Bertacca, M. Bruni and D. 
Pietrobon, [*Unified Dark Matter models with fast transition*]{}, . E. Komatsu [*et al.*]{} \[WMAP Collaboration\], [*Five-Year Wilkinson Microwave Anisotropy Probe Observations: Cosmological Interpretation*]{}, . J. Dunkley [*et al.*]{} \[WMAP Collaboration\], [*Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Likelihoods and Parameters from the WMAP data*]{}, . M. Abramowitz and I.A. Stegun, [*Handbook of Mathematical Functions With Formulas, Graphs, and Mathematical Tables*]{}, 1972 [*Dover*]{} A. Erdélyi, W. Magnus, F. Oberhettinger and F.G. Tricomi, [*Higher Transcendental Functions, Vol I*]{}, 1981 [*New York: Krieger*]{} M. Hicken [*et al.*]{}, [*Improved Dark Energy Constraints from 100 New CfA Supernova Type Ia Light Curves*]{}, . R. Lupton, [*Statistics in Theory and Practice*]{}, 1993 [*Princeton University Press, Princeton, New Jersey*]{} N. Afshordi, D.J.H. Chung and G. Geshnizjani, [*Cuscuton: A Causal Field Theory with an Infinite Speed of Sound*]{}, . N. Afshordi, D.J.H. Chung, M. Doran and G. Geshnizjani, [*Cuscuton Cosmology: Dark Energy meets Modified Gravity*]{}, . W.N. Bailey, [*Generalized Hypergeometric Series*]{}, 1935 [*Cambridge University Press*]{}
[^1]: <http://lambda.gsfc.nasa.gov/>
---
abstract: '**Controlling the propagation and polarization vectors in linear and nonlinear optical spectroscopy makes it possible to probe the anisotropy of optical responses, providing structural symmetry selective contrast in optical imaging. Here we present a novel tilted antenna-tip approach to control the optical vector-field by breaking the axial symmetry of the nano-probe in tip-enhanced near-field microscopy. This gives rise to a localized plasmonic antenna effect with significantly enhanced optical field vectors with control of both *in-plane* and *out-of-plane* components. We use the resulting vector-field specificity in the symmetry selective nonlinear optical response of second-harmonic generation (SHG) for a generalized approach to optical nano-crystallography and -imaging. In tip-enhanced SHG imaging of monolayer MoS$_2$ films and single-crystalline ferroelectric YMnO$_3$, we reveal nano-crystallographic details of domain boundaries and domain topology with enhanced sensitivity and nanoscale spatial resolution. The approach is applicable to any anisotropic linear and nonlinear optical response, and provides for optical nano-crystallographic imaging of molecular or quantum materials.**'
author:
- 'Kyoung-Duck Park'
- 'Markus B. Raschke'
bibliography:
- 'TiltSHG.bib'
title: ' Polarization control with plasmonic antenna-tips: A universal approach for optical nano-crystallography and vector-field imaging '
---
Symmetry selective optical imaging of e.g. crystallinity, molecular orientation, and static or dynamic ferroic order and polarization is desirable, yet to date optical microscopy has not provided systematic access to these internal material properties on the micro- to nano-scale. Molecular vibrations, phonons, excitons, and spins in their interaction with light give rise to an anisotropic linear and nonlinear optical response. This optical response is sensitive to the direction of the wavevector and the polarization of the optical driving fields and is correlated with the structural symmetries of the material. In reflection or transmission measurements of far-field optical imaging and spectroscopy, the transverse projection of the optical field as determined by the laws of linear and nonlinear reflection and refraction gives access to the optical selection rules associated with the material's symmetries [@anastassakis1997; @fiebig2000; @najafov2010; @yin2014], yet with limited degrees of freedom constrained by wavevector conservation in far-field optics.
In contrast, wavevector conservation is lifted in near-field scattering, depending on the structure and orientation of the nano-object acting as the scattering element. In tip-scattering-based scanning near-field microscopy and spectroscopy, this increases the degrees of freedom through the choice of incident and detected wavevectors, independently of the active control of the local polarization through an engineered antenna-tip response. However, to date, most scanning near-field microscopy studies have focused on a surface-normal-oriented antenna-tip in tip-enhanced near-field microscopy [@gerton2004; @yano2009; @zhang2013chemical], based on the hypothesis of maximum field enhancement in this configuration.
While this conventional tip geometry is useful for selectively detecting an *out-of-plane* (tip-parallel) polarized response, it reduces the detection sensitivity for *in-plane* (tip-perpendicular) polarization. Artificial tip engineering for enhanced *in-plane* sensitivity limits spatial resolution and universal applicability, and/or requires a complex tip fabrication process [@lee2007; @olmon2010; @burresi2009; @kihm2011]. These limitations in measuring the *in-plane* optical response restrict the range of optical techniques and sample systems. Specifically, predominantly two-dimensional (2D) quantum systems, such as graphene [@gerber2014; @fei2012; @chen2012], transition metal dichalcogenides (TMDs) [@park2016tmd; @bao2015; @li2015], epitaxial thin films [@damodaran2017], and important classes of transition metal oxides of layered materials [@kalantar2016], all with dominant *in-plane* excitations, are difficult to probe. Therefore, to broadly extend the range of near-field microscopy applications to the characterization of nano-photonic structures and metasurfaces [@kildishev2013], or to optical nano-crystallography and -imaging of anisotropic samples [@berweger2009; @muller2016], a new approach with extended antenna-tip vector-field control is desirable.
Here, we demonstrate a generalizable approach to control the excitation and detection polarization for both *in-plane* and *out-of-plane* vector-fields in nano-imaging, with enhanced sensitivity and without loss of spatial resolution. We break the axial symmetry of a conventional Au wire tip by varying its tilt angle with respect to the sample surface. By varying the tilt angle, we control the ratio of *in-plane* to *out-of-plane* polarization. This oblique angle of the tip axis creates a spatial confinement for free-electron oscillations and gives rise to significantly increased field enhancement in both polarization directions resulting from localized surface plasmon resonance (LSPR) effects [@talley2005; @sanders2016].
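As a toy illustration of this polarization control (a purely geometric sketch, not the FDTD calculation presented below): if the locally enhanced near-field is assumed to be polarized along the tip axis, a tip tilted by $\theta\rm{_{tip}}$ from the sample surface contributes *in-plane* and *out-of-plane* intensity fractions $\cos^2\theta\rm{_{tip}}$ and $\sin^2\theta\rm{_{tip}}$:

```python
# Toy projection model: enhanced near-field assumed polarized along the
# tip axis; theta_tip is the angle between the tip axis and the sample
# surface. Illustrative sketch only, not the FDTD calculation in the text.
from math import sin, cos, radians

def field_fractions(theta_tip_deg):
    t = radians(theta_tip_deg)
    in_plane = cos(t) ** 2       # |E_x|^2 fraction
    out_of_plane = sin(t) ** 2   # |E_z|^2 fraction
    return in_plane, out_of_plane

# Surface-normal tip (90 deg): purely out-of-plane in this picture
ip, op = field_fractions(90)
assert abs(ip) < 1e-12 and abs(op - 1) < 1e-12

# 35 deg tilted tip: both components present
ip, op = field_fractions(35)
assert ip > 0.5 and op > 0.3
```

In this simple picture the two components balance at $\theta\rm{_{tip}}$ = 45$^{\circ}$; the actual enhancement magnitudes follow from the full plasmonic response discussed below.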
Second-harmonic generation (SHG) microscopy provides structural insight into materials through the nonlinear optical response, such as crystal symmetry, orientation, defect states, stacking angle, and the number of layers [@kumar2013; @seyler2015; @hsu2014]. We take advantage of the near-field polarization control in both excitation and detection fields in symmetry-selective tip-enhanced SHG imaging, as an example, applied to different quantum materials. To quantify the enhanced sensitivity of the tilted tip and to image nano-crystallographic properties, we perform grain boundary (GB) mapping of monolayer MoS$_2$, as a model system of layered 2D materials. This is achieved through the reduced nonlinear optical susceptibility and modified selection rule at the grain boundaries. In addition, on single-crystal YMnO$_3$, by mapping both *in-plane* and *out-of-plane* nonlinear optical susceptibility $\chi$$^{(2)}_{ijk}$ components [@fiebig2002; @neacsu2009], we obtain ferroelectric domain nano-imaging facilitated by the local phase-sensitive detection [@neacsu2009], with enhanced SHG sensitivity. These experimental results demonstrate a substantial gain in image content from a simple yet effective modification to the conventional tip-enhanced imaging approach. The approach is expected to greatly enhance the sensitivity of optical nano-spectroscopy in any linear and nonlinear optical modality, extending the application space of optical nano-imaging to a wider range of materials.\
[**Experiment**]{}
The experiment is based on tip-enhanced spectroscopy [@park2016tmd], with side illumination of the electrochemically etched Au tip manipulated in a shear-force AFM as shown schematically in Fig. 1a. The sample surface can be tilted by variable angle with respect to the tip axis from 0$^{\circ}$ to 90$^{\circ}$. Excitation light provided from a Ti:sapphire oscillator (FemtoSource Synergy, Femtolasers Inc., with $\tau$ $\sim$11 fs pulse duration, center wavelength of 800 nm, 78 MHz repetition rate, and $<$ 2 mW power) is focused onto the tip-sample junction using an objective lens (NA = 0.8, W.D. = 4 mm), with polarization and dispersion control. The backscattered SHG signal is polarization selected and detected using a spectrometer (SpectraPro 500i, Princeton Instruments, f = 500 mm) with a charge-coupled device (CCD) camera (ProEM+: 1600 eXcelon3, Princeton Instruments).
In excitation and detection, we define *p* and *s* polarization as light polarized parallel and perpendicular, respectively, to the plane formed by the *k*-vector and the tip axis. In the *p*$_{in}$*p*$_{out}$ (*p*-polarized excitation and *p*-polarized detection) configuration, the broken axial symmetry gives rise to a tip-SHG response with the expected power (Fig. 1b) and polarization (Fig. 1c) dependence. In SHG nano-imaging, the intrinsic tip-SHG response can be discriminated from the tip-sample coupled response through polarization and tip-sample distance dependent measurements.
Note that a tilted tip geometry has been used in several cases of top-illumination tip-enhanced Raman spectroscopy (TERS) [@stadler2010; @chan2011], but only for ease of tip illumination, without any vector-field control.\
[**Vector-field control with plasmonic antenna tip**]{}
To characterize the local optical field enhancement with respect to the tilt angle of the Au tip, we calculate the expected optical field distribution using finite-difference time-domain (FDTD) simulations (Lumerical Solutions, Inc.) for our experimental conditions. Fig. 2a and e show the *in-plane* optical field maps ($|\textbf{\textit{E}}_x|^2$ and Re\[***E***$_x$\]) for surface normal tip orientation ($\theta$$\rm{_{tip}}$ = 90$^{\circ}$) with an SiO$_2$ substrate, with excitation (800 nm) polarization perpendicular with respect to the tip axis. A weak ***E***$_x$ field confinement at the apex is seen resulting from the transverse local antenna mode [@talley2005; @sanders2016].
To achieve an efficient local plasmon antenna effect, we model the tilted tip with excitation polarization parallel with respect to the tip axis. Fig. 2b and f show calculated $|\textbf{\textit{E}}_x|^2$ and Re\[***E***$_x$\] distributions for the 35$^{\circ}$ tilted tip orientation ($\theta$$\rm{_{tip}}$ = 35$^{\circ}$), exhibiting $\sim$6 times stronger *in-plane* optical field intensity enhancement compared to the sample surface normal orientation (Fig. 2a). Notably, for this tilt angle, the *out-of-plane* vector-field is also significantly enhanced, as seen in Fig. 2c-d and g-h.
To characterize the systematic change of the vector-field enhancement, we calculate the *in-plane* and *out-of-plane* optical field intensity with respect to tilt angle. Fig. 3a and b show simulated vector-field intensity profiles for $|\textbf{\textit{E}}_x|^2$ and $|\textbf{\textit{E}}_z|^2$ at the sample plane (the distance between tip and sample is set to 0.5 nm, see Fig. S1-S9 for the full data set of the tilt angle dependent $|\textbf{\textit{E}}_x|^2$ and $|\textbf{\textit{E}}_z|^2$). For the small (10$^{\circ}$ $\leq$ $\theta$$\rm{_{tip}}$ $\leq$ 30$^{\circ}$) and large (60$^{\circ}$ $\leq$ $\theta$$\rm{_{tip}}$ $\leq$ 90$^{\circ}$) tilt angles, the field confinement is not significantly enhanced compared to the conventional tip orientation ($\theta$$\rm{_{tip}}$ = 90$^{\circ}$) due to the overdamped resonance of the electron oscillation in a semi-infinite tip structure. In this case, the Au tip cannot sustain antenna-like *in-plane* and *out-of-plane* surface plasmon polaritons (SPPs) [@sanders2016]. On the other hand, the field confinement is significantly enhanced for tilt angles between 30$^{\circ}$ and 60$^{\circ}$ because geometrically confined free electrons give rise to an appreciable LSPR effect, as illustrated in Fig. 3c. Note that the Au tip with the SiO$_2$ substrate provides a larger vector-field enhancement than the free-standing tip because the SiO$_2$ substrate gives rise to an induced dipole coupling between the tip and sample (see Fig. S1-S9, discussing the substrate effect) [@notingher2005].
\[t!\]
--------------------------------- -------------------------------------- -------------------------------------- -------------------------------------- --------------------------------------
$\theta$$\rm{_{tip}}$ = 90$^{\circ}$ $\theta$$\rm{_{tip}}$ = 35$^{\circ}$ $\theta$$\rm{_{tip}}$ = 90$^{\circ}$ $\theta$$\rm{_{tip}}$ = 35$^{\circ}$
$\lambda$$\rm{_{exc}}$ = 532 nm 28 290 40 630
$\lambda$$\rm{_{exc}}$ = 633 nm 23 180 42 250
$\lambda$$\rm{_{exc}}$ = 800 nm 16 90 25 100
--------------------------------- -------------------------------------- -------------------------------------- -------------------------------------- --------------------------------------
: Comparison of *in-plane* and *out-of-plane* optical field intensity enhancement for conventional ($\theta$$\rm{_{tip}}$ = 90$^{\circ}$) and tilted ($\theta$$\rm{_{tip}}$ = 35$^{\circ}$) tip for selected excitation wavelengths, exhibiting larger field enhancement of the tilted tip at the resonance excitation for both $|\textbf{\textit{E}}_x|^2$ and $|\textbf{\textit{E}}_z|^2$.[]{data-label="tab:tilt"}
Fig. 3d shows calculated absorption and scattering cross-section spectra for the tilted Au tip ($\theta$$\rm{_{tip}}$ = 35$^{\circ}$) near the SiO$_2$ substrate. The LSPR of the tilted tip is at $\sim$550 nm near the interband transition of gold (2.4 eV), with modified spectral shape and linewidth due to the elongated structure, and correspondingly modified radiative damping [@grigorchuk2012]. Table \[tab:tilt\] shows comparisons of the resulting *in-plane* and *out-of-plane* vector-field intensity enhancement for the conventional tip ($\theta$$\rm{_{tip}}$ = 90$^{\circ}$) and the tilted tip ($\theta$$\rm{_{tip}}$ = 35$^{\circ}$) for selected on- and off-resonance excitation wavelengths. As can be seen, the tilted tip results in a much larger optical field enhancement for both $|\textbf{\textit{E}}_x/\textbf{\textit{E}}_0|^2$ ($\textbf{\textit{I}}$$^{35^\circ}_{\omega}$/$\textbf{\textit{I}}$$^{90^\circ}_{\omega}$ $\sim$ 6-10) and $|\textbf{\textit{E}}_z/\textbf{\textit{E}}_0|^2$ ($\textbf{\textit{I}}$$^{35^\circ}_{\omega}$/$\textbf{\textit{I}}$$^{90^\circ}_{\omega}$ $\sim$ 4-16) for all wavelengths, with the largest effect on resonance. Based on these results, we understand that the tilted tip induces a strongly localized plasmon resonance in both the *in-plane* and *out-of-plane* directions by creating a spatial confinement for the free-electron oscillation, in contrast to the reduced resonance effect for surface normal tip orientation.\
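The quoted enhancement ranges follow directly from the simulated intensities in Table \[tab:tilt\]. As a quick sanity check, the ratios computed from the transcribed table values (a trivial Python sketch, not part of the original analysis):

```python
# Simulated field-intensity enhancements from Table 1, transcribed by hand:
# for each excitation wavelength (nm), the pair (90 deg tip, 35 deg tip).
Ex2 = {532: (28, 290), 633: (23, 180), 800: (16, 90)}   # |Ex/E0|^2
Ez2 = {532: (40, 630), 633: (42, 250), 800: (25, 100)}  # |Ez/E0|^2

ratios_x = {wl: tilted / normal for wl, (normal, tilted) in Ex2.items()}
ratios_z = {wl: tilted / normal for wl, (normal, tilted) in Ez2.items()}

for wl in (532, 633, 800):
    print(f"{wl} nm: I35/I90 = {ratios_x[wl]:.1f} (in-plane), "
          f"{ratios_z[wl]:.1f} (out-of-plane)")
```

The largest ratios occur at 532 nm excitation near the LSPR, consistent with the $\sim$6-10 ($|\textbf{\textit{E}}_x|^2$) and $\sim$4-16 ($|\textbf{\textit{E}}_z|^2$) ranges quoted above.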
[**Nonlinear optical nano-crystallography and nano-imaging**]{}
As illustrated in Fig. 4a, we perform tip-enhanced nano-SHG imaging with the 35$^{\circ}$ tilted antenna-tip for single-layer MoS$_2$ films grown on a SiO$_2$/Si substrate, as a model system of *$\bar{6}$m2* point group possessing pure *in-plane* $\chi$$^{(2)}_{ijk}$ tensor elements [@malard2013; @li2013; @kumar2013].
For comparison, we first perform conventional far-field imaging to determine the crystal orientation angle ($\theta$$\rm{_{co}}$) and grain boundaries (GBs) of the TMD crystals [@yin2014; @cheng2015]. Fig. 4b and c show far-field SHG images with polarization selections of *p*$_{in}$*p*$_{out}$ and *p*$_{in}$*s*$_{out}$, respectively. From the nonvanishing $\chi$$^{(2)}_{ijk}$ tensor elements and excitation condition, the induced second-order polarization for crystals with $\theta$$\rm{_{co}}$ = 0$^{\circ}$ (C1 of Fig. 4b-c) is given by $\textbf{\textit{P}}$$_y$(2$\omega$) = 2$\varepsilon_0$$\chi$$^{(2)}_{yyy}$$\textbf{\textit{E}}$$_y$($\omega$)$^2$, where $\textbf{\textit{E}}$$_{i=x, y, z}$($\omega$) are the electric field components at the laser frequency (see Supporting Information for detailed matrix representations and calculations). Therefore, the SHG signal of crystals with $\theta$$\rm{_{co}}$ = 0$^{\circ}$ is polarized parallel to the excitation polarization ($\omega$), and these crystals are clearly observed in *p*$_{in}$*p*$_{out}$ configuration, as shown in Fig. 4b. In contrast, crystals with $\theta$$\rm{_{co}}$ = 90$^{\circ}$ (C2 of Fig. 4b-c) are seen most clearly in *p*$_{in}$*s*$_{out}$ configuration (Fig. 4c) since the induced SHG polarization is given by $\textbf{\textit{P}}$$_y$(2$\omega$) = -2$\varepsilon_0$$\chi$$^{(2)}_{yxx}$$\textbf{\textit{E}}$$_x$($\omega$)$^2$. This polarization dependence on crystallographic orientation is also confirmed in far-field SHG anisotropy measured with rotating analyzer (Fig. 4e).
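The crystal-orientation dependence described above can be sketched numerically. The following minimal Python illustration (our own, not from the original analysis) assumes the standard *$\bar{6}$m2* (D$_{3h}$) relations $\chi^{(2)}_{yyy}$ = $-\chi^{(2)}_{yxx}$ = $-\chi^{(2)}_{xxy}$ = $-\chi^{(2)}_{xyx}$ and drops the overall $\varepsilon_0$ prefactor:

```python
import numpy as np

def shg_lab_polarization(theta_co, chi=1.0, E0=1.0):
    """SHG polarization (lab frame) of a monolayer with D3h symmetry whose
    crystal frame is rotated by theta_co; excitation linearly polarized
    along the lab y axis. Prefactors are dropped (arbitrary units)."""
    c, s = np.cos(theta_co), np.sin(theta_co)
    Ex, Ey = s * E0, c * E0                   # excitation field in the crystal frame
    Px = -2.0 * chi * Ex * Ey                 # crystal-frame P_x(2w)
    Py = chi * (Ey ** 2 - Ex ** 2)            # crystal-frame P_y(2w)
    return np.array([c * Px - s * Py, s * Px + c * Py])  # rotate back to lab

for deg in (0, 15, 30, 90):
    Px_lab, Py_lab = shg_lab_polarization(np.radians(deg))
    print(f"theta_co = {deg:2d} deg: I_par ~ {Py_lab**2:.2f}, I_perp ~ {Px_lab**2:.2f}")
```

The sketch reproduces the expected I$_{\parallel}$ $\propto$ cos$^2$(3$\theta$$\rm{_{co}}$) and I$_{\perp}$ $\propto$ sin$^2$(3$\theta$$\rm{_{co}}$) anisotropy: crystals at $\theta$$\rm{_{co}}$ = 0$^{\circ}$ radiate parallel to the excitation (seen in *p*$_{in}$*p*$_{out}$), while crystals at $\theta$$\rm{_{co}}$ = 90$^{\circ}$ radiate perpendicular to it (seen in *p*$_{in}$*s*$_{out}$).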
Only $\textbf{\textit{E}}$$_{x}$($\omega$) contributes to the SHG signal in *p*$_{in}$*s*$_{out}$ configuration. Therefore, with $\textbf{\textit{I}}$$_{2\omega}$ $\propto$ $|$***P***(2$\omega$)$|$$^2$ $\propto$ $|$***E***($\omega$)$|$$^4$, we can calculate the enhanced SHG intensity using the 35$^{\circ}$ tilted tip ($\textbf{\textit{I}}$$^{35^\circ}_{2\omega, \textup{MoS$_2$}}$) compared to the conventional surface normal oriented tip ($\textbf{\textit{I}}$$^{90^\circ}_{2\omega, \textup{MoS$_2$}}$) from the FDTD simulation. As shown in Fig. 4d, the spatially integrated $|$$\textbf{\textit{E}}$$_x$($\omega$)$|$$^4$ for the 35$^{\circ}$ tilted tip at the sample plane is ${\sim}$28 times larger than that of the surface normal oriented tip ($\theta$$\rm{_{tip}}$ = 90$^{\circ}$), i.e., $\textbf{\textit{I}}$$^{35^\circ}_{2\omega, \textup{MoS$_2$}}$/$\textbf{\textit{I}}$$^{90^\circ}_{2\omega, \textup{MoS$_2$}}$ $\sim$ 28.
Fig. 4f shows a measured tip-enhanced nano-SHG image in *p*$_{in}$*s*$_{out}$ configuration for a small area selected within the far-field image. Far-field images of *p*$_{out}$ and *s*$_{out}$ detection are magnified in Fig. 4g and h for comparison. As demonstrated previously [@yin2014; @cheng2015], some GBs are visualized in the far-field SHG images due to the constructive (or destructive) interference between SHG signals of adjacent crystals. However, this interference contrast at GBs is observed only for specific crystal orientation or polarization condition [@cheng2015].
In the tip-enhanced SHG image (Fig. 4f), by contrast, a full GB map is obtained with pronounced SHG contrast. For example, while GB2 is observed in both far- and near-field images, the additional GBs (GB1, GB3, and GB4) are only seen in the near-field image. Unlike the far-field response, the near-field GB map is obtained regardless of crystal orientation and interference.
In order to assess the full benefit of the increased *in-plane* and *out-of-plane* field confinement (Fig. 2d), we then perform tip-enhanced nano-SHG imaging on single-crystalline *x*-cut YMnO$_3$, as a model system of *6mm* point group with both *in-plane* and *out-of-plane* nonlinear optical susceptibility [@fiebig2002; @neacsu2009]. We first deduce the microscopic sample orientation from a far-field SHG anisotropy measurement, as shown in Fig. 5a. Based on this information, we probe the ferroelectric $\chi$$^{(2)}_{zxx}$ = $\chi$$^{(2)}_{zyy}$ tensor elements in *p*$_{in}$*s*$_{out}$ tip-enhanced near-field microscopy configuration. The corresponding SHG polarization is then given by $\textbf{\textit{P}}$$_z$(2$\omega$) = 2$\varepsilon_0$$\chi$$^{(2)}_{zxx}$($\textbf{\textit{E}}$$_x$($\omega$)$^2$+$\textbf{\textit{E}}$$_y$($\omega$)$^2$) (see Supporting Information for detailed matrix representations and calculations). The measured intensity $\textbf{\textit{I}}$$_{2\omega}$ is proportional to $|$$\textbf{\textit{E}}$$_x$($\omega$)$^2$+$\textbf{\textit{E}}$$_y$($\omega$)$^2$$|$$^2$ = $|$$\textbf{\textit{E}}$$_x$($\omega$)$|$$^4$ + $|$$\textbf{\textit{E}}$$_y$($\omega$)$|$$^4$ + 2$|$$\textbf{\textit{E}}$$_x$($\omega$)$|$$^2$$|$$\textbf{\textit{E}}$$_y$($\omega$)$|$$^2$. From the spatially integrated $|$$\textbf{\textit{E}}$$_x$($\omega$)$|$$^4$, $|$$\textbf{\textit{E}}$$_y$($\omega$)$|$$^4$, $|$$\textbf{\textit{E}}$$_x$($\omega$)$|$$^2$, and $|$$\textbf{\textit{E}}$$_y$($\omega$)$|$$^2$ values at the sample plane for the 35$^{\circ}$ tilted and surface normal oriented tips (Fig. 5b), we can estimate the tip-enhanced SHG intensity ratio of $\textbf{\textit{I}}$$^{35^\circ}_{2\omega, \textup{YMnO$_3$}}$/$\textbf{\textit{I}}$$^{90^\circ}_{2\omega, \textup{YMnO$_3$}}$ $\sim$ 27.
Fig. 5c and d show the resulting tip-enhanced SHG images under tilted tip ($\theta$$\rm{_{tip}}$ = 35$^{\circ}$) and surface normal tip ($\theta$$\rm{_{tip}}$ = 90$^{\circ}$) configuration. We observe a high contrast image of the domains only with the tilted tip. The details of contrast are due to the interference between the tip-enhanced SHG from a single domain and residual far-field SHG from multiple domains giving rise to a local phase-sensitive signal [@neacsu2009]. From that image, we can obtain the corresponding ferroelectric domain map exhibiting an alternating ferroelectric polarization pattern as expected for this crystallographic orientation (Fig. 5e). In addition, we observe the 3-fold symmetric vortices of the domains (red boxes) as expected for hexagonal manganites [@chae2012; @jungk2010] which provides information for the understanding of topological behaviors of ferroics.
In summary, the conventional surface normal oriented tip geometry in tip-enhanced near-field microscopy gives limited polarization control in both the intrinsic far-field excitation and the extrinsic near-field nano-optical response. Furthermore, for surface normal tip orientation, the antenna mode driven into a semi-infinite tip structure results in reduced field enhancement due to overdamping, which gives rise to reduced efficiency for both the *in-plane* and *out-of-plane* nano-optical response. Our work presents a simple but powerful solution to control the vector-field of a nano-optical antenna-tip. We show that the optical field confinement can be systematically controlled by tuning the tip orientation angle with respect to the sample surface, to enhance the *in-plane* optical field (***E***$_x$) confinement for the investigation of 2D materials. Surprisingly, rather than an associated decrease in *out-of-plane* sensitivity with increasing tilt angle, the *out-of-plane* optical field (***E***$_z$) is also enhanced, with an even larger enhancement factor than ***E***$_x$. We find that at an optimized angle near 35$^{\circ}$, with details depending on tip material, sample, and excitation wavelength, the broken axial symmetry provides a more sensitive nano-probe than the conventional near-field microscopy tip for all optical modalities and any sample. The vector-field controllability of the plasmonic antenna-tip not only allows probing selected polarization components of a sample by simply changing the tilt angle, but the strongly confined vector-field also gives access to anomalous nanoscale light-matter interactions such as exciton-plasmon coupling [@park2017dark], electron-phonon coupling [@jin2016], and strong coupling [@chikkaraddy2016] in a range of photoactive molecules and quantum materials.\
[**Acknowledgements**]{}
The authors would like to thank Joanna M. Atkin for insightful discussions. We thank Xiaobo Yin and Manfred Fiebig for providing the MoS$_2$ sample and the YMnO$_3$ sample, respectively. We acknowledge funding from the U.S. Department of Energy, Office of Basic Sciences, Division of Material Sciences and Engineering, under Award No. DE-SC0008807. We also acknowledge support provided by the Center for Experiments on Quantum Materials (CEQM) of the University of Colorado.\
[**Competing financial interests**]{}
The authors declare no competing financial interests.
---
abstract: 'In-situ ${\gamma}$-ray measurements were performed using a portable high purity germanium spectrometer in Hall-C at the second phase of the China Jinping Underground Laboratory (CJPL-II) to characterise the environmental radioactivity background below 3 MeV and provide ambient ${\gamma}$-ray background parameters for the next generation of the China Dark Matter Experiment (CDEX). The integral count rate of the spectrum was 46.8 cps in the energy range of 60 to 2700 keV. Detection efficiencies of the spectrometer corresponding to the concrete walls and the surrounding air were obtained from numerical calculation and Monte Carlo simulation, respectively. The radioactivity concentrations of the walls of Hall-C were calculated to be ${6.8\pm1.5}$ Bq/kg for ${^{238}}$U, ${5.4\pm0.6}$ Bq/kg for ${^{232}}$Th, and ${81.9\pm14.4}$ Bq/kg for ${^{40}}$K. Based on the measurement results, the expected background rates from these primordial radionuclides in the future CDEX experiment were simulated in units of counts per keV per ton per year (cpkty) for the energy ranges of 2 to 4 keV and around 2 MeV. The total background level from primordial radionuclides with decay products in secular equilibrium is ${7.0 \times 10^{-2}}$ and ${3.1 \times 10^{-3}}$ cpkty for the energy ranges of 2 to 4 keV and around 2 MeV, respectively.'
address:
- 'Key Laboratory of Particle and Radiation Imaging (Ministry of Education) and Department of Engineering Physics, Tsinghua University, Beijing 100084'
- 'College of Nuclear Science and Technology, Beijing Normal University, Beijing 100875'
author:
- 'H. Ma'
- 'Z. She'
- 'W. H. Zeng'
- 'Z. Zeng'
- 'M. K. Jing'
- 'Q. Yue'
- 'J. P. Cheng'
- 'J. L. Li'
- 'H. Zhang'
bibliography:
- './CJPL.bib'
title: 'In-situ gamma-ray background measurements for next generation CDEX experiment in the China Jinping Underground Laboratory'
---
\[orcid=0000-0003-1243-7675\]
In-situ ${\gamma}$-ray measurements, Environmental radioactivity, Underground laboratory, Rare event physics, CJPL
Introduction
============
####
The environmental background in underground laboratories mainly includes cosmic muons, muon-induced particles, ${\gamma}$ rays from rocks and concrete, and neutrons from (${\alpha}$, n) reactions and spontaneous fission of radionuclides in the surrounding materials. Since cosmic rays and cosmogenic radioactivity are drastically reduced by the thick rock overburden acting as shielding [@Mei2006], underground laboratories [@Votano2012; @Smith2012; @Akerib2004; @Ianni2016; @Gostilo2020] are favorable sites for rare event searching experiments requiring an ultra-low background environment. The muon and neutron backgrounds in underground laboratories have been studied comprehensively in the literature [@Chazal1998; @Robinson2003; @Wu2013; @Abgrall2017; @Hu2016].
####
The China Jinping Underground Laboratory (CJPL) is located in a traffic tunnel under Jinping Mountain in southwest China [@Cheng2017], with about 2400 m of vertical rock overburden. CJPL-I has been in normal operation and has provided a low-background environment for two dark matter experiments, CDEX [@Yue2014; @Jiang2018; @Yang2018; @Yang2019; @Liu2019; @She2020] and PandaX [@Cui2017], since Dec. 2010. The second phase of CJPL (CJPL-II) was proposed to meet the growing space demands of rare event searching experiments. The construction of CJPL-II started in 2014 and the cavern excavation was completed in Jun. 2017. For better control of the background radioactivity of CJPL-II, raw materials of concrete and other construction materials were screened and selected with a low-background ${\gamma}$-ray high purity germanium (HPGe) spectrometer named GeTHU [@Zeng2014] before being used.
####
In this paper, in-situ ${\gamma}$-ray HPGe spectrometry was applied to measure the environmental radioactivity of the walls of Hall-C of CJPL-II and to validate the results of the background control measures with GeTHU. This method is widely used for environmental ${\gamma}$-ray measurements in underground laboratories because it requires no sample preparation [@Malczewski2012; @Malczewski2013; @Zeng2014Environmental]. The portable HPGe spectrometer was calibrated to obtain its angular response and the detection efficiency for the walls of Hall-C based on the Beck formula [@Beck1972]. Since ${^{222}}$Rn and its daughters in the air induce the main background for the ${\gamma}$-ray measurement, we employed Monte Carlo simulations with GEANT4 [@Agostinelli2003] to eliminate their contributions accordingly. The next section describes the setup of the in-situ measurements in CJPL-II and the derivation of the detection efficiencies in detail. The measured spectrum and radioactivity concentrations are presented and discussed in the third section. The last section presents the expected background level induced by the primordial radionuclides in the concrete walls for the future CDEX experimental facility.
Experiment and methods
======================
In-situ Measurements
--------------------
####
The portable coaxial HPGe detector with 27% relative detection efficiency, manufactured by ORTEC, was used for the in-situ ${\gamma}$-ray measurements of CJPL-II, and its location during the measurements in Hall-C is shown in Fig. \[CJPL-II\]. The level of radon, emanating naturally from the concrete and rocks, was measured with an AlphaGUARD radon monitor produced by Saphymo GmbH, which was placed next to the HPGe detector. The dimensions of Hall-C were obtained with a laser range finder.
####
The layout of the CJPL-II tunnels and experimental halls is illustrated in Fig. \[CJPL-II\]. CJPL-II has four experimental halls, and each hall is divided into two sub-halls with numbers as postfixes. The China Dark Matter Experiment (CDEX), with the scientific goal of searching for light dark matter, currently operates a germanium detector array in CJPL-I. Since the future CDEX experiment will be located in Hall-C [@Ma2017], and the dimensions of the four experimental halls and their construction materials are almost the same, the in-situ measurements in this work were carried out in Aug. 2019 in Hall-C of CJPL-II.
Detector calibration
--------------------
####
The relationship between the rates of ${\gamma}$-ray peaks and the radionuclide activities can be described as Eq. \[Beck\] and the numerical calculations are applied to determine the detection efficiency for different ${\gamma}$-ray peaks [@Beck1972]. $$\label{Beck}
\frac{N_f}{A} = f(E, \theta) = \frac{N_f}{N_0} \times \frac{N_0}{\phi} \times \frac{\phi}{A} ,$$ where ${N_f}$ is the rate of the ${\gamma}$-ray peak and ${A}$ is the specific activity of the radionuclide (with the unit of Bq ${\cdot}$ kg${^{-1}}$ for the concrete). ${f(E,\theta)}$ is the coefficient representing the angular response factor multiplied by the effective front area, and ${\theta}$ is the zenith angle between the location of the radionuclides and the germanium detector. ${\phi / A}$ is the ${\gamma}$-ray flux received at the detector’s location per unit specific activity of the nuclides. ${N_0 / \phi}$ is the detector response to the incident ${\gamma}$ rays at ${\ang{0}}$ (called the effective front area) while ${N_f / N_0}$ is the detector angular response. In this work, the last two were measured in the angular calibration experiment.
####
As shown in Eq. \[integration\], we integrate the detection efficiencies over the dimensions of Hall-C, i.e., the six surfaces of Hall-C and their geometric boundaries are set as the upper or lower bounds of the integration. To account for the penetration of ${\gamma}$ rays, the upper limit of the integral related to the concrete thickness is set to the actual thickness (0.2 m). Following procedures similar to those described in Ref. [@Zeng2014Environmental], the numerical conversion factors are derived. $$\label{integration}
\begin{aligned}
F_{loc} = K \int_{z_{min}}^{z_{max}}\int_{y_{min}}^{y_{max}}\int_{x_{min}}^{x_{max}} f(E, \theta) g_{loc}(x,y,z) dx dy dz,
\end{aligned}$$ where ${F_{loc}}$ is the numerical conversion factor for the different surfaces of Hall-C. ${K}$ is the factor converting fitting parameters from the calibrations into the experimental measurements. ${(x,y,z)_{min}}$ and ${(x,y,z)_{max}}$ are the lower and upper bounds of the integral, which are related to the different surfaces of Hall-C. ${g_{loc}(x,y,z)}$ is the coefficient representing the attenuation effect from the concrete and air.
####
The angular correction factors and the effective front areas are determined through an angular calibration experiment with a mixed calibration source containing ${^{241}}$Am, ${^{137}}$Cs, ${^{60}}$Co and ${^{152}}$Eu. The ${f(E,\theta)}$ surface is fitted by Eq. \[fFunction\]. The relative biases between the fitting curve and the experimental data points are treated as the systematic uncertainties of the factors. $$\begin{aligned}
f(E, \theta)& = A_{EF} \cdot e^{Q_2(\ln{E})^2+Q_1(\ln{E})+Q_0} \cdot (P_5 \theta^5 \\
& + P_4 \theta^4 + P_3 \theta^3 + P_2 \theta^2 +P_1 \theta +P_0),
\end{aligned}
\label{fFunction}$$ where ${A_{EF}}$ represents the effective front area, and ${Q}$ or ${P}$ with numerical subscripts are parameters determined by fitting, while ${\theta}$ is the zenith angle and ${E}$ is the energy of the ${\gamma}$ ray.
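A numerical sketch of Eqs. \[integration\] and \[fFunction\] is given below. All constants (effective front area, ${Q}$ and ${P}$ parameters, wall size, attenuation coefficient, detector position) are hypothetical placeholders rather than the fitted calibration values, and the attenuation kernel ${g}$ is reduced to a bare exponential along the slant path inside the concrete times a ${1/(4\pi r^2)}$ flux factor:

```python
import numpy as np

# Hypothetical calibration constants (placeholders, not the fitted values):
A_EF = 15.0                                  # effective front area
Q2, Q1, Q0 = -0.08, 0.6, -1.5                # log-energy response, Eq. (3)
P_COEF = (0.0, 0.0, 0.0, -0.05, 0.02, 1.0)   # P5..P0: mild angular falloff

def f(E, theta):
    """Eq. (3): f(E, theta) = A_EF * exp(Q2 lnE^2 + Q1 lnE + Q0) * poly(theta)."""
    lnE = np.log(E)
    poly = sum(p * theta ** (5 - i) for i, p in enumerate(P_COEF))
    return A_EF * np.exp(Q2 * lnE ** 2 + Q1 * lnE + Q0) * poly

def wall_factor(E, n=24, mu=5.0, det=np.array([0.0, 0.0, 2.0])):
    """Midpoint Riemann sum for Eq. (2) over one 10 m x 10 m x 0.2 m wall
    (z in [-0.2, 0] m) seen by a detector at `det`; mu is an assumed
    effective attenuation coefficient (1/m) for the concrete."""
    xs = np.linspace(-5 + 5 / n, 5 - 5 / n, n)       # midpoints in x and y
    zs = np.linspace(-0.2 + 0.1 / n, -0.1 / n, n)    # midpoints in depth
    dV = (10.0 / n) ** 2 * (0.2 / n)
    F = 0.0
    for x in xs:
        for y in xs:
            for z in zs:
                d = det - np.array([x, y, z])
                r = np.linalg.norm(d)
                theta = np.arccos(d[2] / r)          # zenith angle at the detector
                slant = abs(z) * r / d[2]            # path length inside concrete
                g = np.exp(-mu * slant) / (4.0 * np.pi * r ** 2)
                F += f(E, theta) * g * dV
    return F

print(f"F_wall(609.3 keV) ~ {wall_factor(609.3):.3e} (toy units)")
```

The real ${g_{loc}}$ additionally accounts for air attenuation and the measured build-up in the concrete, and the integral is evaluated for all six surfaces of the hall.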
####
Substituting ${f(E,\theta)}$ into Eq. \[integration\], one obtains the numerical conversion factors for a certain wall and a certain ${\gamma}$-ray energy. The angular response of the detector fluctuates within a relatively small range between ${\ang{0}}$ and ${\ang{150}}$ but decreases rapidly beyond ${\ang{160}}$, since most ${\gamma}$ rays from the back of the detector are shielded by the liquid nitrogen dewar and the cold finger.
Results and discussion
======================
Spectrum analysis
-----------------
####
The energy spectrum, with an integral rate of 46.8 cps between 60 and 2700 keV, was measured in experimental Hall-C with the in-situ ${\gamma}$-ray spectrometer, as shown in Fig. \[spectrum\]. The ${\gamma}$-ray peaks from primordial radionuclides are labeled.
####
No artificial radionuclides are found in this spectrum. From Fig. \[spectrum\](a), the ${\gamma}$-ray peaks at 295 keV and 352 keV are stronger than the 186 keV ${\gamma}$-ray peak emitted by ${^{226}}$Ra. However, this ratio reverses when measuring the concrete samples, so the extra contribution to these ${\gamma}$-ray peaks should come from the ${^{222}}$Rn spread throughout Hall-C, which is confirmed by the ${^{222}}$Rn concentration measured with the radon monitor. The contribution from ${^{222}}$Rn must be removed before calculating the concentrations of primordial radionuclides.
####
To extract the integral rates for different ${\gamma}$-ray peaks, the energy spectrum in certain energy range is fitted using a single Gaussian peak plus a linear continuum. The energy resolution of these Gaussian peaks can be fitted with $$\centering
\label{FWHM}
FWHM(E) = 0.039\times \sqrt{E} + 0.47,$$ where the ${FWHM}$ is the Full Width Half Maximum of a Gaussian peak while ${E}$ is the energy. Both of them are in unit of keV.
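A minimal sketch of this peak-fitting step, applied to a synthetic 609.3 keV peak whose width follows Eq. \[FWHM\] (the data, peak area, and continuum parameters are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def fwhm(E):
    """Eq. (5): FWHM in keV as a function of energy in keV."""
    return 0.039 * np.sqrt(E) + 0.47

def peak_model(E, area, E0, sigma, a, b):
    """Single Gaussian peak plus a linear continuum, as used for the fits."""
    gauss = area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((E - E0) / sigma) ** 2)
    return gauss + a * E + b

# Synthetic spectrum around 609.3 keV (illustrative, not the measured data).
rng = np.random.default_rng(1)
E = np.arange(600.0, 619.0, 0.25)
sigma_true = fwhm(609.3) / 2.355          # FWHM -> Gaussian sigma
counts = rng.poisson(peak_model(E, 500.0, 609.3, sigma_true, -0.02, 40.0))

popt, pcov = curve_fit(peak_model, E, counts, p0=[400.0, 609.0, 0.6, 0.0, 30.0])
print(f"fitted area = {popt[0]:.0f} counts, centroid = {popt[1]:.2f} keV, "
      f"FWHM = {2.355 * abs(popt[2]):.2f} keV")
```

The fitted `area` (net peak counts) divided by the live time gives the integral rate ${R}$ used below.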
####
The detection efficiencies of the spectrometer placed in Hall-C are listed in Table \[detectionEff\]. With no ventilation system yet in CJPL-II, the radon concentration was measured to be ${214.5\pm25.7}$ Bq/m${^3}$, contributing a major background to the ${\gamma}$-ray measurement. Thus, the detection efficiencies for its characteristic ${\gamma}$-ray peaks were simulated with Geant4.10.5.p01, assuming that ${^{222}}$Rn, together with its short-lived daughters in equilibrium, is distributed uniformly in the air. Since the volume of Hall-C is more than 20000 m${^3}$, the simulation adopted biasing techniques, Russian roulette and splitting, to improve the statistics in this work. A photon enters this game when it arrives at a geometry boundary: if the cosine between the photon momentum and the vector pointing to the detector is positive, the photon is split; otherwise, it may be killed according to the outcome of the Russian roulette.
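A toy version of this boundary game is sketched below (the function name, split multiplicity, and survival probability are our own choices; the real implementation lives inside the Geant4 application). The key property is that the statistical weight is conserved on average:

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_at_boundary(momentum, to_detector, weight, n_split=4, p_survive=0.25):
    """Splitting / Russian-roulette decision at a geometry boundary: a photon
    heading toward the detector is split into n_split copies of reduced
    weight; otherwise it survives the roulette with probability p_survive
    and a boosted weight, or is killed. Returns the list of copy weights."""
    cos = float(np.dot(momentum, to_detector) /
                (np.linalg.norm(momentum) * np.linalg.norm(to_detector)))
    if cos > 0.0:                              # toward the detector: split
        return [weight / n_split] * n_split
    if rng.random() < p_survive:               # roulette: survive, boost weight
        return [weight / p_survive]
    return []                                  # killed

toward = bias_at_boundary(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 5.0]), 1.0)
print(len(toward), sum(toward))
```

Splitting returns `n_split` copies carrying the original total weight, while the expected weight returned by the roulette branch is `p_survive * (weight / p_survive) = weight`, so the estimate stays unbiased.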
-------- -------------------------- -------------------------------
Energy Concrete Radon
keV ${10^{-3}}$(cts/(Bq/kg)) ${10^{-4}}$(cts/(Bq/m${^3}$))
295.4 22.9 7.9
351.9 43.7 11.7
609.3 47.6 2.8
1120.3 11.3 -
583.1 31.6 -
911.1 23.5 -
969.1 13.6 -
1460.8 6.5 -
-------- -------------------------- -------------------------------
: Detection efficiencies for various ${\gamma}$ rays and the characteristic ${\gamma}$-ray peaks from ${^{222}}$Rn and its daughters in the Hall-C of CJPL-II.[]{data-label="detectionEff"}
Ambient radionuclide concentration
----------------------------------
####
The concentrations of primordial radionuclides are calculated following the Eq. \[concentration\] and listed in Table \[results\] with measured results of selected concrete and rock samples for comparison. The background contributions of detector itself are not subtracted. $$\label{concentration}
C = \frac{R - C_{Rn}V\varepsilon}{F_{loc}},$$ where ${C}$ is the concentration of a certain radionuclide, and ${R}$ is the integral rate of a specific ${\gamma}$-ray peak. ${C_{Rn}}$ is the concentration of ${^{222}}$Rn in Hall-C. ${V}$ is the volume of Hall-C, while ${\varepsilon}$ is the radon detection efficiency at the corresponding ${\gamma}$-ray energy.
####
The uncertainties were estimated by error propagation according to Eq. \[concentration\]. The uncertainties of the peak areas include the counting statistics and fitting uncertainties, while the uncertainties of the radon concentrations were provided directly by the AlphaGUARD. Although the uncertainty of the experimental hall’s volume is hard to evaluate due to the uneven walls of the hall, its relative uncertainty is conservatively assumed to be 10%. The uncertainties of the radon detection efficiencies are neglected.
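A small sketch of Eq. \[concentration\] with this first-order error propagation; the input numbers below are purely illustrative, not the measured values:

```python
import numpy as np

def concentration(R, sR, C_Rn, sC_Rn, V, eps, F_loc, rel_sV=0.10):
    """Eq. (6) with first-order error propagation. The 10% relative volume
    uncertainty follows the conservative assumption in the text; the
    radon detection-efficiency uncertainty is neglected, as above."""
    C = (R - C_Rn * V * eps) / F_loc
    var = (sR ** 2
           + (V * eps * sC_Rn) ** 2            # radon-concentration term
           + (C_Rn * eps * rel_sV * V) ** 2    # hall-volume term
           ) / F_loc ** 2
    return C, np.sqrt(var)

# Purely illustrative inputs (hypothetical units and magnitudes).
C, sC = concentration(R=0.60, sR=0.03, C_Rn=214.5, sC_Rn=25.7,
                      V=1.0, eps=1.2e-3, F_loc=4.4e-2)
print(f"C = {C:.1f} +/- {sC:.1f} Bq/kg")
```

Each term in `var` is the square of one partial derivative of Eq. \[concentration\] times the corresponding input uncertainty.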
------------ -------- ----------------- ---------------- -----------------
Nuclides Energy In-situ
(keV) measurements Concrete Rock
(Bq/kg) (Bq/kg) (Bq/kg)
${^{214}}$Pb 295.4 ${3.8\pm3.6}$ ${4.9\pm0.07}$ ${1.35\pm0.03}$
${^{214}}$Pb 351.9 ${4.8\pm3.1}$ ${5.1\pm0.03}$ ${1.44\pm0.02}$
${^{214}}$Bi 609.3 ${7.1\pm2.7}$ ${4.1\pm0.03}$ ${1.23\pm0.02}$
${^{214}}$Bi 1120.3 ${11.5\pm2.6}$ ${4.2\pm0.07}$ ${1.27\pm0.05}$
${^{228}}$Ac 911.1 ${5.4\pm1.0}$ ${3.3\pm0.04}$ ${1.45\pm0.04}$
${^{228}}$Ac 969.1 ${5.6\pm1.2}$ ${3.3\pm0.11}$ ${1.41\pm0.08}$
${^{208}}$Tl 583.1 ${5.1\pm0.9}$ ${2.9\pm0.03}$ ${1.2\pm0.03}$
${^{40}}$K 1460.8 ${81.9\pm14.3}$ ${39.1\pm0.4}$ ${17.3\pm0.3}$
------------ -------- ----------------- ---------------- -----------------
: The comparison between in-situ measurements and sample measurements by a germanium spectrometer (the uncertainties include the statistic uncertainties and uncertainties introduced by the Gaussian fitting process). []{data-label="results"}
####
The concentrations of ${^{232}}$Th and ${^{238}}$U in CJPL-II, together with the measured results of other underground laboratories, are listed in Table \[comparison\] for comparison. The contamination is lower than that of CJPL-I because of the stricter material screening and selection during the construction.
####
Comparing the in-situ measurement results with the sample measurements, the concentration of ${^{40}}$K differs significantly from that of the concrete sample. The in-situ measurements assess the average concentration across the hall, while the concrete samples are just pieces of the surrounding concrete walls. The measured concentrations of various concrete samples differ greatly, especially for ${^{40}}$K (4.0 - 157.0 Bq/kg).
--------------------------------- ---------------- --------------- -------------- --
Underground Lab ${^{238}}$U ${^{232}}$Th ${^{40}}$K
(Bq/kg) (Bq/kg) (Bq/kg)
Gran Sasso [@Malczewski2013] ${9.5\pm0.3}$ ${3.7\pm0.2}$ ${70\pm2}$
Modane [@Malczewski2012] ${22.8\pm0.7}$ ${6.7\pm0.2}$ ${91\pm3}$
Boulby [@Malczewski2013] ${7.1\pm0.2}$ ${3.9\pm0.1}$ ${120\pm2}$
Sanford [@Akerib2020] ${29\pm15}$ ${13\pm3}$ ${220\pm60}$
CJPL-I [@Zeng2014Environmental] 18.0 7.6 36.7
CJPL-II ${6.8\pm1.5}$ ${5.4\pm0.6}$ ${81.9\pm14.4}$
(this work)
--------------------------------- ---------------- --------------- -------------- --
: The concentrations of primordial radionuclides in different underground laboratories.[]{data-label="comparison"}
Background assessment for CDEX-100
==================================
####
The next phase of the CDEX experiment, named CDEX-100, will operate a germanium detector array of about 100 kg to enlarge the target mass and study the background characteristics in a liquid nitrogen environment [@Ma2019]. The CDEX germanium detector array will be installed in a cryotank, whose volume is about 1725 m${^3}$, immersed in the LN${_2}$ at Hall-C of CJPL-II, and the 6.5 m-thick LN${_2}$ will act as passive shielding against ambient radioactivity.
####
Using the measured specific activities of primordial radionuclides as input, the background simulation for the CDEX-100 experiment was conducted with a dedicated Monte Carlo framework called Simulation and Analysis of Germanium Experiment (SAGE) based on Geant4.10.5.p01 [@Agostinelli2003]. The radionuclides are assumed to be uniformly distributed in the concrete wall close to the cryotank, as shown in Fig. \[simulation\]. The gap between the inner and outer tanks is filled with perlite as a heat insulator.
![The schematic illustration of the simulation geometry in this work. The concrete, the perlite heat insulator, and the liquid nitrogen are drawn in gray, orange, and blue, respectively, and the germanium detector array is superimposed as a screenshot from Geant4. The inner and outer containers are shown with thick black lines. All rectangles in this figure are cross-sectional views of cylinders.[]{data-label="simulation"}](./SimulationGeometry.pdf){width="\linewidth"}
####
Thanks to the 6.5 m-thick liquid nitrogen, ${\gamma}$ rays from primordial radionuclides can hardly penetrate to the detectors. A uniformly distributed source for each radionuclide at the inner surface of the concrete is assumed in the simulation, rather than the actual volume source, to achieve a higher simulation efficiency. The emission efficiency of ${\gamma}$ rays out of the concrete wall was simulated separately and used to determine the surface source activity. The aforementioned biasing techniques were implemented to reduce the statistical uncertainty.
####
The regions of interest (ROIs) in CDEX-100 are the energy range from 2 to 4 keV for light dark matter detection and the range around 2 MeV (2014 to 2064 keV for ${^{76}}$Ge neutrinoless double beta decay). The background rates in the two ROIs are simulated for all germanium detectors without any vetoes or pulse shape discrimination. For ${^{238}}$U, the background rates are ${ 6.0\times 10^{-3}}$ and ${3.5 \times 10^{-4}}$ cpkty for these energy ranges, while they are ${6.5 \times 10^{-2}}$ and ${2.6 \times 10^{-3}}$ cpkty for ${^{232}}$Th. Due to the lower Q value, ${^{40}}$K only contributes ${4.7 \times 10^{-5}}$ cpkty to the range of 2 to 4 keV.
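Summing the per-nuclide rates reproduces the total primordial-radionuclide background quoted in the conclusion (a trivial arithmetic check on the numbers above):

```python
# Simulated per-nuclide background rates (cpkty) quoted in the text.
roi_dm   = {"U-238": 6.0e-3, "Th-232": 6.5e-2, "K-40": 4.7e-5}  # 2-4 keV
roi_0nbb = {"U-238": 3.5e-4, "Th-232": 2.6e-3}                  # 2014-2064 keV

total_dm, total_0nbb = sum(roi_dm.values()), sum(roi_0nbb.values())
print(f"2-4 keV total: {total_dm:.2e} cpkty; 2 MeV ROI total: {total_0nbb:.2e} cpkty")
```

The sums are ${7.1 \times 10^{-2}}$ and about ${3.0 \times 10^{-3}}$ cpkty, dominated in both ROIs by ${^{232}}$Th.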
Conclusion
==========
CJPL-II is a deep underground laboratory designed for future large-scale rare-event search experiments. We have applied in-situ ${\gamma}$-ray HPGe spectrometry to measure the ${\gamma}$-ray background in Hall C of CJPL-II. Numerical calculations based on the calibrated angular responses, together with Monte Carlo simulations, were used to obtain the detection efficiencies for ${\gamma}$ rays from the walls and the surrounding air. After subtracting the radon contribution, the radioactivity concentrations in the concrete are characterized as ${6.8\pm1.5}$ Bq/kg for ${^{238}}$U, ${5.4\pm0.6}$ Bq/kg for ${^{232}}$Th, and ${81.9\pm14.4}$ Bq/kg for ${^{40}}$K, competitive with other underground laboratories. The background level from primordial radionuclides was simulated for the future CDEX-100 experiment: the rates are ${7.1 \times 10^{-2}}$ and ${3.0 \times 10^{-3}}$ cpkty in the ROIs of 2 to 4 keV and 2014 to 2064 keV, respectively. Other intrinsic backgrounds from the experimental setup will be studied in future work to complete the background budget of the CDEX-100 experiment.
Acknowledgement {#acknowledgement .unnumbered}
===============
We gratefully acknowledge support from the National Key R&D Program of China (No. 2017YFA0402201), the National Natural Science Foundation of China (Nos. 11725522 & 11675088), and the Tsinghua University Initiative Scientific Research Program (No. 20197050007).
---
abstract: 'We consider quasinormal modes of the massive Dirac field in the background of a Schwarzschild-Tangherlini black hole. Different dimensions of the spacetime are considered, from $d=4$ to $d=9$. The quasinormal modes are calculated using two independent methods: WKB and continued fraction. We obtain the spectrum of quasinormal modes for different values of the overtone number and of the angular quantum number. An analytical approximation of the spectrum, valid for large values of the angular quantum number and of the mass, is derived. Although we do not find unstable modes in the spectrum, we show that for large values of the mass the quasinormal modes can become very slowly damped, giving rise to quasistationary perturbations.'
author:
- 'Jose Luis Blázquez-Salcedo'
- Christian Knoll
bibliography:
- 'QNMDiracTang.bib'
title: 'Slowly damped quasinormal modes of the massive Dirac field in $d$-dimensional Tangherlini spacetime'
---
Introduction
============
The properties of higher dimensional black holes have attracted much interest, since several approaches to quantum gravity, such as string theory, brane world models, and the AdS/CFT correspondence, propose the existence of more than four spacetime dimensions [@Emparan:2008eg; @2012bhhd.book.....H]. Of special interest in this context is the interaction of black holes with several types of matter fields, in particular with test fields such as scalars and fermionic fields (Dirac spinors).
The interaction of such test fields with the background of a black hole has been widely investigated in the literature, in particular through the quasinormal mode decomposition of a field perturbation. This analysis allows one to calculate the resonant frequencies and damping times that govern the emission of radiation in a curved spacetime, in particular during the ringdown phase of a time-dependent field perturbation. The quasinormal mode analysis also allows one to test the mode stability of solutions of the Einstein equations under small perturbations. In addition, the spectrum has applications in the AdS/CFT correspondence, since the modes are related to the poles of the correlation functions. For a comprehensive review see for example [@Konoplya:2011qq; @Ferrari:2007dd; @Berti:2009kk].
Focusing on the fermionic fields, the analysis of the Dirac equation benefits from the fact that it is known to be separable into radial and angular parts in a number of geometries describing rotating black holes. The separability has been related to hidden symmetries of the metric background [@Frolov:2017kze; @PhysRevD.19.1093], and introduces an angular operator with its corresponding quantum number. This is well known for the Kerr black hole [@10.2307/79011; @1984RSPSA.391...27C], and it also holds in the presence of charge [@Mukhopadhyay:2000ss] and cosmological constant [@Belgiorno:2008hk]. In higher dimensions, the separability has been studied for the 5D Myers-Perry black hole [@Wu:2008df] and in the more general case of the higher dimensional Kerr-NUT-dS black hole [@Oota:2007vx].
Regarding hidden symmetries, the Killing-Yano tensors for the most general charged rotating geometries were constructed in [@Chervonyi:2015ima]. The separation of the Maxwell equations in the Myers-Perry geometry using Killing-Yano tensors was recently achieved in [@Lunin:2017drx].
Naively one may be tempted to think that a Dirac spinor could be excited to form a stable configuration around a black hole. Such a configuration would mimic an atom, with the event horizon surrounded by a stationary fermionic field [@0143-0807-28-3-007]. However it has been shown that, under some generic assumptions, there do not exist stable solutions to the Einstein-Dirac equations with fermionic hair, even when supplementing the system with other fields as well [@Finster:1999mn; @Finster:1998ak; @Finster:1999ry; @Finster:2002vk]. The spinor field either falls into the black hole or vanishes at infinity (being radiated away from the horizon). To further cement this point, there are various proofs for the nonexistence of purely real frequencies in the spectrum of quasinormal modes of the Dirac field. For the Schwarzschild spacetime see [@Batic:2017qcr; @Lasenby:2002mc] and for the five dimensional Myers-Perry spacetime see [@Daude:2012wq]. However there are known examples of exotic stable configurations, which have been proposed as dark matter contributors [@Dokuchaev:2013kra; @Dokuchaev:2014vda]. [Let us note here that, recently, Dirac stars (the fermionic equivalent of boson and Proca stars, with no horizon) have been constructed, using a pair of fermionic fields instead of a single field [@Herdeiro:2017fhv].]{}
The nonexistence of stable solutions to the Einstein-Dirac equations with fermionic hair and the lack of superradiance for spinors [@Brito:2015oca; @Maeda:1976tm; @Iyer:1978du] indicate that, in terms of the quasinormal mode analysis, the perturbations in the Dirac field will decay with time. The massless quasinormal mode spectrum of Dirac spinors in the geometry of the Schwarzschild black hole was investigated in [@Jing:2005dt] using the continued fraction and Hill-determinant methods. They found that the fundamental quasinormal modes become evenly spaced for large angular quantum number. The quasinormal modes also become evenly spaced as the overtone number is increased. It was also shown that the angular quantum number affected the real part of the frequencies but had almost no influence on the imaginary part. The massive modes were investigated in [@Cho:2003qe] using a WKB approach, although only small mass values of the field were studied. It was observed that when the mass is increased, the real part of the quasinormal frequencies increases as well, but interestingly the absolute value of the imaginary part decreases. This indicates that massive Dirac fields decay more slowly in time than in the massless case.
The massless modes in the higher dimensional Schwarzschild/Tangherlini spacetime were investigated in [@Cho:2007zi] using a WKB approach. In this study it is shown that for higher dimensions the damping time increases. Using the Pöschl-Teller potential approximation, the modes of the massless field were also studied in the Reissner-Nordström-de Sitter black hole [@Jing:2003wq] and in the Schwarzschild-de Sitter black hole [@Zhidenko:2003wq] (here also the WKB approximation up to sixth order was used). In both cases an increase in the cosmological constant led to a decrease in the absolute value of the imaginary part of the quasinormal frequencies, making the modes more slowly damped. For Schwarzschild-de Sitter it was mentioned that an increase in the cosmological constant also led to a decrease of the real part of the frequencies. For the Reissner-Nordström-de Sitter black hole the absolute value of the frequency decreased when increasing the angular quantum number, but became larger when increasing the charge of the black hole and the overtone number. An analytical investigation of the asymptotic spectrum (meaning high overtone number) of a Dirac field in the geometry of the Schwarzschild-AdS spacetime, with numerical checking of the results, was carried out in [@Arnold:2013gka]. The quasinormal modes of Weyl spinors in the BTZ black hole were calculated in [@Cardoso:2001hn].
Investigation of the quasinormal modes for the massive field in the geometry of the Kerr black hole was carried out in [@Dolan:2015eua]. One of the interesting results is that in the rapidly rotating case the decay rate of low-frequency co-rotating quasinormal modes is suppressed in the (bosonic) superradiant regime. The scattering of massive Dirac fields in the Schwarzschild black hole was investigated in [@0264-9381-15-10-018]. Analytical expressions for the phase shift in the scattering of massive fermions can be found for the Schwarzschild black hole in [@Cotaescu:2014jca] and for the Reissner-Nordström black hole in [@Cotaescu:2016aty]. Recently, analytical solutions were obtained in [@Sporea:2015wsa] describing quasinormal modes in the near-horizon regime.
As we mentioned above, it has been generally noted that increasing the mass of a field leads to longer lived modes. In particular, for scalar fields this was noted in [@Konoplya:2004wg; @0264-9381-9-4-012; @Zhidenko:2006rs] and for vector fields in [@Konoplya:2005hr]. In the case of vector fields, some quasinormal modes show the curious behaviour that the frequency decreases as the mass increases, eventually turning the mode into a pulse. For large enough values of the vector field mass these modes can cease to exist.
In this paper we will investigate the quasinormal modes of the Dirac field in the geometry of the $d$-dimensional Tangherlini spacetime. The focus will be on the behaviour of the field for large masses. We will see that the Dirac field indeed follows this general behaviour, with longer lived modes for larger values of the mass.
The structure of the paper will be as follows. In Section \[S1\], first we will introduce the conventions we use in the paper, and then we will derive the differential equations governing the radial part of the field, with a study of the asymptotic behaviour of the perturbation. In order to generate the quasinormal modes, we will use two independent methods: the method of continued fractions with the Nollert improvement [@Leaver:1985ax; @Nollert:1993zz] and the WKB method up to third order [@Iyer:1986np], presenting in both cases all the necessary equations in Section \[S2\]. In Section \[S3\] we present the results, where we start with the analysis for large angular quantum number. Here we combine the numerical methods with analytical results obtained in the limit of large angular quantum number and large fermionic mass. We also present results for $l=0$, studying the fundamental state and the first excitation. In Section \[S4\] we finish with some conclusions and an outlook.
Dirac equation in Tangherlini spacetime {#S1}
=======================================
Let us begin with a short note on conventions. Our sign convention for the metric of special relativity is $\eta \stackrel{*}{=} \mathrm{diag}[1,-1, \dots, -1]$. We will use the Einstein summation convention, always summing over the whole possible range for the indices when not otherwise stated. We will use greek letters for coordinate components of tensors and latin letters for components in the orthonormal frame, for example $\mathbf{g} = g_{\mu \nu} \mathbf d x^\mu \otimes \mathbf d x^\nu = \eta_{a b} \bm{\omega}^a \otimes \bm{\omega}^b$. To distinguish between components in the coordinates and in the orthonormal frame we will give components in the frame a hat, so $v_i$ is the $i$-th component of $\mathbf{v}$ in the coordinates and $v_{\hat i}$ is the $i$-th component of $\mathbf{v}$ in the orthonormal frame.
The metric of the $d$-dimensional Schwarzschild-Tangherlini spacetime is $$\begin{aligned}
\mathrm d s^2 = f(r) \, \mathrm d t^2 - \frac{1}{f(r)} \, \mathrm d r^2 - r^2 \, \mathrm d \Omega^2_{d-2} \, ,\end{aligned}$$ with $f(r) = 1-(\mu/r)^{d-3}$, $\mu$ being related to the mass of the black hole and $\mathrm d \Omega^2_{d-2}$ being the line element of the $d-2$ dimensional sphere. We use the vielbein $$\begin{aligned}
\bm{\omega}^{\hat 0} = \sqrt{f(r)} \, \mathbf d t \, , \; \bm{\omega}^{\hat 1} = \frac{1}{\sqrt{f(r)}} \, \mathbf d r \, , \; \bm{\omega}^{\widehat{i+1}} = r \, \bm{\omega}^{\hat i}_{d-2} \, ,\end{aligned}$$ with $\bm{\omega}^{\hat i}_{d-2}$ being a vielbein of the $d-2$ dimensional sphere and $i$ ranging from 1 to $d-2$. This allows us to write the Dirac equation $$\begin{aligned}
\mathcal{D} \Psi = && \left( \frac{\mathrm{i}}{\sqrt{f}} \gamma^{\hat 0} \partial_t + \mathrm{i} \sqrt{f} \gamma^{\hat 1} \left[ \partial_r + \frac{\mathrm d}{\mathrm d r} \ln r^{(d/2) - 1} f(r)^{1/4} \right] \right. \nonumber \\
&& \left. + \frac{\mathrm i}{r} \gamma^{\hat 0} \gamma^{\hat 1} \mathcal{K}_{d-2} - m \mathbb{E} \right) \Psi = 0 \, ,\end{aligned}$$ with $m$ being the mass of the Dirac field, $\mathcal{K}_{d-2}$ being the angular operator and $\mathbb{E}$ being the unit operator.
Since $[\mathcal{D},\mathcal{K}_{d-2}]=0$, one can separate the spinor as follows $$\begin{aligned}
\Psi = \frac{r}{r^{d/2} f(r)^{1/4}} \, \mathrm{e}^{- \mathrm{i} \omega t} \, \phi(r) \otimes \Theta_\kappa \, ,
\label{Psi_exp}\end{aligned}$$ with [@CAMPORESI19961] $$\begin{aligned}
\mathcal{K}_{d-2} \Theta_\kappa = \kappa \Theta_\kappa = \pm \left(l + \frac{d-2}{2} \right) \Theta_\kappa \, ,
\label{kappa_def}\end{aligned}$$ and $l \in \mathbb N_0$ being the angular quantum number. It is worth mentioning that, for a given value of $l$, $\kappa$ can be positive or negative, and we will see that in the case of massive fields, each sign results in a branch of modes that possess different properties.
In addition, from equation (\[Psi\_exp\]) we can see that we are focusing on a mode decomposition of the time dependent perturbation by introducing the eigenfrequency $\omega$. With this Ansatz the resulting differential equation for the radial part $\phi$ is $$\begin{aligned}
\left( \frac{\omega}{\sqrt{f}} \gamma^{\hat 0} + \mathrm{i} \sqrt{f} \gamma^{\hat 1} \frac{\mathrm d}{\mathrm d r} + \frac{\mathrm i \kappa}{r} \gamma^{\hat 0} \gamma^{\hat 1} - m \mathbb{E} \right) \phi = 0 \, .
\label{radial_eq_1}\end{aligned}$$
It is convenient to work with normalised quantities such as $$\begin{aligned}
x = r / \mu \, , \; \Omega = \mu \omega \, , \; \eta = \mu m \, ,\end{aligned}$$ which simplifies equation (\[radial\_eq\_1\]) to $$\begin{aligned}
\left( \frac{\Omega}{\sqrt{f}} \gamma^{\hat 0} + \mathrm{i} \sqrt{f} \gamma^{\hat 1} \frac{\mathrm d}{\mathrm d x} + \frac{\mathrm i \kappa}{x} \gamma^{\hat 0} \gamma^{\hat 1} - \eta \mathbb{E} \right) \phi = 0 \, .
\label{radial_eq_2}\end{aligned}$$ In addition, it can be convenient to change variables to the tortoise coordinate $z$, defined by the relation $$\begin{aligned}
\frac{\mathrm d}{\mathrm d z} = f \frac{\mathrm d}{\mathrm d x},\end{aligned}$$ which transforms equation (\[radial\_eq\_2\]) to $$\begin{aligned}
\left( \Omega \gamma^{\hat 0} + \mathrm{i} \gamma^{\hat 1} \frac{\mathrm d}{\mathrm d z} + \frac{\mathrm i \kappa \sqrt{f}}{x} \gamma^{\hat 0} \gamma^{\hat 1} - \eta \sqrt{f} \, \mathbb{E} \right) \phi = 0 \, . \label{eqn:diff.eq-tortoise}\end{aligned}$$
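The tortoise coordinate has no closed form for general $d$, but $z(x)$ can be obtained numerically by integrating $\mathrm d z = \mathrm d x / f(x)$. A minimal sketch (in units $\mu = 1$, using a composite Simpson rule, and cross-checked against the $d=4$ closed form $z = x + \ln(x-1) + \mathrm{const}$):

```python
import math

def f(x, d):
    """Metric function of the Tangherlini spacetime, in units mu = 1."""
    return 1.0 - x ** (-(d - 3))

def delta_z(x0, x1, d, n=2000):
    """z(x1) - z(x0) from dz/dx = 1/f(x), composite Simpson rule (n even)."""
    h = (x1 - x0) / n
    s = 1.0 / f(x0, d) + 1.0 / f(x1, d)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) / f(x0 + k * h, d)
    return s * h / 3.0

# Cross-check against the d = 4 closed form z(x) = x + ln(x - 1) + const.
dz_num = delta_z(2.0, 5.0, d=4)
dz_exact = (5.0 + math.log(4.0)) - (2.0 + math.log(1.0))
print(dz_num, dz_exact)   # both ~4.3863
```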
In order to study the quasinormal modes of this system, we need to set a number of physically relevant boundary conditions at the horizon and at spatial infinity. It is convenient to express these conditions in terms of the probability current $\mathbf j$, which has to be conserved, $\mathrm d \ast \mathbf j = 0$. Let us choose as the spacetime volume $\cal{V}$ the spatial hypersurface $V(t)$ orthogonal to the Killing vector $\partial_t$ outside the horizon $\cal{H}$ of the black hole, an $\epsilon$-distance away from the horizon, translated for a time $\Delta t$ from an initial time $t_0$. Then we can express the conservation law as $$\begin{aligned}
0 = \int_{\cal{V}} \mathrm d \ast \mathbf j = \int_{\partial \cal{V}} \ast \mathbf j =&& \underbrace{\left( \int_{V(t=t_0 + \Delta t)} - \int_{V(t=t_0)} \right) \ast \mathbf j}_{<0} \nonumber \\
&&+ \int\limits_{S_{d-2}^\infty \times \Delta t} \ast \mathbf{j} - \int\limits_{{\cal{H}_\epsilon} \times \Delta t} \ast \mathbf{j} \, ,\end{aligned}$$ where $S_{d-2}^\infty$ is a $d-2$-sphere at spatial infinity and $\mathcal{H}_\epsilon$ is a $d-2$-sphere at $r=\mu + \epsilon$. We know that the first summand is smaller than zero, because we will have a decaying field. Thus the second and third terms together must be greater than zero. To achieve this, one possibility is to require the following conditions to the probability current: $$\begin{aligned}
\int\limits_{{\cal{H}_\epsilon} \times \Delta t} \ast \mathbf{j} \, &&= \int\limits_{\Delta t} \mathrm d t \, \left. j^r \right|_{r=\mu+\epsilon} \int\limits_{\cal{H}_\epsilon} \mathrm{d} \Sigma_r
<
0 \, , \nonumber \\
\int\limits_{S_{d-2}^\infty \times \Delta t} \ast \mathbf{j} \, &&= \int\limits_{\Delta t} \mathrm d t \, \left. j^r \right|_{r \rightarrow \infty} \int\limits_{S_{d-2}^\infty} \mathrm{d} \Sigma_r
>
0 \, .
\label{bc_flux}\end{aligned}$$ The integration surfaces are intrinsic geometric objects, generated by the orbits of Killing vectors. The requirements (\[bc\_flux\]) on the current at the boundaries imply that the field flows into the black hole at the horizon ($r \rightarrow \mu \Rightarrow x \rightarrow 1 \Rightarrow z \rightarrow - \infty$), meaning that $j^r < 0$ there (we have taken the limit $\epsilon \rightarrow 0$ here). At spatial infinity ($r \rightarrow \infty \Rightarrow x \rightarrow \infty \Rightarrow z \rightarrow \infty$) the field should flow outward, so $j^r > 0$ there.
Let us now choose the following representation for the Clifford algebra and the spinor $$\begin{aligned}
\gamma^{\hat 0} = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right] \, , \; \gamma^{\hat 1} = \left[ \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right] \, , \; \phi = \left[ \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right], \label{eqn:Clifford-representation}\end{aligned}$$ which is appropriate in order to simplify several expressions, in particular later in section III. For instance, with this choice the radial probability current $j^{\hat 1} = j^{\hat r} \propto j^r$ is just proportional to $| \phi_2 |^2 - | \phi_1 |^2$. This allows us to rewrite the requirements from equation (\[bc\_flux\]) as a set of boundary conditions for the radial part of the spinor, which constrains the behaviour of the leading terms at the boundaries.
At the horizon, from equations (\[bc\_flux\]) and (\[eqn:Clifford-representation\]) we obtain that $| \phi_2 |^2 - | \phi_1 |^2<0$, which implies that the radial part of the spinor has to behave like $$\begin{aligned}
\left[ \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right] && \approx \left[ \begin{array}{c} 1 \\ \frac{2 \mathrm{i} \sqrt{d -3}}{4 \mathrm{i} \Omega - d + 3} \, \mathrm{e}^{\frac{d-3}{2} z} \end{array} \right] \mathrm{e}^{- \mathrm{i} \Omega z} \nonumber \\
&&\approx \left[ \begin{array}{c} 1 \\ \frac{2 \mathrm{i} \sqrt{d -3}}{4 \mathrm{i} \Omega - d + 3} \, \sqrt{x-1} \end{array} \right] (x-1)^{- \frac{\mathrm{i} \Omega}{d - 3}} \, .
\label{bc_horizon}\end{aligned}$$
Similarly, at infinity, equations (\[bc\_flux\]) and (\[eqn:Clifford-representation\]) imply that $| \phi_2 |^2 - | \phi_1 |^2>0$, which results in the following behaviour of the radial part of the spinor $$\begin{aligned}
\left[ \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right] && \approx \left[ \begin{array}{c} \Omega \pm \chi \\ \eta \end{array} \right] z^{\mp \alpha_z} \, \mathrm{e}^{\mp \mathrm{i} \chi z} \nonumber \\
&& \approx \left[ \begin{array}{c} \Omega \pm \chi \\ \eta \end{array} \right] x^{\mp \alpha_x} \, \mathrm{e}^{\mp \mathrm{i} \chi x} \, ,
\label{bc_infinity}\end{aligned}$$ with $\chi = \sqrt{\Omega^2 - \eta^2}$, the upper sign for $\Re (\Omega)<0$, the lower sign for $\Re (\Omega) > 0$ and the constants $\alpha_z$ and $\alpha_x$ are defined such that $$\begin{aligned}
\alpha_z &&= \begin{cases} \frac{\mathrm i}{2} \frac{\eta^2}{\chi} \; &\text{, for} \; d=4 \\ 0 &\text{, otherwise} \end{cases} \, , \nonumber \\
\alpha_x &&= \begin{cases} \alpha_z + \mathrm i \chi = \frac{\mathrm i}{2} \frac{2 \Omega^2 - \eta^2}{\chi} &\text{, for} \; d=4 \\ 0 &\text{, otherwise} \end{cases} \, . \end{aligned}$$
Note that $d=4,5$ present a distinct behaviour asymptotically. When the representation given in equation (\[eqn:Clifford-representation\]) is used with the differential equation (\[eqn:diff.eq-tortoise\]), it results in the following second order differential equation for the functions $\phi_{1,2}$ $$\begin{aligned}
(\mathrm{i} \kappa &&\mp x \eta) x^2 \frac{\mathrm d^2}{\mathrm d z^2} \phi_{1,2} + x F_\mp \frac{\mathrm d}{\mathrm d z} \phi_{1,2} + \left\{ (\mathrm i \kappa \mp x \eta) x^2 \Omega^2 \right. \nonumber \\
&& \left. \pm \mathrm i \Omega x F_\mp - (\mathrm i \kappa \mp \eta x) (\kappa^2 + \eta^2 x^2) f \right\} \phi_{1,2} = 0 \, , \label{eqn:diff.second.order}\end{aligned}$$ with $$\begin{aligned}
F_\mp (z) = \mathrm i \kappa f - \frac{x (\mathrm i \kappa \mp \eta x)}{2 f} \frac{\mathrm d f}{\mathrm d z} \, ,\end{aligned}$$ the upper sign for $\phi_1$ and the lower sign for $\phi_2$. [Note that since we are working with another representation, this differential equation is different from the one obtained in [@Cho:2003qe].]{}
Numerical methods {#S2}
=================
Because of the lack of analytical solutions to the equation (\[eqn:diff.second.order\]) subject to the boundary conditions (\[bc\_horizon\]) and (\[bc\_infinity\]), we need to employ numerical methods in order to obtain the spectrum of quasinormal modes of the massive Dirac field.
We will employ two independent techniques in our calculations: the continued fraction method with the Nollert improvement, and a third order WKB method.
Continued fraction method
-------------------------
From the asymptotic behaviour of the spinor that we have obtained in equation (\[bc\_infinity\]), we can factorize the behaviour of the Ansatz functions at infinity $$\begin{aligned}
\left[ \begin{array}{c} \phi_1 \\ \phi_2 \end{array} \right] = x^{\alpha_x} (x-1)^{-\frac{\mathrm{i} \Omega}{d-3}} \mathrm{e}^{ \mathrm{i} \chi x} \left[ \begin{array}{c} \psi_1 \\ \sqrt{x-1} \, \psi_2 \end{array} \right] \, ,\end{aligned}$$ with $\psi_1$ and $\psi_2$ unknown functions of $x$, and where we have assumed the $\Re(\Omega) > 0$ behaviour in the various exponents (i.e. the lower sign in equation (\[bc\_infinity\])). In addition, it is convenient to change variables to the compactified coordinate $y := 1 - \frac{1}{x}$ with $y \in [0,1] $. Then the second order differential equation (\[eqn:diff.second.order\]) gives as a result the following system of equations for $\psi_1$ and $\psi_2$:
$$\begin{aligned}
&&K_\mp (1-y)^4 f^2 \frac{\mathrm d^2}{\mathrm d y^2} \psi_{1,2} \nonumber \\
&&+ \Bigg\{ G K_\mp \mp \mathrm i \kappa (1-y)^2 f \Bigg\} (1-y)^2 f \frac{\mathrm d}{\mathrm d y} \psi_{1,2} \nonumber \\
&&+ \Bigg\{ \left( C f^2 + \Omega^2 \mp \frac{\mathrm i \Omega}{2} (1-y)^2 \frac{\mathrm d f}{\mathrm dy} - f K_- K_+ \right. \nonumber \\
&& \;\;\;\;\;\;\;\;\; \left. + \frac{1}{2} B (1-y)^2 f \frac{\mathrm d f}{\mathrm d y} \right) K_\mp \nonumber \\
&& \;\;\;\;\;\;+ \left( \Omega \mp \mathrm i B f \right) \kappa (1-y)^2 f \Bigg\} \psi_{1,2} = 0 \, , \label{eqn:CF}\end{aligned}$$ with the upper sign for $\psi_1$, the lower sign for $\psi_2$ and for simplicity we have defined the following functions
$$\begin{aligned}
K_\pm &&= \eta \pm \mathrm i \kappa (1-y) \, , \, A = (x-1)^{\tilde \alpha} x^{\alpha_x} \mathrm e^{\mathrm i \chi x} \, , \nonumber \\
B &&= \frac{\mathrm d}{\mathrm d x} \ln A = - \frac{\mathrm i \Omega}{d - 3} \frac{1-y}{y} + \alpha_x (1-y) + \mathrm i \chi \, , \nonumber \\
C &&= \frac{1}{A} \frac{\mathrm d^2}{\mathrm d x^2} A \nonumber \\
&&= \left( \left[ \frac{\tilde{\alpha}}{y} + \alpha_x \right] (1-y) + \mathrm i \chi \right)^2 - \left[ \frac{\tilde{\alpha}}{y^2} + \alpha_x \right] (1-y)^2 \, , \nonumber \\
G &&= 2f(B+y-1)+\frac{1}{2}(1-y)^2\frac{df}{dy}
\nonumber \\
\tilde{\alpha} &&= \begin{cases} - \frac{\mathrm i \Omega}{d - 3} &\text{, for} \; \psi_1 \\ - \frac{\mathrm i \Omega}{d - 3} + \frac{1}{2} &\text{, for} \; \psi_2 \end{cases} \, .\end{aligned}$$
Note that the zeros of the coefficient of $\mathrm d^2 \psi_{1,2} / \mathrm d y^2$ for $4 \le d \le 9$ are either at $y=0$ or outside the unit circle $|y|<1$ in the complex $y$-plane. However, for $d>9$ some zeros of the function $f(y) = 1 - (1-y)^{d-3}$ lie inside the unit circle. Thus the functions $\psi_{1,2}$ can be expanded in a power series in $y$ that converges on the whole range of interest on the real axis, $y \in [0,1]$, only for $4 \le d \le 9$. For $d \ge 10$ one has to analytically continue the functions through midpoints [@Rostworowski:2006bp]. We will not do this here; hence in the following we restrict ourselves to the cases $4 \le d \le 9$. All coefficient functions of these differential equations are just polynomials in $y$, so the resulting recurrence relations for the coefficients of the expansion are of finite order, namely $2 d - 3$.
We will use the continued fraction method to determine the complex frequencies $\Omega$ which lead to physical solutions of the system (\[eqn:CF\]) [@Leaver:1985ax]. The method with the Nollert improvement is described in [@Nollert:1993zz]. Given a recurrence relation of order $N$ for the coefficients $f_{n}$ $$\begin{aligned}
\sum\limits_{k=0}^{\min \{ N, n\}} a^{(N)}_k (n) \, f_{n-k} = 0 \, ,\end{aligned}$$ one can calculate the coefficients of the recurrence relation of order $N-1$. These coefficients for $n < N, 0 \le k \le n$ are given by $a^{(N-1)}_k (n) = a^{(N)}_k (n)$ , and for $n \ge N$ by
$$\left[ \begin{array}{c} a^{(N-1)}_0 (n) \\ a^{(N-1)}_1 (n) \\ a^{(N-1)}_2 (n) \\ \vdots \\ a^{(N-1)}_{N-2} (n) \\ a^{(N-1)}_{N-1} (n) \end{array} \right] = \left[ \begin{array}{cccccc} 0 & 0 & 0 & \cdots & 0 & a^{(N)}_0 (n) \\ - a^{(N)}_N (n) & 0 & 0 & \cdots & 0 & a^{(N)}_1 (n) \\ 0 & -a^{(N)}_N (n) & 0 & \cdots & 0 & a^{(N)}_2 (n) \\ \vdots & & \ddots & & \vdots & \vdots \\ 0 & \cdots & 0 & -a^{(N)}_N (n) & 0 & a^{(N)}_{N-2} (n) \\ 0 & \cdots & 0 & 0 & -a^{(N)}_N (n) & a^{(N)}_{N-1} (n) \end{array} \right] \, \left[ \begin{array}{c} a^{(N-1)}_0 (n-1) \\ a^{(N-1)}_1 (n-1) \\ a^{(N-1)}_2 (n-1) \\ \vdots \\ a^{(N-1)}_{N-2} (n-1) \\ a^{(N-1)}_{N-1} (n-1) \end{array} \right] \,.$$
To avoid large numbers in the numerics one can normalise after each step by dividing, for example, by $a_{N-1}^{(N-1)}(n-1)$, provided that $a_{N-1}^{(N-1)}(n-1) \neq 0$.
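The elimination step above can be illustrated on a toy order-3 recurrence with constant coefficients (a hypothetical example, not the actual coefficients derived from equation (\[eqn:CF\])): a single application of the matrix step, with the normalisation just described, yields an order-2 relation with $n$-dependent coefficients that annihilates the same sequence.

```python
def reduce_once(a, N, n_max):
    """One elimination step: coefficients a[n] (k = 0..min(N, n)) of an order-N
    recurrence -> coefficients of an equivalent order-(N-1) recurrence."""
    b = {n: list(a[n]) for n in range(1, N)}      # unchanged for n < N
    for n in range(N, n_max + 1):
        old = b[n - 1]                            # vector of length N (k = 0..N-1)
        new = [a[n][0] * old[N - 1]]              # row 0 of the matrix step
        for k in range(1, N):                     # rows 1..N-1
            new.append(-a[n][N] * old[k - 1] + a[n][k] * old[N - 1])
        norm = new[N - 1] or 1.0                  # normalise to avoid large numbers
        b[n] = [c / norm for c in new]
    return b

# Toy order-3 recurrence: f_n - f_{n-1} - f_{n-2} - f_{n-3} = 0, truncated to
# k <= n for n < 3, which fixes the sequence up to an overall scale.
n_max, c = 12, [1.0, -1.0, -1.0, -1.0]
a3 = {n: c[: min(3, n) + 1] for n in range(1, n_max + 1)}
f = [1.0]
for n in range(1, n_max + 1):
    f.append(-sum(a3[n][k] * f[n - k] for k in range(1, min(3, n) + 1)) / a3[n][0])

a2 = reduce_once(a3, 3, n_max)
residuals = [abs(sum(a2[n][k] * f[n - k] for k in range(3))) for n in range(2, n_max + 1)]
print(max(residuals))   # ~0: the reduced order-2 relation holds on the same sequence
```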
Thus it is possible to reduce the recurrence relation to a relation of order two $$\begin{aligned}
a^{(2)}_0 (1) \, f_1 + a^{(2)}_1 (1) \, f_0 &&= 0 \, , \nonumber \\
a^{(2)}_0 (n) \, f_n + a^{(2)}_1 (n) \, f_{n-1} + a^{(2)}_2 (n) \, f_{n-2} &&= 0 \, .\end{aligned}$$ with $n \ge 2$. Re-expressing this with $\Delta_n := f_n / f_{n-1}$ gives $$\label{eqn:Delta.1}
a^{(2)}_0 (1) \, \Delta_1 + a^{(2)}_1 (1) = 0 \, ,$$ $$\label{eqn:Delta.n}
\Delta_{n-1} = \frac{- a^{(2)}_2(n)}{a^{(2)}_1 (n) + a^{(2)}_0 (n) \, \Delta_n} \, ,$$ with $n \ge 2$. Using equation (\[eqn:Delta.n\]) in equation (\[eqn:Delta.1\]) results in the following continued fraction equation $$\begin{aligned}
a^{(2)}_1 (1) \, - \, \frac{a^{(2)}_0(1) \, a^{(2)}_2(2)}{a^{(2)}_1(2) - \frac{a^{(2)}_0 (2) \, a^{(2)}_2 (3)}{a^{(2)}_1 (3) -
\raisebox{-2.5pt}{$\ddots$}
\frac{a^{(2)}_0 (n-1) \, a^{(2)}_2 (n)}{a^{(2)}_1(n) -
\raisebox{-3.0pt}{$\ddots$}
}}} = 0 \, . \label{eqn:contfract}\end{aligned}$$
The coefficients $a^{(2)}_k(n)$ depend on $\Omega$. Thus one can approximate the above continued fraction up to a certain depth and minimize the absolute value of its left-hand side by varying $\Omega$. Let the depth of the approximation be $K-1$. Then the last fraction will be given by $$\begin{aligned}
\raisebox{-2.5pt}{$\ddots$}
\frac{a^{(2)}_0 (K-1) \, a^{(2)}_2 (K)}{a^{(2)}_1 (K) + a_0^{(2)} (K) \, \Delta _K} \, .\end{aligned}$$ Instead of approximating $\Delta_K$ by zero, the Nollert improvement uses the original recurrence relation to give an asymptotic approximation of $\Delta_K$ for $K \gg 1$.
The coefficients $a^{(2d-3)}_k (K)$ and $\Delta_K$ are expanded in powers of $1/\sqrt{K}$, and $K-k \approx K$ for $0 \le k \le 2d-3 \ll K$ is used. Let the expansion of $\Delta_K$ be $$\begin{aligned}
\Delta_K = C_0 + \frac{C_1}{\sqrt{K}} + \mathcal{O} \left( \frac{1}{K} \right).\end{aligned}$$ Following Nollert [@Nollert:1993zz] we choose $C_0 = -1$ and $\Re ( C_1 ) > 0$. The other coefficients are then uniquely determined by the resulting equations.
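As an illustration of the bottom-up evaluation of a continued fraction like equation (\[eqn:contfract\]) with a Nollert-style tail, here is a minimal sketch applied to a toy three-term recurrence (the Bessel recurrence, whose minimal-solution ratio $J_1(x)/J_0(x)$ is known) rather than to the actual Dirac coefficients:

```python
import math

def bessel_ratio(x, depth=60):
    """J_1(x)/J_0(x) via bottom-up evaluation of the continued fraction
    Delta_{n-1} = -a2(n) / (a1(n) + a0(n) Delta_n) for the Bessel three-term
    recurrence a0 = 1, a1(n) = -2(n-1)/x, a2 = 1.  The tail Delta_K is seeded
    with the asymptotic form x/(2K) of the minimal solution instead of zero."""
    delta = x / (2.0 * depth)           # Nollert-style tail approximation
    for n in range(depth, 1, -1):
        delta = -1.0 / (-2.0 * (n - 1) / x + delta)
    return delta                        # = Delta_1 = J_1(x)/J_0(x)

def J(nu, x, terms=30):
    """Reference value from the power series of J_nu (integer nu >= 0)."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + nu))
               * (x / 2.0) ** (2 * k + nu) for k in range(terms))

print(bessel_ratio(1.0), J(1, 1.0) / J(0, 1.0))   # both ~0.57508
```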
By using this approach with equation (\[eqn:CF\]), we obtain two continued fraction relations. Unless stated otherwise, to generate the quasinormal mode frequencies $\Omega$, we search with both equations separately, due to the lack of an obvious supersymmetry between the two equations when the fermionic mass is not zero [@Cho:2007zi].
The method is implemented in Maple. The initial mass value is $\eta = 10^{-6}$. For this initial mass value, the initial guess in the complex $\Omega$-plane is chosen close to the large-$\kappa$ analytic approximation for massless modes from [@Cho:2007zi]. The ratio of the value of the left-hand side of equation (\[eqn:contfract\]) at a resonance to its typical value in the surrounding area of the complex $\Omega$-plane is $\mathcal O(10^{-4})$.
WKB Method
----------
In addition to the continued fraction approach, we will use a third order WKB method. In this case it is convenient to factorize the spinor as $$\begin{aligned}
\phi_{1,2} = \mathrm{e}^{g_\mp} \, \nu_{1,2} \, ,\end{aligned}$$ with $$\begin{aligned}
\frac{\mathrm d}{\mathrm d z} g_\mp = - \frac{F_\mp}{2 x (\mathrm i \kappa \mp x \eta)} \, ,\end{aligned}$$ the upper sign for $\phi_1$ and the lower sign for $\phi_2$. Equation (\[eqn:diff.second.order\]) is then reduced to the second order differential equation $$\begin{aligned}
\frac{\mathrm d^2}{d z^2} \nu_{1,2} + \left( \Omega^2 - V_{\text{eff}, \mp} \right) \nu_{1,2} = 0 \, ,
\label{eq_WKB}\end{aligned}$$ with the effective potential $$\begin{aligned}
V_{\text{eff},\mp} = &&\mp \frac{\mathrm i \Omega}{x ( \mathrm i \kappa \mp x \eta)} \, F_\mp + \frac{1}{2} \left(\frac{F_\mp}{2 x (\mathrm i \kappa \mp x \eta)} \right)^2 \nonumber \\
&&+ \frac{1}{2} \frac{\mathrm d}{\mathrm d z} \left( \frac{F_\mp}{2 x (\mathrm i \kappa \mp x \eta)} \right) + \frac{\kappa^2 + \eta^2 x^2}{x^2} \, f \label{eqn:completeeffpot}\end{aligned}$$ where again the upper sign is for $\nu_1$ and the lower sign is for $\nu_2$. This second order equation is in the gestalt of a Schrödinger-like equation. The boundary conditions are outgoing waves at $z \rightarrow \pm \infty$. This problem was solved semi-analytically in [@Iyer:1986np] with a WKB approach up to third order. The equation for the complex frequencies $\Omega$ is [@Iyer:1986nq; @0264-9381-9-4-012] $$\begin{aligned}
\Omega^2 = [V_0 + ({-2V_0^{(2)}})^{1/2}\Lambda ] - \mathrm i \lambda (-2 V_0^{(2)} )^{1/2} (1 + \Sigma) \, , \label{eqn:WKBeqn}\end{aligned}$$ where we have defined the following functions: $$\begin{aligned}
\Lambda = && \frac{1}{({-2V_0^{(2)}})^{1/2}}\left\{\frac{1}{8} \left( \frac{V_0^{(4)}}{V_0^{(2)}} \right) \left( \frac{1}{4} + \lambda^2 \right) \right. \nonumber \\
&& \left. - \frac{1}{288} \left( \frac{V_0^{(3)}}{V_0^{(2)}} \right)^2 (7 + 60 \lambda^2)\right\} \, , \nonumber \\
\Sigma = && \frac{-1}{2 V_0^{(2)}} \left\{ \frac{5}{6912} \left( \frac{V_0^{(3)}}{V_0^{(2)}} \right)^4 (77 + 188 \lambda^2) \right. \nonumber \\
&& - \frac{1}{384} \left( \frac{V_0^{(3) \, 2} V_0^{(4)}}{V_0^{(2) \, 3}} \right) (51 + 100 \lambda^2) \nonumber \\
&& + \frac{1}{2304} \left( \frac{V_0^{(4)}}{V_0^{(2)}} \right)^2 (67 + 68 \lambda^2 ) \nonumber \\
&& + \frac{1}{288} \left( \frac{V_0^{(3)} V_0^{(5)}}{V_0^{(2) \, 2}} \right) (19 + 28 \lambda^2) \nonumber \\
&& \left. - \frac{1}{288} \left( \frac{V_0^{(6)}}{V_0^{(2)}} \right) (5 + 4 \lambda^2) \right\} \, ,\end{aligned}$$ with $\lambda = n + 1/2$ and $V_0^{(k)}$ the $k$-th derivative of $V_{\text{eff}, \mp}$ evaluated at the point where $\mathrm d / \mathrm d z \, V_{\text{eff}, \mp} = 0$. It should be noted that the WKB method has been improved up to 6th order in [@Konoplya:2003ii] and recently modified and extended up to 13th order in [@Matyjasek:2017psv].
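The correction terms above translate directly into code. The following Python sketch (our own transcription of equation (\[eqn:WKBeqn\]), not the authors' Maple implementation) evaluates $\Omega^2$ from the derivatives $V_0^{(k)}$ at the extremum; when the third- to sixth-order derivatives vanish, $\Lambda$ and $\Sigma$ are zero and the first-order result $\Omega^2 = V_0 - \mathrm i \lambda (-2V_0^{(2)})^{1/2}$ is recovered:

```python
import cmath

def wkb3_omega_squared(V0, V2, V3, V4, V5, V6, lam):
    """Third-order WKB value of Omega^2 from eq. (eqn:WKBeqn);
    Vk is the k-th derivative of the effective potential at its
    extremum, lam = n + 1/2."""
    s = cmath.sqrt(-2.0 * V2)
    # Lambda correction term
    Lam = (1.0 / s) * ((1.0 / 8.0) * (V4 / V2) * (0.25 + lam**2)
                       - (1.0 / 288.0) * (V3 / V2)**2 * (7.0 + 60.0 * lam**2))
    # Sigma correction term
    Sig = (-1.0 / (2.0 * V2)) * (
        (5.0 / 6912.0) * (V3 / V2)**4 * (77.0 + 188.0 * lam**2)
        - (1.0 / 384.0) * (V3**2 * V4 / V2**3) * (51.0 + 100.0 * lam**2)
        + (1.0 / 2304.0) * (V4 / V2)**2 * (67.0 + 68.0 * lam**2)
        + (1.0 / 288.0) * (V3 * V5 / V2**2) * (19.0 + 28.0 * lam**2)
        - (1.0 / 288.0) * (V6 / V2) * (5.0 + 4.0 * lam**2))
    return (V0 + s * Lam) - 1j * lam * s * (1.0 + Sig)
```

The complex square root is taken with `cmath`, since the effective potential (and hence its derivatives) is in general complex.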
The potential $V_{\text{eff}, \mp}$ is complex, so one cannot expect to find a point with $\mathrm d / \mathrm d z \, V_{\text{eff}, \mp} = 0$ on the real $z$-axis. We therefore analytically continue $z$ to the complex plane. We also search in terms of $x$ instead of $z$, because a closed analytic expression for $x=x(z)$ cannot be given for all dimensions $d$.
The method is implemented in Maple. For each given set of parameters we first numerically calculate the location of the extremum $x_0$ in the complex $x$-plane. From the set of values $x_0$ for which $\Re(x_0) > 1$ we choose the one with the smallest $|\Im(x_0)|$. After the extremum is found, the left- and right-hand sides of equation (\[eqn:WKBeqn\]) are evaluated and the absolute value of their difference is calculated. The search in the complex $\Omega$-plane aims to minimize this difference. The initial mass value is $\eta = 0$. The initial guess for the frequency in the complex $\Omega$-plane is chosen according to the large-$\kappa$ analytic approximation for massless modes from [@Cho:2007zi]. The ratio of the difference of the left- and right-hand sides of equation (\[eqn:WKBeqn\]) at a resonance to its value in the surrounding area of the complex $\Omega$-plane is $\mathcal O(10^{-9})$.
We only report on results that we cross-checked with the continued fraction method. We have also calculated some parts of the spectrum using a shooting method, where the differential equations (\[eqn:diff.second.order\]) are solved with the proper boundary conditions (\[bc\_horizon\]) and (\[bc\_infinity\]). Although we have been able to reproduce the general behaviour of the spectrum with the mass, the precision obtained with this approach was not as good as that of the continued fraction method and the third order WKB. Hence in the following figures we will only display results from these last two methods.
Results {#S3}
=======
An analytical approximation for large angular quantum number and mass {#sec.large.kappa}
---------------------------------------------------------------------
Before presenting the numerical results obtained by applying the methods we have described, let us comment on an analytical result that can be derived from the WKB approach. In the limit of large mass and angular quantum number ($\eta \gg 1$ and $|\kappa| \gg 1$), the second order equation (\[eq\_WKB\]) for $\nu_{1,2}$ becomes $$\begin{aligned}
\frac{\mathrm d^2}{\mathrm d z^2} \nu_{1,2} + \left( \Omega^2 - \frac{\kappa^2 + \eta^2 x^2}{x^2} \, f \right) \nu_{1,2} = 0 \, . \label{eq_large_eta_kappa}\end{aligned}$$ We can define an effective potential $$\begin{aligned}
V_\text{eff}(x) = \frac{\kappa^2 + \eta^2 x^2}{x^2} f(x) = \left( \eta^2 + \frac{\kappa^2}{x^2} \right) f(x) \, . \label{eqn:eff.potential}\end{aligned}$$ Note that this approximation to the effective potential is quadratic in $\kappa$ and thus unaffected by the sign of $\kappa$. This means that for large values of the angular number $l$, the two branches of modes that result from equation (\[kappa\_def\]) will become very close to each other.
The effective potential possesses an extremum determined by the equation $$\begin{aligned}
\eta^2 (d-3) x^2 + \kappa^2 \left( d-1 - 2 x^{d-3} \right) = 0 \, . \label{eqn:large.mass.angular.extremum}\end{aligned}$$ Hence following the standard procedure we can obtain a first order WKB approximation for the eigenvalue $\Omega$ in terms of the extremum of the potential, given by $$\begin{aligned}
\Omega^2 = V_0 - \mathrm{i} \left( n + \frac{1}{2} \right) (-2 V_0^{(2)} )^{1/2} \, , \label{eqn:large.mass.angular.omega}\end{aligned}$$ where $n$ is the overtone number. This formula allows us to obtain an analytical approximation of $\Omega$ for large values of $\eta$ and $|\kappa|$. However, the relation has some limitations. For instance, when $\eta \gg 1$ and $|\kappa| \ll \eta$, the potential of equation (\[eq\_large\_eta\_kappa\]) does not have an extremum for finite $x>1$. Thus the above approximation (\[eqn:large.mass.angular.omega\]) breaks down when $|\kappa|$ is small and $\eta$ is large.
For $\eta = 0$, equation (\[eqn:large.mass.angular.omega\]) reduces to the eikonal $|\kappa| \gg 1$ limit of the massless Dirac modes, described before in [@Cho:2007zi]. Moreover, in practice equation (\[eqn:large.mass.angular.omega\]) is a reasonable approximation to the frequency $\Omega$ for all $\eta$, as long as $|\kappa| \gg 1$ and the extremum (\[eqn:large.mass.angular.extremum\]) is not found at infinity. Note that, because of the particular dependence of equation (\[eqn:large.mass.angular.extremum\]) on the dimension $d$, it can be solved analytically only for $4\leq d \leq 7$ and $d=9$. For $d=8$ the extremum has to be determined numerically.
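To make the use of the approximation concrete, the following Python sketch (ours; the function names and the bisection bracket are illustrative choices) evaluates equation (\[eqn:large.mass.angular.omega\]): it locates the extremum from equation (\[eqn:large.mass.angular.extremum\]) by bisection, takes $f(x)=1-x^{-(d-3)}$ for the metric function (the normalization consistent with the extremum equation), and computes $V_0$ and $V_0^{(2)}$ numerically:

```python
import cmath

def f_metric(x, d):
    # Schwarzschild-Tangherlini metric function, with x = r/mu (assumed normalization)
    return 1.0 - x**(-(d - 3))

def V_eff(x, d, kappa, eta):
    # Large-mass / large-angular-number effective potential, eq. (eqn:eff.potential)
    return (eta**2 + kappa**2 / x**2) * f_metric(x, d)

def extremum(d, kappa, eta, a=1.000001, b=1.0e3, tol=1e-12):
    # Bisection on eq. (eqn:large.mass.angular.extremum); assumes a root in [a, b]
    g = lambda x: eta**2 * (d - 3) * x**2 + kappa**2 * (d - 1 - 2.0 * x**(d - 3))
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def omega_wkb1(d, kappa, eta, n=0, h=1e-5):
    # First-order WKB estimate of Omega, eq. (eqn:large.mass.angular.omega)
    x0 = extremum(d, kappa, eta)
    V0 = V_eff(x0, d, kappa, eta)
    V2 = (V_eff(x0 + h, d, kappa, eta) - 2.0 * V0
          + V_eff(x0 - h, d, kappa, eta)) / h**2
    omega = cmath.sqrt(V0 - 1j * (n + 0.5) * cmath.sqrt(-2.0 * V2))
    return (omega if omega.real > 0 else -omega), x0
```

For $d=4$, $\eta=0$ the sketch reproduces the eikonal extremum $x_0=3/2$ with $V_0=4\kappa^2/27$, and returns a frequency with $\Re(\Omega)>0$, $\Im(\Omega)<0$, i.e. a decaying mode.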
We will now argue that, in the first order WKB approximation, the effective potential indicates the existence of frequencies with arbitrarily small imaginary part for finite mass values $\eta$ in $d=4$ and $5$. If the second derivative of the potential vanishes in equation (\[eqn:large.mass.angular.omega\]) and $V_0 > 0$, then the resulting frequency will be real. These conditions can be cast in the form $$\begin{aligned}
0 &= \frac{(d-5) x^d + (d-1) x^3}{x^{d+3}} \, , \nonumber \\
\eta^2 &= \kappa^2 \frac{6 x^d + (1-d) d x^3}{(3-d)(2-d) x^5} \, .
\label{zero_pot}\end{aligned}$$ For $d=4$ these result in a finite $x_{00}^{d=4} = 3$ and $\eta_{00}^{d=4} = |\kappa| / \sqrt{3}$, where $x_{00}^{}$ and $\eta_{00}^{}$ are the zeros of the equations (\[zero\_pot\]) for the given value of $d$. For $d \ge 5$ the first equation does not have a finite solution $x_{00} > 1$. Still, $x_{00}^{d \ge 5} \rightarrow \infty$ is a formal solution to the first equation. This results in a finite $\eta_{00}^{d=5} = |\kappa|$ only for $d=5$. For $d \ge 6$ also $\eta_{00}^{d \ge 6} \rightarrow \infty$ for the formal solution $x_{00}^{d \ge 5} \rightarrow \infty$.
The value of the potential at these points is in four dimensions $V^{d = 4}_\text{eff.} (x^{d=4}_{00}, \eta^{d=4}_{00}) = 8 \kappa^2 / 27 > 0$ and in five dimensions $V^{d=5}_\text{eff.}(x^{d=5}_{00}, \eta^{d=5}_{00}) = \kappa^2 > 0$. Thus in four and five dimensions one can find frequencies with vanishing imaginary part at finite mass $\eta$ in the first order WKB approximation. Of course in five dimensions the WKB method must break down close to these frequencies due to $x_0 \rightarrow \infty$ there. Although in the four dimensional case we do not see indications of a breakdown of the WKB method close to $x^{d=4}_{00} < \infty$, we will see that in practice both numerical schemes become troublesome close to $\eta^{d=4}_{00} = |\kappa| / \sqrt{3}$. However, as one approaches these points, in principle one should be able to generate frequencies with imaginary parts as small as desired for finite mass values. Note that the above behaviour is independent of the overtone number $n$ in the first order WKB approximation.
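The quoted $d=4$ and $d=5$ values can be verified symbolically. The sketch below (ours, using SymPy, with the metric function taken as $f(x)=1-x^{-(d-3)}$, consistent with equation (\[eqn:large.mass.angular.extremum\])) solves $V''_\text{eff}=0$ for $\eta^2$, inserts the result into $V'_\text{eff}=0$, and recovers $x_{00}^{d=4}=3$, $\eta_{00}^{d=4}=|\kappa|/\sqrt 3$ and $V_\text{eff}=8\kappa^2/27$; the $d=5$ limit $V_\text{eff}\to\kappa^2$ follows as well:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
kappa, eta = sp.symbols('kappa eta', positive=True)

def V_eff(d):
    # Large-mass / large-kappa effective potential, with f = 1 - x^{-(d-3)}
    return (eta**2 + kappa**2 / x**2) * (1 - x**(-(d - 3)))

# d = 4: solve V'' = 0 for eta^2, insert into V' = 0
V4 = V_eff(4)
eta2 = sp.solve(sp.diff(V4, x, 2), eta**2)[0]
x00 = sp.solve(sp.diff(V4, x).subs(eta**2, eta2), x)[0]    # extremum location
eta2_00 = eta2.subs(x, x00)                                # critical mass squared
V00 = sp.simplify(V4.subs([(eta**2, eta2_00), (x, x00)]))  # potential value there

# d = 5: the extremum runs off to infinity; at eta = kappa the potential tends to kappa^2
V5_limit = sp.limit(V_eff(5).subs(eta, kappa), x, sp.oo)
```

This confirms the critical values quoted in the text without relying on the numerics.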
Observe also that one can construct the effective potential given in equation (\[eqn:eff.potential\]) from the geodesic equations of a massive particle of mass $m$. The Lagrangian $\mathcal{L}$ for a massive particle in the Schwarzschild-Tangherlini spacetime is given by $$\begin{aligned}
\mathcal L = 1 = f(r) \dot{t}^2 - \frac{1}{f(r)} \dot{r}^2 - r^2 \dot{\Omega}^2 \, ,\end{aligned}$$ where $\dot \Omega$ encodes all the angular dependencies. Of course one can choose a plane for the orbit, and one has the constants of motion $l = r^2 \dot \Omega$ and $e = f(r) \dot{t}$. Substituting these in the above Lagrangian, multiplying with $m^2 \mu^2$, changing to the normalised quantities $\eta = m \mu, L = m l, x = r / \mu$ and rearranging gives $$\begin{aligned}
\mu^2 \eta^2 \dot{x}^2 + \left(\eta^2 + \frac{L^2}{x^2} \right) f(x) = \eta^2 e^2 \, .\end{aligned}$$ We can thus define the effective potential of geodesic motion $$\begin{aligned}
V_\text{eff.}^\text{geo.} (x) = \left(\eta^2 + \frac{L^2}{x^2} \right) f(x) \stackrel{L \leftrightarrow \kappa}{=} V_\text{eff.} (x) \, .\end{aligned}$$ This should be expected, considering that to lowest order in $\hbar$ a WKB approximation for the Dirac equation results in the equations for geodesic motion [@Audretsch:1981wf; @Rudiger:1981uu].
The case with large angular quantum number: numerical results for $l=10$
------------------------------------------------------------------------
In this section we present results for $l=10$ and different values of the spacetime dimension and fermionic mass. This can be seen as an example of the behaviour of the quasinormal modes for large values of the angular quantum number since, as we will see below, the previous analytical result already describes the spectrum of modes very accurately.
The calculated quasinormal modes in the complex plane can be seen in Figure \[fig:RIOl10\], where we show the imaginary part versus the real part of $\Omega$. All these modes correspond to the fundamental state, for different values of the dimension $d$ and mass $\eta$. In different colors we show different spacetime dimensions $d$, ($d=4$ in purple, $d=5$ in blue, $d=6$ in green, $d=7$ in orange, $d=8$ in brown and $d=9$ in red). Each point represents a mode with a different value of the fermionic mass $\eta$, with the case $\eta=0$ (marked with a triangle) corresponding to the value with lowest frequency $\Re(\Omega)$, and largest $|\Im(\Omega)|$. Note that all these modes have $\Im(\Omega)<0$, so the Dirac field perturbation decays with time. The crosses and points represent numerical values calculated using the third order WKB method with $\mathrm{sgn} (\kappa) = +1$ and $\mathrm{sgn} (\kappa ) = -1$, respectively. The solid lines represent the analytical results obtained from the large $\kappa$ limit to the potential (\[eqn:eff.potential\]) in the approximation (\[eqn:large.mass.angular.omega\]). We can see that even though in the approximation we assumed Dirac fields with $\eta \gg 1$, in practice the massless case is very well approximated, and the analytical approximation works very well for all values of the mass $\eta$: the relation (\[eqn:large.mass.angular.omega\]) works as an eikonal approximation for the Dirac quasinormal modes, even for small and intermediate values of the mass.
In Figure \[fig:RIOl10\] we can see that increasing the mass of the Dirac field has the effect of increasing the frequency of the mode $\Re(\Omega)$, while the absolute value of the imaginary part $\Im(\Omega)$ decreases (meaning that the damping time of the perturbation increases). We can also appreciate how increasing the dimension has the generic effect of increasing the value of $\Re(\Omega)$ and $|\Im(\Omega)|$. With regards to the sign of $\kappa$, in this figure one can observe that for a fixed value of the mass $\eta > 0$ the analytical approximation always lies between the full numerical values obtained for each of the branches. This means that $\Re (\Omega_{\mathrm{sgn}(\kappa)=+1}) < \Re (\Omega_\text{ana}) < \Re (\Omega_{\mathrm{sgn}(\kappa)=-1})$ and $|\Im (\Omega_{\mathrm{sgn}(\kappa)=+1})| < |\Im (\Omega_\text{ana})| < |\Im (\Omega_{\mathrm{sgn}(\kappa)=-1})|$, where $\Omega_\text{ana}$ are the quasinormal modes from the large-$\kappa$ approximation using first order WKB. In any case, as expected from the previous section, for $l=10$ both signs of $\kappa$ are always close to the analytical approximation.
Note that the cases with $d=4$ and $d=5$ possess a different behaviour from the rest of dimensions considered. In $d=4, 5$, the numerical analysis indicates that there exists a value of $\Re(\Omega)$ for which the $\Im(\Omega)$ reaches a critical value, as was indicated by the argument made in section \[sec.large.kappa\]. However, the numerical methods break down in this region and we mark these values with a red circle in Figure \[fig:RIOl10\]. This is not the case in $d\geq6$, where $|\Im(\Omega)|$ decreases smoothly as the frequency increases.
In Figure \[fig:Ibeh\] we present the scaled value of $\Im(\Omega)$ (normalized to the massless value of the imaginary part $\Im(\Omega_0)$). Here we can appreciate the critical behaviour for $d=4,5$. In $d=4$, for a critical value of the fermionic mass $\eta_c$, the imaginary part of $\Omega$ goes to zero in the analytical approximation. The numerical results show that the branch of modes disappears at a finite $\eta$, which is slightly smaller than the critical fermionic mass predicted by the analytical approximation. In $d=5$ however, the numerical analysis coincides very well with the analytical approximation, showing that $\Im(\Omega)$ can become arbitrarily small as we reach the critical value of the fermionic mass. However, the numerical analysis becomes problematic very close to the critical value, indicating that this branch of modes also disappears before reaching it.
We want to comment here that the existence of this critical value of the fermionic mass in the spectrum of the $d=4$ case was already noted in [@Cho:2003qe] by analysing the shape of the potential. There it was also observed that more massive fields lead to more slowly damped modes. Our results are compatible with these observations.
In the cases with $d\geq6$ the behaviour changes, and we can see that the modes exist for arbitrary values of the fermionic mass. The imaginary part decreases monotonically to zero as the mass becomes larger and larger. Hence the damping time of these modes can be increased without limit for very large values of the mass $\eta$.
Some insight into the particular behaviour of the cases with $d = 4, 5$ can be obtained by studying the dependence of $\Re(\Omega)$ on the mass $\eta$. In Figure \[fig:d45\_Phase\] we show the real part of $\Omega$ as a function of $\eta$ for these two values of the spacetime dimension for positive $\kappa$. Here we show the modes computed using both the WKB method (circle points) and the continued fraction method (solid line). The dashed lines represent the analytical approximation (\[eqn:eff.potential\]) and (\[eqn:large.mass.angular.omega\]), and the green line marks the limit in which $\Re(\Omega)=\eta$.
The numerical methods break down precisely when this line is approached (red circles), and we cannot trust results obtained beyond this point. Although the analytical approximation can be extended to generate some modes beyond $\Re(\Omega)=\eta$, in this case it is probably an artifact of the approximation.
In order to show this singular behaviour, in Figure \[fig:d45\_Rex\] we present the real part of the point $x_0$ satisfying $V_{\text{eff}}^\prime (x_0) = 0$, as a function of $\eta$ for $\mathrm{sgn}(\kappa)=+1$. The black ($d=4$) and blue ($d=5$) lines correspond to the analytical approximation (\[eqn:eff.potential\]), while the circle points correspond to values calculated using the complete effective potential from equation (\[eqn:completeeffpot\]). We can see in this Figure that the real part of $x_0$ diverges for $d=5$ as we approach the critical value of the mass $\eta_{00}^{d=5} = \kappa$ (marked with vertical red line). One can also see that although the values for $\Re(x_0)$ stay finite in $d=4$, there is a transition to another branch of zeros of the potential. However the numerical methods cannot obtain results with good enough precision once this second branch of zeros is reached.
Hence, since neither numerical method can generate modes with good precision close to $\Re(\Omega)=\eta$, and the analytical approximation breaks down in $d=5$ when this limit is crossed, we conjecture that this branch of fundamental $l=10$ modes ceases to exist in $d=4, 5$ when the mass of the Dirac field is large enough and the $\Re(\Omega)=\eta$ limit is reached.
In summary, from the results presented in this section we can learn the following:
For $d\ge6$, as the mass of the fermionic field is increased, the imaginary part of the $l=10$ fundamental mode decreases. The mode seems to exist for arbitrarily large values of the mass $\eta$, and $\Im(\Omega)$ can become arbitrarily small. Hence it is possible to find massive modes with arbitrarily large damping times (which we can interpret as quasistationary states for large fermionic mass). The analytical approximation of the spectrum gives very reasonable results for all values of the mass. Since in this regime the frequency increases linearly with the mass, the slowly damped field will oscillate very rapidly, with $\Re(\Omega)\simeq \eta \gg 1$.
For $d=4, 5$, the behaviour is different from the higher dimensional case, since the branch of modes originating from the massless fundamental mode seems to disappear at a finite value of the fermionic mass. However, both the approximation and the numerics indicate that this limit is singular, and the mode is lost before reaching the critical mass.
In $d=4$, although the analytical approximation predicts that arbitrarily small values of $\Im(\Omega)$ can be obtained at a finite critical $\eta$, the branch of modes in the numerical approach is lost before this point is reached.
In $d=5$, surprisingly, the analytical approximation of the spectrum matches the numerics very well for all values of the mass lower than the critical $\eta$. For mass values arbitrarily close to this critical point, in principle very small values of $\Im(\Omega)$ can be generated. These can be interpreted as quasistationary states with intermediate values of the fermionic mass. In this regime the field oscillates with a frequency $\Re(\Omega)\simeq \eta$.
Note that for larger values of $l$, the full numerical results are in fact closer to the spectrum predicted by the analytical formula (for $l=10$ the numerics deviate from the analytical formula by less than $5\%$, for all values of the mass and dimension considered). So the analytical approximation gives very reasonable results for $l\ge 10$.
As a final note in this section, we would like to point out that the results we have presented here coincide, within the expected precision, with the results obtained in the recent work [@KonoplyaKerr2017]. There the authors study the massive Dirac modes in the background of the Kerr black hole, and obtain similar results for the spectrum of the Dirac field in the background of a Schwarzschild black hole in the static limit.
The behavior for different overtone number $n$: numerical results for $l=10$ in five dimensions
-----------------------------------------------------------------------------------------------
As we commented in the previous section \[sec.large.kappa\], in the lowest order approximation and independently of the overtone number, one can see that $\Omega \rightarrow \kappa + \mathrm{i} \cdot 0$ for $\eta \rightarrow \eta_{00}^{d=5} = |\kappa|$, with $n<l$. However it is not clear if this feature is retained for all overtone numbers when one goes beyond the analytical approximation. To investigate this issue, in this section we explore how the overtone number $n$ affects the critical behaviour of the $l=10$ quasinormal modes in $d=5$ for $\mathrm{sgn}(\kappa)=+1$.
In Figure \[fig:d5nRIO\] we present the imaginary part vs. the real part of the frequency for different overtone numbers $n$. The circles are the numerical results from the third order WKB approximation (\[eqn:WKBeqn\]) with the full effective potential (\[eqn:completeeffpot\]), cross-checked with the CF method. The solid lines are the analytical values in the large-$\kappa$ approximation of the effective potential (\[eqn:eff.potential\]), also using the third order WKB approximation (\[eqn:WKBeqn\]). The analytical approximation is shown for the whole range of its validity. However, for $n\ge 1$, the branches of modes calculated from the numerics do not reach the $\Im{(\Omega)}=0$ value.
In Figure \[fig:d5nbeh\] we make a similar plot, showing the imaginary part of the frequency vs. the mass $\eta$ for different overtone numbers $n$. Again it can be seen that for $n\ge 1$, the branches of modes calculated from the numerics stop at a value of $\eta$ always below the critical value of $\eta$ predicted from the analytical approximation.
In the analytical approximation, for all displayed overtone numbers, the frequencies develop arbitrarily small imaginary parts at finite mass $\eta$. For the analytical approximation we can thus define a critical mass $\eta_c(n)$ for which $\Im (\Omega) \rightarrow 0$. One can observe that for all displayed overtone numbers $\eta_c(n) < \eta_{00}^{d=5} = \kappa = 11.5$.
However the numerical analysis using WKB and CF methods indicates that in fact the branch of quasinormal modes disappears before reaching this particular value of $\eta_c(n)$. Beyond this value of the fermionic mass, our current methods are not able to produce quasinormal modes with good enough precision.
Thus the indicated universal behaviour of the frequency is not completely retained by higher order WKB methods. In fact, as we have seen, the critical mass value $\eta_c(n)$ depends on the overtone number and is smaller than the value $\eta_{00}^{d=5}$ predicted by the lower order WKB method. Moreover, the full numerical approach indicates that the branch of modes disappears at even lower values of the fermionic mass [^1].
Numerical results for $l=0$
---------------------------
In this section we present the spectrum for angular quantum number $l=0$. We calculate the fundamental and first excited modes for $4 \le d \le 9$ with the continued fraction method.
In Figure \[fig:RIOl0n0\] we present $\Im(\Omega)$ vs $\Re(\Omega)$ for the fundamental mode. Note that all the modes calculated have negative imaginary part, meaning they also decay in time. Marked with black triangles are the quasinormal modes for $\eta=0$. Two branches of modes bifurcate from these points: one for $\mathrm{sgn}(\kappa)=+1$, shown with continuous lines, and one for $\mathrm{sgn}(\kappa)=-1$, shown with dashed lines. For fixed values of $\eta$, the branch with $\mathrm{sgn}(\kappa)=+1$ always presents smaller values of the real and imaginary parts of $\Omega$ than the branch with $\mathrm{sgn}(\kappa)=-1$. Also, one can see that as the dimension $d$ is increased, the fundamental mode increases both in $\Re(\Omega)$ and in $|\Im(\Omega)|$, and approximately by the same amount for a fixed value of the mass $\eta$.
When $\eta$ is increased, the absolute value of the imaginary part of the frequency $|\Im (\Omega)|$ becomes generally smaller. For $6 < d \le 9$, this seems to lead to modes that approach the real axis asymptotically, independently of the sign of $\kappa$. However, in four and five dimensions our results indicate that the modes stop existing at a certain value of $\eta$, right before hitting the real axis. These features are qualitatively very similar to the behaviour obtained in the large $l$ limit, and in particular for $l=10$ in the previous section (see Figure \[fig:RIOl10\]) and to the generic features of the ground state of a scalar field [@Zhidenko:2006rs]. Interestingly, in Figure \[fig:RIOl0n0\] we can observe that the branch of modes in $d$ dimensions with negative $\kappa$ seems to approach the branch of modes in $d+1$ dimensions with positive $\kappa$.
Here again the numerical methods we employ break down for very small values of $|\Im (\Omega)|$, and in practice we cannot generate arbitrarily small values of the imaginary part (especially in the positive $\kappa$ branches). As the mass approaches the critical value, the results produced from the equation for $\psi_1$ in relation (\[eqn:CF\]) deviate more and more from the results for $\psi_2$, and the depth of the continued fraction method has to be increased in order to stabilize the computed frequencies. We only show results that we are able to cross-check using both equations, with less than $0.5\%$ difference in the calculated $\Omega$ (except close to the critical points of the $d=4, 5$ cases with positive $\kappa$, where we relax this to $2\%$).
In Figure \[fig:Rbehl0n0\] we show the real part of the frequency as a function of the mass. In this figure we can clearly see that for large $\eta$, the branch of negative $\kappa$ of a given dimension $d$ approaches the branch of positive $\kappa$ in $d+1$ dimensions. This is thus also true for the imaginary part. Another feature we can observe in this figure is that, in the $\mathrm{sgn}(\kappa)=+1$ branch, the minimum value of the frequency no longer resides at the massless case, but at some configuration with non-zero fermionic mass. Interestingly, for $d=4, 5$, and for $d=6$ with $\mathrm{sgn}(\kappa)=+1$, the real part of the frequency can become smaller than the mass of the field, crossing the line $\Re(\Omega)=\eta$. This is different from what we observed for $l=10$ in Figure \[fig:d45\_Phase\]. Thus in some cases the Dirac field can be trapped in the gravitational field of the black hole.
As in the previous section, we would like to point out here that the results we have presented for the $d=4$ case coincide with the results obtained in the recent work [@KonoplyaKerr2017] within the required precision.
To conclude this section, we present some results regarding the first excitation ($n=1$) of the $l=0$ modes, although for large values of the mass the continued fraction method does not allow us to obtain the same precision as for the fundamental mode. In Figure \[fig:RIOl0n1\] we show $\Im(\Omega)$ vs $\Re(\Omega)$ for the first excited mode, and in Figure \[fig:Rbehl0n1\] we show the real part of $\Omega$ as a function of the mass for these modes. It is worth noting that some of the branches of the first excited mode of the spinor field possess the generic behaviour of a vector field mode, see for example [@Konoplya:2005hr]. In the positive $\kappa$ branches of the $d=4,\dots,8$ cases, increasing $\eta$ decreases the real part of the frequency $\Re( \Omega)$, while the imaginary part does not change much. In the negative $\kappa$ branches however, the real part increases, and only starts to decrease for relatively large values of the mass in the $d=4,\dots,7$ cases. Interestingly, the $d=8$ branch with negative $\kappa$, and both branches with $d=9$, seem to avoid the imaginary axis, so the similarity with the vector field modes is lost in these particular cases.
Conclusion {#S4}
==========
In this paper we have calculated the quasinormal modes of the massive Dirac field in the Schwarzschild-Tangherlini black hole in $d=4$ to $9$. We have implemented the calculation of the modes with two independent methods: the continued fraction method, and the third order WKB method. In addition, we have obtained an analytical approximation of the spectrum, which formally applies to the case of large fermionic mass and large angular quantum number. However in practice, we have seen that the approximation works rather well for arbitrary values of the mass, provided the angular quantum number is large enough.
As an example, we have investigated carefully the particular case of the $l=10$ fundamental mode. The frequency and the damping time of the mode increase to arbitrarily large values as the mass of the Dirac field increases. This is a feature that has been observed in other massive fields. Interestingly, for $d=4, 5$, the mode seems to disappear as the damping time rises, at a critical value of the mass with $\eta=\Re(\Omega)$. For $d=4$, the mode seems to disappear at a finite value of $\Im(\Omega)$, while for $d=5$ the damping time seems to grow arbitrarily as the critical value of the fermionic mass is reached. For $d\ge6$ however, the mode seems to exist for arbitrary values of the mass. These results, together with the analytical approximation obtained from the limit of large mass and angular quantum number, indicate that quasistationary perturbations with intermediate values of the mass and frequency can be found for $d=4, 5$ for large values of the angular quantum number, while quasistationary perturbations with very large values of the mass can be found for $d\ge6$, with also large values of the frequency.
In addition, the effect of the overtone number has been explored for the particular case of $d=5$ and $l=10$. Interestingly, the full numerical analysis deviates significantly from the eikonal approximation as the mass is increased, and indicates that the branches of excited modes reach a critical value of the fermionic mass where they cease to exist. This critical value of the mass decreases with the overtone number.
We also present results for $l=0$ and the first two overtone numbers, $n=0, 1$. The behaviour of the Dirac field was analogous to a scalar field for the fundamental mode, and similar to a vector field for the first excitation for $d < 8$ and $d = 8$ with positive $\kappa$. Also there exist gravitationally trapped modes with the real part of the frequency smaller than the mass $\Re(\Omega) < \eta$.
We were also able to observe that the spectrum of the massive spinor depends on the sign of $\kappa$. In general the inequality $\Re (\Omega_{\mathrm{sgn}(\kappa)=+1}) < \Re (\Omega_{\mathrm{sgn}(\kappa)=-1})$ for a fixed value of the mass $\eta > 0$ seems to hold.
The disappearance of the modes when the mass reaches a critical value may indicate the start of another branch of modes at higher values of the mass. This branch of modes could have very small values of the imaginary part. However, another numerical approach is probably necessary, since with our current methods we are not able to study such eigenmodes at large values of the fermionic mass. For instance, it may be more appropriate to change to another representation of the Dirac spinor.
Acknowledgements
================
The authors want to thank Jutta Kunz, Roman Konoplya and Alexander Zhidenko for discussions. CK and JLBS gratefully acknowledge support by the DFG funded Research Training Group 1620 “Models of Gravity”, and the FP7, Marie Curie Actions, People, International Research Staff Exchange Scheme (IRSES-606096).
[^1]: A similar behaviour with the overtone number $n$ seems to be present in the $d=4$ case. However the WKB and continued fraction methods do not allow us to obtain results with good enough precision, and another approach should be used to fully understand this case.
---
author:
- 'Wessel Valkenburg,'
- Bin Hu
bibliography:
- 'refs.bib'
title: 'Initial conditions for cosmological N-body simulations of the scalar sector of theories of Newtonian, Relativistic and Modified Gravity'
---
Introduction
============
The expansion history of the universe seems to be a smooth function of time, which can hence only provide a handle on a limited number of parameters in the cosmological model. In order to further quantify Inflation, Dark Matter and Dark Energy, and distinguish a variety of candidate models, hope is placed on probing the nonlinear growth of perturbations in the universe, considering the ongoing preparations for the Euclid satellite [@2014SPIE.9143E..0HL].
The current state of the art for nonlinear structure formation in the universe is the numerical simulation of gravitational clustering of a large fixed number of particles in a finite-sized box: N-body simulations. A plethora of codes solve Newtonian dynamics endowed with a friction term that accounts for the cosmic expansion ([*e.g.*]{} [@Teyssier:2001cp; @Springel:2005mi; @Bryan:2013hfa]). Progress is being made on relativistic simulations as well [@Adamek:2014xba]. The next layer of complexity is the addition of perturbations in the exotic matter mentioned above, Dark Energy, often appearing under the alternative moniker of Modified Gravity [@Llinares:2008ce; @Baldi:2008ay; @Zhao:2010qy; @Baldi:2013iza; @Llinares:2013jza].
[ In order to push simulations to the next level, accounting for dynamical modifications of gravity or Dark Energy at arbitrary redshifts, an obvious first step is to discuss the starting point of such simulations, which is the goal of this paper. ]{}
Numerical simulations are inevitably discrete in space and time, and it is not clear what the consequences of this discreteness are for the results, or how closely the results reflect nonlinear structure in a continuous universe [@Pen:1997up; @Baertschiger:2001eu; @Joyce:2004qz; @Sirko:2005uz; @Prunet:2008fv; @Joyce:2008kg; @Carron:2014wea; @Colombi:2014zga]. Such discreteness effects are not the subject of this paper. [ However, in Appendix \[sec:discrete\], we summarise the known methods for realising discrete samples of continuous fields, and we list the choices that the simulator needs to make.]{}
In most of the literature on simulations, priority is given to the details and technicalities of solving for the nonlinear dynamics. Setting the initial conditions is always done using Zel’dovich’s approximation (explained in the main text), sometimes up to second order [@Crocce:2006ve]. Most importantly, to our knowledge, [*all*]{} simulations set their initial conditions at a time when no perturbations in exotic matter are present. However, if one wants a more complete description, the full parameter range of the exotic models needs to be tested, including the range in which Dark Energy has significant dynamical perturbations (see [@Clifton:2011jh] for a review), and in which the quasi-static approximation for its perturbations breaks down [@Noller:2013wca; @Brito:2014ifa; @Sawicki:2015zya; @Winther:2015pta].
The aim of this paper is to describe all that is necessary to set the correct initial conditions for arbitrary cosmological simulations under arbitrary metric theories of gravity, varying from the inclusion of only dust to the inclusion of species that in their linear description are fully imperfect fluids. [ For the sake of clarity, we include discussions of previously known topics where necessary, providing the relevant references.]{} We do not address the subject of setting initial conditions for matter species that do not simply form a sheet in 6-dimensional phase space, such as for example standard-model neutrinos and photons above a certain resolution, which are described accurately by linear perturbations although not by fluid dynamics. [ We release a code, [FalconIC]{} [^1], which can be used to generate discrete initial conditions of any cosmological [*fluid*]{}. The code links against any version of both Boltzmann codes [camb]{} [@Lewis:1999bs] and [class]{} [@Blas:2011rf], including [EFTCamb]{} [@Hu:2013twa; @Raveri:2014cka], such that no separate running of those codes is necessary, and initial conditions for any fluid can be generated at arbitrary scales, of arbitrary size (fully parallelised using MPI and OpenMP), for arbitrary cosmological parameters. ]{}
Simulations discretise the cosmological fluids in a regime where the fluid description is still valid, up to the onset of shell crossing and the ensuing need for a particle description, when the sheet in phase space starts folding or even tearing. Therefore, at the first time steps, N-body simulations should reproduce the linear theory described by fluid dynamics. [ However, the fact that the linear discussion in this paper addresses (imperfect) fluids does not at all imply that these discrete realisations can only be used for fluids. In other words, the Effective Field Theory of Modified Gravity describes some matter which in the linear regime may look like an imperfect fluid. Particles whose free-streaming length is smaller than the simulation resolution, but whose dynamics is described as an imperfect fluid, such as standard-model neutrinos, can be realised just as well following our description.]{}
In summary, [ the following points are the ingredients for discrete realisations of arbitrary imperfect fluids]{},
- Any linearly perturbed quantity (‘charge’) can be translated into a displacement field of equal charge vertices, by a coordinate transformation whose Jacobian equals the original charge perturbations.
- The scalar quantity that defines the positions of particles with velocity $u^\mu$, for an arbitrary fluid, is $n^{\bare}\equiv-\int d\tau\, \nabla_i u^i$, where $i$ runs over spatial coordinates.
- For Newtonian simulations, positions are given by $n\equiv n^\bare / \sqrt{g^{(3)}}$, with $g^{(3)}$ the determinant of the spatial part of the metric.
- In the presence of pressure or heat flux, the particles have varying masses $m$ and internal energies $T$, defined by $$\begin{aligned}
m =& \frac{\bar \rho}{\bar n}\left( 1 + \Delta_\rho - \Delta_n \right) = \frac{\bar \rho}{\bar n}\left( 1 + \Delta_\rho^{\bare} - \Delta_n^{\bare} \right), \\
T =& \frac{\bar P}{\bar n}\left( 1 + \Delta_P - \Delta_n \right) = \frac{\bar P}{\bar n}\left( 1 + \Delta_P^{\bare} - \Delta_{n}^{\bare} \right),\end{aligned}$$ with $\Delta_a\equiv \delta a / \bar a$ for any quantity $a$ with background value $\bar a$.
- For adiabatic perturbations, all quantities are set by the same random seed multiplied by their respective transfer functions. Isocurvature perturbations are introduced by combining multiple random seeds.
- The linear displacement field for a dust fluid in a universe endowed with a scalar modification of gravity is scale dependent, and hence the trajectories of particles are not straight lines.
- The scalar modification itself can be described in both Lagrangian and Eulerian representations, whichever of the two may prove more convenient in nonlinear simulations.
[ The bending of Dark Matter trajectories at the linear level implies that simulation codes such as Ramses [@Teyssier:2001cp] or Enzo [@Bryan:2013hfa], which only take particle velocities as input, need to be adapted to include particle positions as well, since particle positions are no longer simply a time-dependent function times their velocities.]{}
In section \[sec:anyPlainDisp\] we start by explaining how an arbitrary density field is related to a displacement field, under the assumption of absence of vorticity[^2]. In section \[sec:mainIC\] we then show how to define the displacement field under an arbitrary metric for an arbitrary (im)perfect fluid, with the correct velocities emerging. In section \[sec:nrlim\], we briefly comment on the Newtonian approximation, necessary for Newtonian simulations. In section \[sec:qsa\] we show how particle trajectories of linear perturbations do not follow straight lines, already in the quasi-static approximation. Finally, in section \[sec:demgeftLPT\] we apply our prescription to a parameterisation of the Effective Field Theory of Modified Gravity, in synchronous comoving coordinates.
Throughout this paper, we will refer to the Conformal Newtonian gauge as the Longitudinal gauge, since we prefer to reserve the word Newtonian for Newtonian N-body simulations. We use units with the speed of light $c=1$, and we use the Einstein summation convention. All equations with perturbative quantities are expanded up to linear order.
Density–displacement duality\[sec:anyPlainDisp\]
================================================
![[*Approximating Zel’dovich*]{}: a discretised displacement field based on an image of Y. Zel’dovich plus gaussian noise, multiplying the displacement with increasing factors from left to right, starting at zero.[]{data-label="fig:face"}](facephase1 "fig:"){width="24.00000%"} ![](facephase2 "fig:"){width="24.00000%"} ![](facephase3 "fig:"){width="24.00000%"} ![](facephase4 "fig:"){width="24.00000%"}
A displacement field can be regarded as coordinate transformation from some (virtual) coordinate system with a constant charge per coordinate volume to a (physical) coordinate system with a perturbed density (of charges, mass, etc.). When the perturbations are small, and the unperturbed quantity is denoted by an overbar, the coordinate transformation to obtain the displacement field that is associated with some density field $\rho(\vec x) = \bar \rho (1+ D(\vec x))$ can be defined by, $$\begin{aligned}
\bar \rho (1+D(\vec x)) d^{N-1}x =& \bar \rho d^{N-1} x' ,\\
(1+D(\vec x)) =& \left| \frac{\partial x^{i'}}{\partial x^{j}} \right| ,\end{aligned}$$ such that an observer at fixed $\vec x'$ is co-moving with the charges $\rho$. For small $D(x^k)$ and in absence of vorticity, the coordinate transformation is solved by $$\begin{aligned}
x^{i} =& x^{i'} - \frac{\vec \nabla' }{{\nabla'}^2} D(\vec x') \nonumber\\
=& x^{i'} - \frac{\vec \nabla }{\nabla^2} D(\vec x) + \mathcal{O}(D^2),\end{aligned}$$ where ${\nabla'}^2 \equiv \sum_{i'} \partial^2_{i'}$ and $$\begin{aligned}
\frac{1}{\nabla^2}D(\vec x)=\int \frac{d^{N-1}k\,d^{N-1}\tilde x}{\left(2\pi\right)^{N-1}} \,\, k^{-2} D\left(\vec {\tilde x}\right)\,\, e^{i\vec k\cdot\left(\vec x - \vec{ \tilde x }\right)} \end{aligned}$$ This relation is independent of the theory that gives rise to the density field[^3], as exemplified in Fig. \[fig:face\]. However, since we set vorticity to zero, the Poisson equation that determines the Newtonian potential is equal to the equation that gives the deformation tensor $\partial x^i/\partial x^{j'}$. One can identify the scalar $D(\vec x)$ with the Newtonian potential, and obtain the Zel’dovich Approximation [@Novikov:2010ta; @Zeldovich:1969sb].
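To make the duality concrete, the displacement $-(\vec\nabla/\nabla^2)D(\vec x)$ can be evaluated spectrally on a periodic grid. The following is a minimal NumPy sketch (function name and grid conventions are ours, not those of the released [FalconIC]{} code):

```python
import numpy as np

def displacement_from_density(D, box=1.0):
    """Spectral solution of psi = -(grad / laplacian) D on a periodic grid,
    so that div(psi) = -D: the linear density--displacement duality."""
    n = D.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0              # regularise the zero mode
    Dk = np.fft.fftn(D)
    Dk[0, 0, 0] = 0.0              # the mean density carries no displacement
    # In Fourier space  -(grad / laplacian) -> -(i k_i)/(-k^2) = i k_i / k^2
    return [np.fft.ifftn(1j * ki / k2 * Dk).real for ki in (kx, ky, kz)]
```

For a single plane wave $D = \epsilon\cos(kx)$ this returns $\psi_x = -(\epsilon/k)\sin(kx)$, whose divergence is $-D$, as required.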
In the $x'$ coordinate system, the equal-weight particles are not displaced, such that this is a Lagrangian coordinate system[^4]: the coordinates are at rest with the charges, which do not necessarily have to be masses.
If the density field has a continuous time dependence, $D(\vec x) \rightarrow D(\tau, \vec x)$, the velocities $\vec v^{(c)}$ of the virtual equal-weight particles are simply given by the time derivative of the field, $$\begin{aligned}
\vec v^{(c)}(\tau, \vec x) = - \frac{\vec \nabla }{\nabla^2}\partial_\tau D(\tau, \vec x).\label{eq:pseudoVelField}\end{aligned}$$
When one is dealing with a curved manifold, care needs to be taken with the meaning of the displacement and density fields. For the displacements to correspond to the coordinate positions of particles on an arbitrary manifold, the density field used for the coordinate transformation must be the ‘bare’ density of particles in coordinate space, not taking into account the curvature of space. If one were to take the proper density perturbations as the generator for the coordinate transformation, one would obtain the displacement of particles in a (purely spatially) transformed coordinate gauge, in which the trace of the perturbations of the spatial metric vanishes.
Instead of employing the purely spatial density–displacement duality above, one may perform a similar coordinate transformation using the proper energy density on the spacetime, $$\begin{aligned}
\rho(x)\sqrt{-g}d^Nx = \bar\rho(1+D(x))\sqrt{-g}d^Nx = \bar\rho\sqrt{-g'}d^Nx',\end{aligned}$$ in which case the $x'$ coordinate system acquires the meaning of a synchronous comoving gauge, associated with the species of $\rho$, as explained in [@Rampf:2013ewa; @Rigopoulos:2013nda; @Rigopoulos:2014rqa]. Such a transformation makes explicit where General Relativity comes into play compared to Newtonian dynamics, but is not useful for the purpose of this work.
When a problem is discretised [ (see Appendix \[sec:discrete\])]{}, the Eulerian description amounts to following a density field on a regular grid in $\vec x$ at positions $\vec x_{ijk}$, while the Lagrangian description amounts to following a displacement field for points on a regular grid in $\vec x'$, which translates to a curved mesh in $\vec x$-space. Here ‘regular’ does not necessarily mean Cartesian, because for example glass initial conditions [@White:1994bn] or alternatives [@Hansen:2006hj] take an irregular partitioning of $\vec x'$-space.
In general, at any time $dM(\tau, \vec x) = \bar\rho d^3x' = \bar\rho \left| \frac{\partial x^{i'}}{\partial x^{j}} \right| d^3x$. This expression continues to hold in situations where $\vec x'(\vec x)$ is not single valued and $\delta(x) \geq \mathcal{O}(1)$, [*i.e.*]{} the phase-space sheet has folded [@Abel:2011ui]. It must be noted that its evaluation becomes nontrivial then.
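A one-dimensional numerical sanity check of this mass element, well before any folding, can be sketched as follows (a toy setup under our own conventions, single cosine mode):

```python
import numpy as np

def jacobian_vs_density(n=256, box=1.0, eps=1e-3):
    """Check that the mass factor |dx'/dx| matches 1 + D to linear order,
    i.e. dM = rho_bar |dx'/dx| dx  ~  rho_bar (1 + D) dx for small D."""
    k = 2.0 * np.pi / box
    xp = np.arange(n) * box / n            # Lagrangian (primed) grid
    psi = -(eps / k) * np.sin(k * xp)      # displacement with d(psi)/dx' = -D
    x = xp + psi                           # Eulerian positions of equal-mass tracers
    jac = np.gradient(xp, x)               # numerical |dx'/dx|
    D = eps * np.cos(k * x)                # density contrast at the tracer positions
    return float(np.max(np.abs(jac - (1.0 + D))[2:-2]))
```

The returned maximum deviation is of order $\epsilon^2$ plus discretisation error, confirming the linear-order identity.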
Linear displacement and velocity fields in metric theories of Gravity\[sec:mainIC\]
===================================================================================
Geometry
--------
The next section applies to any metric theory of gravity, with a covariant derivative defined by, $$\begin{aligned}
\nabla_\mu v^\nu =& \partial_\mu v^\nu + \Gamma^{\nu}_{\mu\lambda}v^\lambda,\\
\Gamma^{\nu}_{\mu\lambda} =& \tfrac{1}{2} g^{\nu\alpha}\left(\partial_{\mu}g_{\lambda\alpha} + \partial_{\lambda} g_{\alpha\mu} - \partial_{\alpha} g_{\mu\lambda} \right),\end{aligned}$$ with the Christoffel connection $\Gamma^{\nu}_{\mu\lambda}$. We do not refer to the equations that source the metric with a matter configuration. That is, we define all modifications of gravity as those that change the Einstein equations and / or those that can be written as additional matter content inside the energy-momentum tensor.
Hydrodynamics
-------------
At the linear level, and hence on the Cauchy surface of N-body simulations, the cosmic density fields under consideration can be described as fluids. Let us therefore first lay out the definitions of relativistic hydrodynamics. All contents of this section can be traced back to Refs. [@1959flme.book.....L; @Kodama:1985bj; @Ma:1995ey; @Sawicki:2012re].
This paper focusses on dynamics entirely attributable to scalar quantities. All quantities are perturbed about a background Friedmann-Lemaître-Robertson-Walker solution, denoted by overbars, such that background quantities have only time dependence while perturbed quantities (indicated by $\Delta$) have time and space dependence. Hereafter we drop the explicit time and space dependence of most functions. For a fluid with four-velocity $U^{\mu} = dx^\mu / \sqrt{-ds^2} $, $U^\mu U_\mu = -1$, define the transverse projector[^5], $$\begin{aligned}
\perp_{\mu\nu}=g_{\mu\nu}+U_{\mu}U_{\nu},\end{aligned}$$ which projects into the plane orthogonal to $U^{\mu}$, the spatial slices for an observer comoving with the fluid. The unperturbed $\bar U^i=0$, and $\partial_i \delta U^i \equiv \theta$.
The energy momentum tensor $T_{\mu\nu}$ then carries the following information,
- energy density $\rho = \bar\rho + \delta \rho = \bar\rho(1+\Delta_\rho) \equiv U^{\mu}U^{\nu}T_{\mu\nu}$,
- pressure $P = \bar P +\delta P= \bar P(1+\Delta_P)\equiv \tfrac{1}{3}\perp^{\mu\nu}T_{\mu\nu}$,
- energy flow (or heat transfer) $q^{\mu} \equiv \perp^{\mu\nu}U^{\lambda}T_{\nu\lambda}$,
- anisotropic shear perturbation $\Sigma^{\mu\nu}$,
and is decomposed as $$\begin{aligned}
T^{\mu}_{{\phantom{\mu}}\nu} =& \rho U^{\mu} U_{\nu} + P \perp^{\mu}_{{\phantom{\mu}}\nu} + U^{\mu} q_{\nu} + U_{\nu} q^{\mu}+ \Sigma^{\mu}_{{\phantom{\mu}}\nu}.\end{aligned}$$ Owing to the scalar nature of the system one can further simplify with $$\begin{aligned}
q^\mu=&-a(\tau)\left(\rho + P\right)\perp^{\mu\nu}\frac{\nabla_{\nu}}{\nabla^2}q,\\
\Sigma^{\mu\nu}=& -\tfrac{3}{2}a(\tau)^2\left(\rho + P\right)\left(\perp_{\mu\lambda}\perp_{\nu\alpha}-\tfrac{1}{3}\perp^{\mu\nu}\perp^{\lambda\alpha}\right) \frac{\nabla_{\lambda}\nabla_{\alpha}}{\nabla^2}\sigma, \end{aligned}$$ where we chose pre-factors for convenience, such that $\sigma$ corresponds to the definition in Ma & Bertschinger [@Ma:1995ey] when $q=0$.
In an arbitrary gauge, the scalar part of the metric can be written as [@Bardeen:1980kt], $$\begin{aligned}
\frac{ds^2}{a(\tau)^2}=-(1+2A) d\tau^2 - 2 B_i d\tau\,dx^i + \left[(1+2H_L)\eta_{ij} + 2h^T_{ij} \right]dx^idx^j ,\end{aligned}$$ where $\eta_{\mu\nu}$ is the Minkowsky metric, and $$\begin{aligned}
B_i =& \int \frac{d^3k}{\left(2\pi\right)^3}\frac{k_i}{k}B_{{\vec k}} e^{i\vec k \cdot \vec x},\\
h^T_{ij} =& \left[\frac{\partial_i\partial_j}{\nabla^2} - \tfrac{1}{3}\eta_{ij}\right]H_T,\end{aligned}$$ where all scalar potentials $A$, $B$, $H_L$ and $H_T$ are small and spacetime dependent[^6].
Any number density in the frame of an observer comoving with the fluid at velocity $U^{\mu}$ is a scalar $n=\bar n + \delta n = \bar n(1 + \Delta_n)$, and the number transport is $n^{\mu} = - n U^{\mu}$. If the number is conserved, we have at the linear level and independent of whether the fluid’s energy is conserved, $$\begin{aligned}
\hspace{2cm}
\nabla_\mu\, n^{\mu} =& \, 0, \hspace{2cm} \mbox{[number conservation]}\\
\bar n \propto & a^{-3}, \\
\dot \Delta_n =& -(\theta + 3 \dot H_L),\label{eq:ParticleConservation} \hspace{1.63cm} \mbox{[linear order]}\end{aligned}$$ where an overdot denotes a derivative with respect to conformal time $\tau$. The number can be associated with microscopic particles, but just as well with macroscopic simulation vertices (also often referred to as ‘particles’).
The energy conservation equation $\nabla_\mu T^{\mu\nu} = 0$ corresponds to the continuity and Euler equations, at linear level in Fourier space[^7] $$\begin{aligned}
\dot{\bar\rho} =& - 3\mathcal{H}\left(\bar\rho + \bar P\right),\\
\dot\Delta_\rho =& \left(1 + w\right)\left( q - \theta - 3\dot H_L\right) + 3\mathcal{H}\left(w - c_s^2\right)\Delta_\rho,\label{eq:energyConservation}\\
\dot \theta + k \dot B - \dot q=& k^2 A +k^2 \Delta_P - \mathcal{H} (1-3w) (\theta + kB - q)-\frac{\dot w }{w+1}(\theta + kB - q) - k^2\sigma,\label{eq:velocityEquation}\end{aligned}$$ where $\mathcal{H}\equiv \dot a(\tau)/a(\tau)$ and $w(\tau)\equiv\frac{\bar P}{\bar \rho}$. Note that at the linear level, the continuity equation is only sensitive to the trace of the spatial perturbations of the metric, $3H_L$. The scalar potentials $A$ and $B$ only affect the Euler equation for the velocities, while the transverse traceless potential $H_T$ does not enter the energy conservation equations at the linear level.
Constant or variable mass per vertex during the fluid phase
-----------------------------------------------------------
The phase-space of a fluid is discretised for the purpose of a numerical simulation, upon the start of which a particle picture may be followed, which goes beyond the fluid description when perturbations become nonlinear and trajectories start crossing. This means that a mesh is laid out in real space with a velocity associated to each vertex of the mesh, the ‘particles’. [ For a summary of the existing methods for discretisation, see Appendix \[sec:discrete\].]{}
Although it is not a fundamental condition, all numerical cosmological simulations known to the authors are based on a fixed number of vertices in phase-space (particles), which is preserved throughout the simulation. This is not to be confused with the Poisson solver, which during the simulation can at any time choose an adaptive mesh to approximate the gravitational potential(s) at the desired resolution, given the positions of the fixed number of particles.
If the vertices of the fluid are comoving with the velocity field of the fluid, then their proper number density follows Eq. . Comparing Eq. to Eq. , it is clear that the special case of a fluid with constant equation of state $\dot w=0$, constant sound speed $w=c_s^2$ which for this case is equal to $c_s^2=\delta P /\delta\rho$, and zero energy flow $q=0$, can be modelled discretely by only accounting for the positions and velocities of vertices, associating an energy density to each vertex with, $$\begin{aligned}
\Delta_\rho = (1+w) \Delta_n. \hspace{2cm} \mbox{[$\dot w = q = 0$, $w=c_s^2$, $\nabla_{\mu} T^{\mu\nu}=0$ ]}\end{aligned}$$ Note that the fluid need [*not*]{} be perfect, as $\sigma$ need not be zero for this condition, which is why the statement above may apply to other fluids than dust. [ The quantity $\Delta_\rho$ is to be evaluated on the discrete vertices on which initial conditions are generated.]{}
If any of the three conditions is broken during the time span of the simulation, one needs to model the dynamical energy density of the vertices. In other words, the ‘particles’ do not have a ‘constant mass’, and energy density and pressure at each vertex are given by equating $\rho(\tau, \vec x) = m(\tau, \vec x) n(\tau, \vec x)$ and $p(\tau, \vec x) = n(\tau, \vec x)T(\tau, \vec x) $, where $m$ can be thought of as mass and $T$ as temperature, although strictly speaking they can have other meanings, $$\begin{aligned}
m =& \frac{\bar \rho}{\bar n}\left( 1 + \Delta_\rho - \Delta_n \right), \\
T =& \frac{\bar P}{\bar n}\left( 1 + \Delta_P - \Delta_n \right) = \frac{\bar \rho}{\bar n}\left(w + \frac{\delta P}{\delta\rho}\Delta_\rho - w\Delta_n\right).\end{aligned}$$ [ It should be clear now that $m$ and $T$ are evaluated at the discrete set of vertices (‘particles’).]{} One is free to set for example $\frac{\bar \rho}{\bar n} = 1$ at some given time. Moreover, depending on the extent to which a gauge is fixed, one may have further freedom to set $\Delta_m = 0$ at a convenient time of choice, which amounts to different choices of cutting the density field into particles.
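These per-vertex quantities map directly onto code; a minimal sketch with our own (hypothetical) naming:

```python
import numpy as np

def vertex_mass_temperature(drho, dP, dn, rho_bar, P_bar, n_bar=1.0):
    """Per-vertex mass m = (rho_bar/n_bar)(1 + D_rho - D_n) and internal
    energy T = (P_bar/n_bar)(1 + D_P - D_n), with the linear contrasts
    drho, dP, dn sampled at the vertex positions."""
    m = (rho_bar / n_bar) * (1.0 + np.asarray(drho) - np.asarray(dn))
    T = (P_bar / n_bar) * (1.0 + np.asarray(dP) - np.asarray(dn))
    return m, T
```

In the special case $\Delta_\rho=\Delta_n$ with $\bar P=0$, a dust-like cut of the density field into equal-mass particles, the masses come out uniform, as expected.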
Bare densities and velocities\[sec:bare\]
-----------------------------------------
A simulation acts in a coordinate space. The bare density of vertices is at any time step given by the number of vertices inside a given volume in coordinate space [@Adamek:2013wja]. This is related to the proper density by, $$\begin{aligned}
n^{\bare} d^3x = n \sqrt{g^{(3)}} d^3x,\end{aligned}$$ where $g^{(3)} = \det g_{ij} = a(\tau)^6(1+6H_L) + \mathcal{O}(\epsilon^2)$, the determinant of the spatial part of the metric, the three-metric, such that[^8] $$\begin{aligned}
\Delta_n = \Delta_n^{\bare} - 3H_L.\label{eq:dnbare}\end{aligned}$$ Thus, the displacement fields are expressed in terms of coordinates (and not proper distances), such that they must be generated in coordinate space based on the bare densities. Density fields, on the other hand, continue to express proper densities and need no notion of bare density.
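Since mixing up bare and proper contrasts is an easy source of error when writing an initial-conditions generator, we spell out the conversion explicitly (trivial, with our own naming):

```python
def bare_density_contrast(dn_proper, H_L):
    """Bare (coordinate-space) number-density contrast from the proper one:
    D_n^bare = D_n + 3 H_L, because n^bare d^3x = n sqrt(g^(3)) d^3x and
    sqrt(g^(3)) = a^3 (1 + 3 H_L) at linear order."""
    return dn_proper + 3.0 * H_L
```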
A displacement field based on number density $n$, must be consistent with the theory at any time step. This implies that the motion along a displacement field at different time steps must reproduce the correct velocities. Indeed, we find, $$\begin{aligned}
\theta^{\bare} = -\dot \Delta_n^{\bare} = -\dot\Delta_n - 3\dot H_L = \theta, \label{eq:baredot}\end{aligned}$$ as per Eq. and Eq. .
Note that the definitions of mass and temperature are unaffected by the notion of bare density, $$\begin{aligned}
m =& \frac{\bar \rho}{\bar n}\left( 1 + \Delta_\rho - \Delta_n \right) = \frac{\bar \rho}{\bar n}\left( 1 + \Delta_\rho^{\bare} - \Delta_n^{\bare} \right), \\
T =& \frac{\bar P}{\bar n}\left( 1 + \Delta_P - \Delta_n \right) = \frac{\bar P}{\bar n}\left( 1 + \Delta_P^{\bare} - \Delta_{n}^{\bare} \right) .\end{aligned}$$
In summary, particle positions in a [*relativistic*]{} simulation are obtained from applying the density–distance duality to bare number densities, while particles masses and internal energies are set through the equations above. [ Again, these quantities are to be evaluated on the discrete positions of the simulation particles.]{}
Adiabatic and isocurvature initial conditions for multiple fluids and fields
----------------------------------------------------------------------------
Now that we have shown in brief how to relate a solution to the stress-energy tensor to a displacement field for a quantity described in a Lagrangian picture, in arbitrary coordinates, we can address the problem of discretising Lagrangian quantities and Eulerian quantities simultaneously, even though the perturbations are strictly at different coordinates. The coordinates $\{t, \vec x\}$ are the coordinates of the simulation. An Eulerian quantity’s phase space is sampled and simulated on a regular grid in $\vec x$-space, [*i.e.*]{} on $\vec x_{ijk}$, with indices $\{i,j,k\}$ labeling discrete positions in dimensions $\{x^1, x^2, x^3\}$ respectively. Tracers of the phase space of a Lagrangian quantity are on a regular grid in that quantity’s homogeneous-coordinate-density system (“equal-charge tracers”), $\vec y_{ijk}$, with indices $\{i,j,k\}$ labeling discrete positions in dimensions $\{y^1, y^2, y^3\}$. The coordinates are related by $\vec x = \vec y + \vec \coordDisp$, such that a regular grid in one space becomes a curved mesh in the other; the grid points in both spaces do not coincide.
How does one generate realisations of initial conditions in real space, when multiple grids are present, as many as there are Lagrangian quantities plus one for all Eulerian quantities? The short answer is: [ the same random numbers can be used on all grids]{}, as long as displacements are sufficiently small. In a more General Relativistic language: as seeds of linear perturbations, [ the random numbers are gauge independent [ ($\vec \xi$ in Appendix \[sec:discrete\])]{}]{}. This follows straightforwardly from the argument in section \[sec:anyPlainDisp\], which can be summarized by $\vec x_{ijk} = \vec y_{ijk} + \vec \coordDisp(\vec y_{ijk}) = \vec y_{ijk} + \vec \coordDisp(\vec x_{ijk}) + \mathcal{O}(\epsilon^2) $, where $\epsilon$ denotes all quantities that are assumed small, being displacements, potentials, densities and so on.
By virtue of the closed set of linear differential equations that describes the system in the linear regime, $$\begin{aligned}
{\mathbf M}\left(\tau, \vec k\right) {\mathbf f}_{\vec k}(\tau) = 0,\label{eq:genericODE}\end{aligned}$$ where $\mathbf{M}\left(\tau, \vec k\right)$ denotes a matrix with differential operators in $t$ and nonlinear functions in $\vec k$, and ${\mathbf f}_{\vec k}(\tau)$ denotes the set of dynamic degrees of freedom, all random input in a realisation of initial conditions is encoded in the initial conditions of the differential equations. That is, the differential equations are deterministic, and the solution changes linearly with the initial conditions, $$\begin{aligned}
{\mathbf f}_{\vec k}(\tau) = \left(\begin{array}{c} D_1 \left(\tau, \vec k\right) f^{\rm ini}_{1,\vec k} \\ \ldots \\ \ldots \\ D_m \left(\tau, \vec k\right) f^{\rm ini}_{m,\vec k} \end{array} \right),\end{aligned}$$ where ${\mathbf D}\left(\tau, \vec k\right)$ solves the system of equations . Generating one realisation at any time in cosmic history, amounts to choosing random numbers ${\mathbf f}^{\rm ini}_{\vec k}$ that in the real universe actually are drawn at the hot big bang, for example by the inflaton as quantum correlations decohere and become classical. [ The quantity ${\mathbf f}^{\rm ini}_{\vec k}$ can be identified with $\vec \xi$ in Appendix \[sec:discrete\].]{}
A realisation is entirely described by its Fourier transform, regardless of whether the random numbers are drawn in Fourier space or in real space (as in [@Bertschinger:2001ng][ , see Appendix \[sec:discrete\]]{}), and regardless of whether the distribution is gaussian or not. All quantities in the system are related linearly by functions of wavenumber $\vec k$, because we consider the system at the linear regime. In practice, the Fourier transform of the perturbations in each quantity are a time and space dependent amplitude, multiplied by a normal Gaussian random number (with or without correlations, depending on the type of initial conditions).
In summary, in the case of adiabatic initial conditions, all quantities in the vector ${\mathbf f}_{\vec k}(\tau)$ share the same random seed, $$\begin{aligned}
{\mathbf f}_{\vec k}(\tau) = \left(\begin{array}{c} D_1 \left(\tau, \vec k\right) \\ \ldots \\ D_m \left(\tau, \vec k\right)\end{array} \right) f^{\rm ini}_{\vec k} , \hspace{2cm} \mbox{[adiabatic]}\end{aligned}$$ regardless of whether these quantities describe a lagrangian displacement field in a mesh which appears curved in Eulerian coordinates, or whether these quantities are on a regular Eulerian grid. Already in a $\Lambda$CDM universe, baryons and Cold Dark Matter have slightly different transfer functions, which should be taken into account when generating their initial conditions based on the same random seed, as first applied in [@Bird:2010mp]. Owing to the linearity of the system, any type of isocurvature perturbations can then be expressed as, $$\begin{aligned}
{\mathbf f}_{\vec k}(\tau) = \left(\begin{array}{c} D^{(1)}_1 \left(\tau, \vec k\right) \\ \ldots \\ D^{(1)}_m \left(\tau, \vec k\right)\end{array} \right) f^{\rm(1), ini}_{\vec k} + \left(\begin{array}{c} D^{(2)}_1 \left(\tau, \vec k\right) \\ \ldots \\ D^{(2)}_m \left(\tau, \vec k\right)\end{array} \right) f^{(2),\rm ini}_{\vec k} , \hspace{2cm} \mbox{[adiabatic + isocurvature]}\end{aligned}$$ with the superscripts distinguishing the transfer functions of the two modes, and so on. These statements hold for any type of initial seed, whether it is gaussian or not. Notably, in [@Burrage:2015lla] it is pointed out that precisely such a scalar modification of gravity may acquire its own spectrum of perturbations, giving rise to isocurvature perturbations.
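The seed bookkeeping can be sketched as follows, with constant per-species transfer factors standing in for the $k$-dependent functions $D_i(\tau, \vec k)$ that a Boltzmann code would supply (all names are ours):

```python
import numpy as np

def realise_fields(transfer_ad, transfer_iso=None, shape=(32, 32, 32), seed=1):
    """One white-noise seed shared by all species (adiabatic mode); an
    optional second, independent seed adds an isocurvature component.
    Each species' field is its transfer factor times the common seed(s)."""
    rng = np.random.default_rng(seed)
    xi_ad = rng.standard_normal(shape)            # shared adiabatic seed
    fields = [T * xi_ad for T in transfer_ad]
    if transfer_iso is not None:
        xi_iso = rng.standard_normal(shape)       # independent isocurvature seed
        fields = [f + T * xi_iso for f, T in zip(fields, transfer_iso)]
    return fields
```

Adiabatic-only realisations of two species are then perfectly correlated, differing only by their transfer functions, exactly as in the equations above.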
Non-relativistic initial conditions\[sec:nrlim\]
================================================
Non-relativistic limit
----------------------
The non-relativistic limit for perturbations in an expanding universe is obtained when velocities are small compared to the speed of light. Furthermore, the Newtonian limit is obtained when pressure is small, in which case the linearised perturbations of the fluid are described by [@1980lssu.book.....P], $$\begin{aligned}
\dot \Delta_{\rho} =& - \theta + q, \hspace{2cm} \mbox{[Newtonian, $\sigma=0$, $w=0$, $c_s^2\ll 1$]}\\
\dot \theta - \dot q=& k^2 \phi + c_s^2 k^2 \Delta_\rho - \mathcal{H} (\theta - q),\end{aligned}$$ where we continue employing the usual co-expanding[^9] coordinate system with conformal time. However, these equations neglect the effect of pressure on the energy density present in relativity (associated to the work needed to compress a fluid with pressure). The equations can be modified to obtain the non-relativistic limit that has the correct continuation to the General Relativistic equations [@Lima:1996at], which in practice amounts to removing all references to geometry and installing a single gravitational potential, $$\begin{aligned}
\dot \Delta_{\rho} =& (1+w) (q - \theta), \hspace{2cm} \mbox{[Non-relativistic, $\sigma=0$, $w=c_s^2\neq0$, $\dot w = 0$]}\label{eq:nreom}\\
\dot \theta - \dot q=& k^2 \phi +\frac{c_s^2 }{ (w+1)}k^2 \Delta_\rho - \mathcal{H} (\theta - q). \label{eq:nreomtheta}\end{aligned}$$ The Newtonian limit is necessary for setting initial conditions for Newtonian simulations. Power spectra for the distribution of densities, velocities and potentials are however readily obtainable from solvers of relativistic Boltzmann equations, such as [Class]{} [@Blas:2011rf] and [Camb]{} [@Lewis:1999bs].
One could use the gauge freedom to fix the gauge such that Eqs. (\[eq:nreom\], \[eq:nreomtheta\]) hold at all scales [@Flender:2012nq; @Rampf:2013dxa]. We briefly elaborate on that in Section \[sec:gauges\]. In this subsection we focus however on the weak field limit, in which initial conditions for Newtonian N-body simulations are inevitably set. An approximate translation from Newtonian large-scale results to relativistic interpretations can be found in [@Green:2011wc].
The proof that Newtonian gravity is the weak field limit of General Relativity is textbook material, where it is usually shown that the equations of motion in the longitudinal gauge ($H_T=B=0$) reproduce the non-relativistic equations of motion , although the same can be proven in the comoving gauges ($\theta=B$) [@1980lssu.book.....P]. The density perturbations of both these gauges, computed in terms of quantities of an arbitrary gauge, are [@Bardeen:1980kt], $$\begin{aligned}
\Delta_\rho^{\comoving} =& \Delta_\rho + 3(1+w)\frac{\mathcal{H}}{k}\left(\theta - B\right),\\
\Delta_\rho^{\longitudinal} =& \Delta_\rho + 3(1+w)\frac{\mathcal{H}}{k}\left(\frac{\dot H_T}{k} - B\right).\end{aligned}$$ The pre-factor $\mathcal{H}/k$ is by definition a measure of the smallness of velocities in the problem[^10], such that $\mathcal{H}/k \rightarrow 0$ gives the lowest order terms in the Newtonian limit. In other words, the density perturbation computed in [*any*]{} gauge reduces to the same density perturbation in the non-relativistic limit, and is a solution to the set Eqs. (\[eq:nreom\], \[eq:nreomtheta\]), $$\begin{aligned}
\Delta_\rho = \Delta_\rho^{\comoving} + \mathcal{O}(\tfrac{v}{c}) = \Delta_\rho^{\longitudinal} + \mathcal{O}(\tfrac{v}{c}).\end{aligned}$$ This means that for the non-relativistic limit, one can use the density perturbations computed in [*any*]{} gauge (also gauges in which $\theta=0$), and set the velocities by, $$\begin{aligned}
\theta^{\newt} =& \frac{-\dot \Delta_\rho}{1+w}. \hspace{2cm} \left [\frac{3(1+w)\mathcal{H}}{k} \ll 1 \right]\end{aligned}$$ More generally, this equation holds under any extension of general relativity (modified theory of gravity), provided that the fluid in question satisfies $\dot \Delta_{\rho} = -(1+w) \theta$ with $\dot w=0$.
To summarise:
- for a Newtonian simulation, positions are obtained from the density–displacement duality applied to $\Delta_\rho$, and velocities from $\dot\Delta_\rho$,
- while for a relativistic simulation, the density–displacement duality is applied to bare quantities, $\Delta_n^{\bare}$ and $\dot\Delta_n^{\bare}=-\theta$.
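The Newtonian velocity prescription above can be sketched numerically. The snippet below (a minimal sketch; the function name `theta_newtonian` and the toy growing mode $\Delta_\rho\propto\tau^2$ of matter domination are our illustrative choices, not output of a Boltzmann solver) obtains $\theta^{\newt}=-\dot\Delta_\rho/(1+w)$ by finite-differencing a tabulated density perturbation:

```python
import numpy as np

def theta_newtonian(tau, delta_rho, w=0.0):
    """Newtonian velocity divergence theta = -d(Delta_rho)/dtau / (1+w).

    tau       : conformal-time grid (1-D array)
    delta_rho : Delta_rho(tau) for one Fourier mode, computed in *any*
                gauge (gauge differences are O(H/k) and drop out here)
    w         : constant equation of state (w = 0 for dust)
    """
    ddelta_dtau = np.gradient(delta_rho, tau, edge_order=2)
    return -ddelta_dtau / (1.0 + w)

# toy check: growing mode in matter domination, Delta ~ tau^2,
# for which theta = -2 Delta / tau = -2 tau exactly
tau = np.linspace(1.0, 10.0, 2001)
delta = tau**2
theta = theta_newtonian(tau, delta)
assert np.allclose(theta, -2.0 * tau, rtol=1e-8)
```

In practice `delta_rho` would be read from a Boltzmann code such as [Class]{} or [Camb]{}, with the derivative taken on the code's own time grid.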
Gauges and the Newtonian limit\[sec:gauges\]
--------------------------------------------
From comparing Eqs. (\[eq:energyConservation\], \[eq:velocityEquation\]) to Eqs. (\[eq:nreom\], \[eq:nreomtheta\]), it is obvious that a gauge in which $H_L=0$ reproduces the same linear equations in General Relativity as in Newtonian gravity. [^11] It is tempting to believe that Newtonian simulations can hence be used on relativistic distances $d$ such that $d\,\mathcal{H} = \mathcal{O}(1)$. One strong hint that this is the case comes from the fact that linear perturbation theory agrees well with observations of large scale structure in the universe, suggesting that small-scale nonlinearities do not affect the large scale dynamics. The growth of structure is hierarchical: small scales enter the nonlinear regime first, while large scales continue to evolve linearly. Moreover, all modes follow a sequence of (1) relativistic dynamics (super-Hubble), (2) Newtonian linear dynamics, (3) nonlinear dynamics [@Rigopoulos:2013nda]. While this suggests that long wavelength modes in Newtonian N-body simulations are solved for properly, it does not at all mean that the full nonlinear dynamics of the short wavelength modes are the same for relativistic and Newtonian systems. What is needed is a proof that the fully nonlinear equations of Newtonian gravity and General Relativity agree, even when taking into account all scales (since obviously, when any scale goes nonlinear, formally the entire system is nonlinear). It is often claimed that even the nonlinear density solutions only source the relativistic potentials at the linear level, but as pointed out in [@Green:2011wc], it is inconsistent to linearise the left-hand side (the gravity part) of the Einstein equations without linearising the right-hand side (the matter part). When the matter part contains nonlinear contributions, the average of the matter density will no longer agree with the definition of the background matter density around which perturbation theory was set up.
Properly taking this mismatch into account, amounts to the Buchert formalism (see Ref. [@Buchert:2011sx] for a review), which schematically can be summarised as, $$\begin{aligned}
\bar G_{\mu\nu} ( \bar g_{\mu\nu}(\tau) ) =& \bar T_{\mu\nu} ( \bar \rho(\tau), \bar P(\tau), \ldots ), \\
\delta G_{\mu\nu} ( \delta g_{\mu\nu}(\tau, \vec x) ) =& \delta T_{\mu\nu} ( \delta \rho(\tau, \vec x), \delta P(\tau, \vec x), \ldots ),& \mbox{[ordinary perturbation theory]}\\
\left<G_{\mu\nu} ( g^{\rm fully\,nonlinear}_{\mu\nu}(\tau, \vec x) ) \right> \neq& \bar T_{\mu\nu} ( \bar \rho(\tau), \bar P(\tau), \ldots ) , &\mbox{[averaging vs. background]}\end{aligned}$$ where both the left and right-hand side of the last line are pure functions of $\tau$, but different pure functions of $\tau$.
In General Relativity, intuition suggests that wavelengths larger than the scales considered can be included as a locally constant spatial curvature term (the universe locally is open or closed). However, this only addresses the density contributions, while the Einstein equations contain higher order gradients in the potential as well. Under general modifications of gravity, Birkhoff’s theorem no longer holds, and the above intuition is flawed.
In summary, it is not obvious that two perturbation theories that are identical in their linear limits, are identical in their full nonlinear regimes.
In [@Adamek:2015gsa] it was found numerically that the mismatch between average and background quantities may be tiny, but a rigorous mathematical proof of the agreement between fully nonlinear Newtonian gravity and General Relativity in cosmic structure formation does not currently exist.
Scale dependent growth in the quasi-static approximation\[sec:qsa\]
===================================================================
![A $3\times 5$ patch of a $128^2$ slice of a realization of $128^3$ vertices (particles) of a Dark Matter density field in a box of comoving size 200 Mpc (1.56 Mpc inter-vertex distance), with Newtonian displacements evolving from redshift $z=100$ down to $z=10$, in a cosmology endowed with designer $f(R)$ gravity in a $w$CDM background ($w=-0.95$) with $B_0=0.9$, compared to a plain $\Lambda$CDM cosmology. The $f(R)$ trajectories can be distinguished from the $\Lambda$CDM counterparts by their curved shape, as emphasised in the inset. The straight dashed line connecting start and end points serves as a guide to the eye, to recognise the curved shape of the trajectory. This displacement field is linear, yet the particles do not follow straight lines. []{data-label="fig:curves"}](curves_zoom "fig:"){width="70.00000%"} ![A $3\times 5$ patch of a $128^2$ slice of a realization of $128^3$ vertices (particles) of a Dark Matter density field in a box of comoving size 200 Mpc (1.56 Mpc inter-vertex distance), with Newtonian displacements evolving from redshift $z=100$ down to $z=10$, in a cosmology endowed with designer $f(R)$ gravity in a $w$CDM background ($w=-0.95$) with $B_0=0.9$, compared to a plain $\Lambda$CDM cosmology. The $f(R)$ trajectories can be distinguished from the $\Lambda$CDM counterparts by their curved shape, as emphasised in the inset. The straight dashed line connecting start and end points serves as a guide to the eye, to recognise the curved shape of the trajectory. This displacement field is linear, yet the particles do not follow straight lines. []{data-label="fig:curves"}](scale "fig:"){height="39.37500%"}
(0.2,1) (0, 0) [ $z = 10$ ]{} (0, 0.49) [ $z = 55$ ]{} (0, 0.98) [ $z = 100$ ]{}
In this section, we show that for Dark Energy and Modified Gravity models (in short DE/MG models) the velocities of Dark Matter particles are scale dependent in linear perturbation theory, already in the quasi-static approximation. Scale dependence of the density transfer function of Dark Matter in Fourier space generally translates into a varying direction in real space. That is, the dust particles do not move on straight lines. Here we choose to apply the quasi-static approximation in order to make an intuitive transition from General Relativity to the more general Effective Field Theory of Modified Gravity. In the next section we abandon the quasi-static approximation[^12].
We fix the metric to the longitudinal gauge, $B=H_T = 0$, $A=\Psi$ and $H_L=-\Phi$, $$\begin{aligned}
ds^2=a^2(\tau)\left[-(1+2\Psi)d\tau^2+(1-2\Phi)d\vec x^2\right]\;.\end{aligned}$$ Via the quasi-static approximation, we can parametrise the Poisson and anisotropic stress equations into the following generic form [@Bean:2010zq]: $$\begin{aligned}
-\nabla^2_x\Phi(\tau,\vec x) &= 4\pi G a^2Q(\tau,\vec x)\bar\rho_i\Delta_i(\tau,\vec x)\;,\label{modfdPoisson}\\
-\nabla^2_x(\Psi-R\Phi)&=12\pi Ga^2\bar\rho_i(1+w_i)\sigma_i Q(\tau,\vec x)\label{gamma}\;,
\end{aligned}$$ where $Q(\tau,\vec x)$ and $R(\tau,\vec x)$ describe the variation of Newton’s constant in the environment and the anisotropic stress tensor induced by the modification of gravity, respectively. The quasi-static approximation was parametrised via the ($\mu,\gamma$) functions in [@Bertschinger:2008zb], albeit mixing the gravitational shear and the anisotropic stress from relativistic species[^13]. Several phenomenological parameterisations of these functions exist in the literature. We refer to Refs. [@Brax:2012gr; @Silvestri:2013ne; @Lombriser:2014ira] for details.
Consider a system with some parametrisation of modified gravity and a dust fluid (Cold Dark Matter). The continuity and momentum equations of the dust fluid in the quasi-static regime (neglecting the time derivatives of the gravitational potentials), can be condensed into, $$\begin{aligned}
\frac{d^2\Delta_c(\tau,\vec x)}{d\tau^2}+\mathcal{H}\frac{d\Delta_c(\tau,\vec x)}{d\tau}=-\nabla^2_x\Psi(\tau,\vec x)\;,\label{contineq}\end{aligned}$$ which can be compared to Eqs. (\[eq:energyConservation\], \[eq:velocityEquation\]) by setting $w=c_s^2=q=\Delta_P=B=\sigma=0$, $A=\Psi$ and $\dot \Phi = 0$, where the last equalities follow from the quasi-static approximation. Since, for simplicity, we only consider the dust and gravity sectors, the anisotropic stress term from relativistic components, such as massive neutrinos, [*etc.*]{} vanishes.
Combining equation (\[modfdPoisson\]) and (\[gamma\]), and keeping only terms at linear order, we get $$\begin{aligned}
\nabla_x^2\Psi\simeq -4\pi Ga^2Q(\tau,\vec x)R(\tau,\vec x)\bar\rho_c\Delta_c(\tau,\vec x)\;.\label{modfdPoisson2}\end{aligned}$$ Inserting equation (\[modfdPoisson2\]) into (\[contineq\]), we arrive at the equation governing the growth of the dust fluid perturbations [@Dossett:2014oia], $$\begin{aligned}
\frac{d^2\Delta_c(\tau,\vec x)}{d\tau^2}+\mathcal{H}\frac{d\Delta_c(\tau,\vec x)}{d\tau} - 4\pi G a^2Q(\tau,\vec x)R(\tau,\vec x)\bar\rho_c\Delta_c(\tau,\vec x) =0 \;.\label{mastereq}\end{aligned}$$ In General Relativity, $Q(\tau,\vec x)$ and $R(\tau,\vec x)$ are unity, hence the perturbations of the dust fluid grow in the same way on all the scales, [*i.e.*]{} the growth rate is only a function of time. However, this is no longer true in the case of modified gravity.
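To make the scale dependence concrete, equation (\[mastereq\]) can be integrated per Fourier mode in a matter-dominated toy background, where $\mathcal H = 2/\tau$ and $4\pi G a^2\bar\rho_c = 6/\tau^2$, treating the product $QR$ as a constant for each mode ($QR=1$ for GR; $QR=4/3$ deep below the Compton scale of an $f(R)$ model). This is an illustrative sketch under those assumptions, not a full modified-gravity solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_rhs(tau, y, QR):
    """Growth equation per Fourier mode, in matter domination:
    Delta'' + (2/tau) Delta' - (6/tau^2) * QR * Delta = 0."""
    delta, ddelta = y
    return [ddelta, -2.0 / tau * ddelta + 6.0 / tau**2 * QR * delta]

tau = np.linspace(1.0, 100.0, 400)
p = {}
for QR in (1.0, 4.0 / 3.0):          # GR vs enhanced gravity
    sol = solve_ivp(growth_rhs, (tau[0], tau[-1]), [1.0, 2.0],
                    args=(QR,), t_eval=tau, rtol=1e-9, atol=1e-12)
    D = sol.y[0]
    # effective exponent in D ~ tau^p over the integration range
    p[QR] = np.log(D[-1] / D[0]) / np.log(tau[-1] / tau[0])

# GR gives p = 2 exactly; QR = 4/3 gives p -> (-1 + sqrt(33))/2 ~ 2.37,
# so modes feeling enhanced gravity grow faster: scale-dependent growth
```

Modes on different scales see different effective values of $QR$, so the growth rate, and hence the transfer function, becomes $k$-dependent.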
The next step is to calculate the growth rate $\mathcal D$ of CDM density perturbations. In GR, at late times and on small scales, the growth rate depends only on time; in the modified gravity case, $\mathcal D(\tau,\vec x)$ depends on both time and space, [*i.e.*]{}, $$\begin{aligned}
\Delta_c(\tau,\vec x)=\mathcal D(\tau,\vec x)\Delta_c(\tau_i,\vec x)\;,\label{growthrate}\end{aligned}$$ where $\Delta_c(\tau_i,\vec x)$ denotes the CDM density perturbation at the initial time $\tau_i$. In order to set initial conditions for a Newtonian N-body simulation, one now uses $\Delta_c$ for the particle positions. Moreover, by ‘virtue’ of the quasi-static approximation, the time derivatives of the potentials are ignored, such that bare quantities are equal to the full physical quantities. In other words, both Newtonian and relativistic (longitudinal gauge) particle positions are obtained from, $$\begin{aligned}
\vec x=\vec y-\mathcal{D}(\tau,\vec y)\frac{1}{\nabla_y^2}\vec\nabla_y\Delta_c(\tau_i,\vec y)-\Delta_c(\tau_i,\vec y)\frac{1}{\nabla_y^2}\vec\nabla_y\mathcal D(\tau,\vec y)\;.\label{VHA}\end{aligned}$$ Equation (\[VHA\]) is the main result of this section. If we assume that $\mathcal D$ depends on time only, we recover the Zel’dovich Approximation, which tells us that in General Relativity, on linear scales, the dust particles propagate on straight lines, because $\vec v_c\propto\vec\nabla_x\nabla^{-2}_x\Delta_c(\vec x)$; in the modified gravity scenario, however, the dust particles are also deflected by the gravitational potential, due to the second term of equation (\[VHA\]).
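The deflection can be seen with a toy two-mode displacement field in which the growth factor carries an assumed scale dependence (the boost function `D` below and the mode amplitudes are illustrative choices, not a fit to any $f(R)$ model): when $\mathcal D$ depends on $k$, the direction of a particle's displacement rotates in time.

```python
import numpy as np

def D(tau, k, k_star=1.0):
    # toy scale-dependent growth: logarithmic boost for modes above k_star
    return tau**2 * (1.0 + 0.3 * k**2 / (k**2 + k_star**2) * np.log(tau))

# two displacement components along orthogonal directions:
# a long-wavelength mode (k = 0.2) and a short-wavelength mode (k = 5)
modes = [(0.2, np.array([1.0, 0.0])),
         (5.0, np.array([0.0, 1.0]))]

taus = np.linspace(1.0, 3.0, 50)
traj = np.array([sum(1e-3 * D(t, k) * e for k, e in modes) for t in taus])

# compare the direction of motion at early and late times
v_early = traj[1] - traj[0]
v_late = traj[-1] - traj[-2]
cos_angle = v_early @ v_late / (np.linalg.norm(v_early) * np.linalg.norm(v_late))
# cos_angle < 1: the path is curved (it would be exactly 1 for a
# separable, k-independent growth factor)
```

A separable growth factor would keep the two components in fixed proportion, giving a straight path; the $k$-dependent boost breaks that proportionality.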
Finally, in Figure \[fig:curves\] we show a discrete realisation of Dark Matter particle trajectories for a particular choice of EFT parameters (see the following section), without the quasi-static approximation. The particle trajectories are compared to their $\Lambda$CDM counterparts, and clearly show a curved shape. This figure shows how particle trajectories at the linear level are deflected by modifications of gravity.
Effective Field Theory parametrisation, and modified gravity as a particle displacement field\[sec:demgeftLPT\]
===============================================================================================================
Transfer functions
------------------
Having discussed the physical picture, let us now go through a more advanced parametrisation method for DE/MG models. In this section, we will focus on the effective field theory (EFT) treatment of cosmic acceleration. This approach was introduced into the study of DE/MG models by [@Gubitosi:2012hu; @Bloomfield:2012ff] to unify the most generic viable single scalar field models of DE/MG. Let us briefly summarise it here.
Compared with the covariant approach, the construction of the effective action of the scalar field starts with a particular choice of time coordinate, in which the scalar field perturbations vanish. In other words, we first break the four-dimensional covariant diffeomorphisms by choosing a particular clock. Then, we can build all the operators that are consistent with the unbroken symmetries of the theory, [*i.e.*]{} time-dependent spatial diffeomorphisms. Thanks to this symmetry guidance, we can determine the spatial structure of these operators. Hence, we are left with only the time-dependent prefactors of these operators, namely the EFT functions, which need to be parametrised. Another advantage of this procedure is that at linear order there exist only a few relevant operators, which cover most of the generic viable single scalar field DE/MG models. For these reasons, the EFT approach makes analysing DE/MG models with ongoing and upcoming cosmological surveys feasible.
The EFT approach relies on the assumption of the validity of the weak equivalence principle which ensures the existence of a metric universally coupled to matter fields and therefore of a well-defined Jordan frame. The EFT action in conformal time reads
$$\begin{aligned}
\label{full_action_Stuck}
S = \int d^4x& \sqrt{-g} \left \{ \frac{m_0^2}{2} \l[1+\Omega(\tau+\pi)\r]R+ \Lambda(\tau+\pi) - c(\tau+\pi)a^2\left[ \delta g^{00}-2\frac{\dot{\pi}}{a^2} +2\hub\pi\left(\delta g^{00}-\frac{1}{a^2}-2\frac{\dot{\pi}}{a^2}\right)\right.\right. \nonumber\\
&\left.\left. +2\dot{\pi}\delta g^{00} +2g^{0i}\partial_i\pi-\frac{\dot{\pi}^2}{a^2}+ g^{ij}\partial_i \pi \partial_j\pi -\l(2\hub^2+\dot{\hub}\r)\frac{\pi^2}{a^2} \right]+\cdots\right\} + S_{m} [g_{\mu \nu},\chi_i]\;,\end{aligned}$$
where $m_0^2$ is the Planck mass, and $S_m$ is the action for all matter fields, $\chi_i$. For simplicity, here we only list the three operators $\{\Omega$,$\Lambda$,$c\}$, which describe the background dynamics. For the complete description of the quadratic EFT action, we refer to [@Hu:2013twa; @Gubitosi:2012hu; @Bloomfield:2012ff]. In the following calculation, we follow the convention of [@Hu:2013twa].
![\[fig:GR\_MG\_transfer\] The fractional differences of the CDM density transfer function between GR and viable $f(R)$ gravity for several redshift snapshots. (Left panel) Designer $f(R)$ gravity in a $\Lambda$CDM background with $B_0=0.001$ and (right panel) designer $f(R)$ gravity in a $w$CDM background ($w=-0.95$) with $B_0=0.01$. The grey band denotes the $5\%$ regime.](frac3_transfer_gr_vs_for_LCDM "fig:"){width="45.00000%"} ![\[fig:GR\_MG\_transfer\] The fractional differences of the CDM density transfer function between GR and viable $f(R)$ gravity for several redshift snapshots. (Left panel) Designer $f(R)$ gravity in a $\Lambda$CDM background with $B_0=0.001$ and (right panel) designer $f(R)$ gravity in a $w$CDM background ($w=-0.95$) with $B_0=0.01$. The grey band denotes the $5\%$ regime.](frac3_transfer_gr_vs_for_wCDM "fig:"){width="45.00000%"}
![\[fig:transferq\] The $Q$-fluid energy density fluctuation transfer function in two viable $f(R)$ gravity models. (Left panel) Designer $f(R)$ gravity in a $\Lambda$CDM background with $B_0=0.001$ and (right panel) designer $f(R)$ gravity in a $w$CDM background ($w=-0.95$) with $B_0=0.01$.](for_LCDM_B0_0p001_transfer_q "fig:"){width="45.00000%"} ![\[fig:transferq\] The $Q$-fluid energy density fluctuation transfer function in two viable $f(R)$ gravity models. (Left panel) Designer $f(R)$ gravity in a $\Lambda$CDM background with $B_0=0.001$ and (right panel) designer $f(R)$ gravity in a $w$CDM background ($w=-0.95$) with $B_0=0.01$.](for_wCDM_B0_0p01_w_m0p95_transferq_q "fig:"){width="45.00000%"}
![\[fig:transfer\_dnq\] The $Q$-fluid bare number density fluctuation transfer function of the designer $f(R)$ gravity in $w$CDM background ($w=-0.95$) with $B_0=0.01$.](for_wCDM_B0_0p01_w_m0p95_transferq_dnq){width="85.00000%"}
The modified Einstein equation can be written as $$\begin{aligned}
\label{eq:fluid}
m_0^2(1+\Omega) G_{\mu\nu}[g_{\mu\nu}]=T^{(m)}_{\mu\nu}[\rho_m,\theta_m,\cdots]+ T^{(\pi)}_{\mu\nu}[\pi,\dot\pi,\cdots]\;.\end{aligned}$$ We can divide both sides of the above equation by a factor ($1+\Omega$) and define an effective energy momentum tensor ($T^{(Q)}_{\mu\nu}$), which is conserved by construction ($\nabla^{\nu}T^{(Q)}_{\mu\nu}=0$)[^14] $$\begin{aligned}
m_0^2 G_{\mu\nu}[g_{\mu\nu}]&=T^{(m)}_{\mu\nu}[\rho_m,\theta_m,\cdots]+ T^{(Q)}_{\mu\nu}[\rho_{\pi},\theta_{\pi},\rho_m,\cdots]\;\;, \label{eq:def_QT0}\\
T^{(Q)}_{\mu\nu}[\rho_{\pi},\theta_{\pi},\rho_m,\cdots]&\equiv \frac{1}{1+\Omega}\Big\{-\Omega T^{(m)}_{\mu\nu}[\rho_m,\theta_m,\cdots] +T^{(\pi)}_{\mu\nu}[\pi,\dot\pi,\cdots]\Big\}\;.\label{eq:def_QT}\end{aligned}$$ The reason why we introduce this effective energy momentum tensor ($T^{(Q)}_{\mu\nu}$) instead of directly solving the modified Einstein equation is that we want to apply the Lagrangian perturbation treatment to this effective DE/MG fluid, hereafter the $Q$-fluid. Our algorithm is mainly motivated by the following two considerations.
First of all, in an N-body simulation, a high resolution for the extra scalar field in the Eulerian representation is quite expensive, especially for collapsing DE/MG models. [This is because, in a DE/MG simulation, whether we model the extra scalar degree of freedom as particles or as a fluid depends on how the mean free path of the particles compares with the scales we are interested in. Take cold dark matter and massive neutrinos as examples: on scales much larger than their mean free path, we can adopt the fluid approximation (ideal fluid for CDM; imperfect fluid for massive neutrinos). However, once we are concerned with scales comparable to or even smaller than their mean free path, we have to use the particle description to study the non-linear dynamics. The same applies to a collapsing DE/MG model, which has a small sound speed (large mean free path). ]{} One solution is to discretise the fluid elements into virtual particles with a charge, [*e.g.*]{} mass, and let the grid follow these virtual particles [($Q$-particles)]{}, [*i.e.*]{} the Lagrangian perturbation approach. This gives the simulation a high resolution in high-density regions.
Second, the algorithms to add extra fluid components to N-body and hydrodynamical simulations are being extensively developed. We can utilise these techniques, such as loading pressure, [*etc.*]{}, to develop DE/MG simulations via this $Q$-fluid description. In this work, we focus on linear phenomena, such as linear structure growth and initial conditions for N-body simulations. In principle, this Lagrangian treatment can be extended to non-linear phenomena in the simulations. [Basically, the recipe is to generate the initial conditions for the $Q$-particles from a linear Boltzmann code, and then evolve them along their geodesics.]{}
Let us go back to the linear perturbation description. Within this approach the above equations (\[eq:def\_QT0\]) and (\[eq:def\_QT\]) can be split into two sets of equations, namely background and (linear) perturbation. On the background, for historical reasons, $\rho_Q$ and $P_Q$ are not conserved background quantities in the non-minimally coupled case, $$\begin{aligned}
\rho_Q &= 2c-\Lambda-\frac{3m_0^2\mathcal H\dot\Omega}{a^2}\;,\\
P_Q &=\Lambda+\frac{m_0^2}{a^2}(\ddot\Omega+\mathcal H\dot\Omega)\;.\end{aligned}$$ The conserved quantities are defined as $$\begin{aligned}
\rho_{\rm DE} &=-\frac{\Omega}{1+\Omega}\rho_m+\frac{\rho_Q}{1+\Omega}\;,\\
P_{\rm DE} &= -\frac{\Omega}{1+\Omega}P_m+\frac{P_Q}{1+\Omega}\;.\end{aligned}$$
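The regrouping that defines the conserved quantities is elementary algebra, which can be checked symbolically; the following is a sanity check with generic symbols, not a derivation of the EFT equations:

```python
import sympy as sp

Omega, rho_m, rho_Q = sp.symbols('Omega rho_m rho_Q')

# rho_DE as defined above from the non-conserved rho_Q
rho_DE = -Omega / (1 + Omega) * rho_m + rho_Q / (1 + Omega)

# dividing the Friedmann equation by (1 + Omega) regroups the sources:
# (rho_m + rho_Q) / (1 + Omega) = rho_m + rho_DE
assert sp.simplify((rho_m + rho_Q) / (1 + Omega) - (rho_m + rho_DE)) == 0
```

The same regrouping applied to $P_Q$ yields $P_{\rm DE}$, so the total source of the standard Einstein tensor is $\rho_m + \rho_{\rm DE}$ with $\rho_{\rm DE}$ obeying the usual conservation law.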
At the linear perturbation level, equations (\[eq:def\_QT0\]) and (\[eq:def\_QT\]) read $$\begin{aligned}
m_0^2\delta G_{\mu\nu}[\delta g_{\mu\nu}]&=\delta T^{(m)}_{\mu\nu}[\delta\rho_m,\theta_m,\cdots]+\delta T^{(Q)}_{\mu\nu}[\delta\rho_{\pi},\theta_{\pi},\delta\rho_m,\cdots]\;,\\
\delta T^{(Q)}_{\mu\nu}[\delta\rho_{\pi},\theta_{\pi},\delta\rho_m,\cdots]&=\frac{1}{1+\Omega}\Big\{-\Omega\delta T^{(m)}_{\mu\nu}[\delta\rho_m,\theta_m,\cdots]+\delta T^{(\pi)}_{\mu\nu}[\pi,\dot\pi,\cdots]\Big\}\;.\end{aligned}$$ Armed with these results, we can identify the fluid variables, such as the energy density, velocity, pressure, as well as the anisotropic stress tensor. Within the fully relativistic treatment, the definitions of these quantities are gauge dependent. In the synchronous gauge, the Einstein equation reads $$\begin{aligned}
-\frac{2m_0^2}{a^2}\Big(k^2\eta-\frac{1}{2}\mathcal H\dot h\Big)&=\delta\rho^{({\rm syn})}_m+\delta\rho^{({\rm syn})}_Q\;,\\
\frac{2m_0^2}{a^2}k^2\dot\eta&=(\rho_m+P_m)\theta_m^{({\rm syn})}+(\rho_{\rm DE}+P_{\rm DE})\theta_Q^{({\rm syn})}\;,\\
-\frac{m_0^2}{a^2}\Big(\ddot h+2\mathcal H\dot h-2k^2\eta\Big)&=3\delta P_m^{({\rm syn})}+3\delta P_Q^{({\rm syn})}\;,\\
-\frac{m_0^2}{3a^2}\Big[\ddot h+6\ddot\eta+2\mathcal H(\dot h+6\dot\eta)-2k^2\eta\Big]&=(\rho_m+P_m)\sigma_m^{({\rm syn})}+(\rho_{\rm DE}+P_{\rm DE})\sigma_{Q}^{({\rm syn})}\;,\end{aligned}$$ where the fluid variables are defined as $$\begin{aligned}
\delta\rho_{Q}^{({\rm syn})}&=\frac{1}{(1+\Omega)}\left\{-\Omega\delta\rho_m^{({\rm syn})}+\dot\rho_{Q}\pi+2c(\dot\pi^{({\rm syn})}+\mathcal H\pi^{({\rm syn})})\right.\nonumber\\
&\left.-\frac{2m_0^2}{a^2}\left[\frac{\dot\Omega}{4}\dot h+\frac{\dot\Omega}{2}\Big(3(3\mathcal H^2-\dot{\mathcal H})\pi^{({\rm syn})}+3\mathcal H\dot\pi^{({\rm syn})}+k^2\pi^{({\rm syn})}\Big)\right]\right\}\;,\label{eq:Qdensity}\\
(\rho_{\rm DE}+P_{\rm DE})\theta_Q^{({\rm syn})}&=\frac{1}{1+\Omega}\left[-\Omega(\rho_m+P_{m})\theta_m^{({\rm syn})}+(\rho_Q+P_Q)k^2\pi^{({\rm syn})}\right.\nonumber\\
&+\left.\frac{2m_0^2}{a^2}k^2\dot\Omega(\dot\pi^{({\rm syn})}+\mathcal H\pi^{({\rm syn})})\right]\;,\label{eq:thetaq}\\
\delta P_Q^{({\rm syn})}&=\frac{1}{1+\Omega}\left\{-\Omega\delta P_m^{({\rm syn})}+P_Q\dot\pi^{({\rm syn})}+(\rho_Q+P_Q)(\dot\pi^{({\rm syn})}+\mathcal H\pi^{({\rm syn})})\right.\nonumber\\
&+\left.\frac{m_0^2}{a^2}\left[\frac{1}{3}\dot\Omega\dot h+\dot\Omega\ddot\pi^{({\rm syn})}+(\ddot\Omega+3\mathcal H\dot\Omega)\dot\pi^{({\rm syn})}+\left(\mathcal H\ddot\Omega+5\mathcal H^2\dot\Omega+\dot{\mathcal H}\dot\Omega+\frac{2}{3}k^2\dot\Omega\right)\pi^{({\rm syn})}\right]\right\}\;,\\
(\rho_{\rm DE}+P_{\rm DE})\sigma_{Q}^{({\rm syn})}&=\frac{1}{1+\Omega}\left[-\Omega(\rho_m+P_m)\sigma_m^{({\rm syn})}+\frac{m_0^2}{3a^2}\dot\Omega\Big(\dot h+6\dot\eta+2k^2\pi^{({\rm syn})}\Big)\right]\;. \end{aligned}$$ Beware that in the $\Lambda$CDM background ($\rho_{\rm DE}+P_{\rm DE}=0$), the divergence of the velocity $\theta_Q^{({\rm syn})}$ and the anisotropic stress tensor $\sigma_{Q}^{({\rm syn})}$ of the $Q$-fluid are not well defined. The Einstein equations and the $Q$-fluid energy momentum tensor in the longitudinal gauge are listed in Appendix \[App1\].
In the rest of this section, we will evolve the full dynamics of the $Q$-fluid (or $\pi$ field) using [EFTCamb]{} [@Hu:2013twa; @Raveri:2014cka], calculate the relevant fluid quantities for the simulations, and produce a realisation of discrete initial conditions for a discrete simulation. In the following numerical calculations, for simplicity, we take $f(R)$ gravity as an example to show the non-trivial dynamics of the $Q$-fluid. In detail, we investigate the designer $f(R)$ model [@Song:2006ej; @Pogosian:2007sw] in $\Lambda$CDM and $w$CDM backgrounds ($w=P_{\rm DE}/\rho_{\rm DE}$), which [*exactly*]{} reproduces the background history we fix [*a priori*]{}. After fixing the background expansion history, the extra degree of freedom of this higher derivative gravity theory still allows for one extra parameter; here we choose it to be the present value of the Compton wavelength of the scalar field, namely $B_0\sim \frac{6 f_{RR}}{(1+f_{R})}H^2|_{a=1}$ (in Hubble parameter units) [@Hu:2007nk], where $f_R\equiv \partial_R f$. We take $B_0$ parameter values inside the [*viable*]{} regime after the Planck-2013 results [@Raveri:2014cka]: for the designer $f(R)$ model in the $\Lambda$CDM case we take $B_0=0.001$; in the $w$CDM case we take ($B_0=0.01$, $w=-0.95$).
In Figure \[fig:GR\_MG\_transfer\] we show the fractional differences between GR and the designer $f(R)$ models, for the cold dark matter density perturbations at 11 redshift snapshots. The left panel shows the $\Lambda$CDM-mimicking case with $B_0=0.001$ ($w=-1$) and the right panel the $w$CDM-mimicking case with ($B_0=0.01$, $w=-0.95$). The grey band in the figures is the $5\%$-deviation regime. We can see that for the $\Lambda$CDM case at redshift $z=50$ (pink curve), where N-body simulations generally start to run, the differences are deep inside the $5\%$ regime. However, for the $w$CDM case, on small scales ($k\gtrsim 1~[h/{\rm Mpc}]$) the differences lie outside the grey band.
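The comparison in the figure amounts to a simple operation on the transfer functions. A schematic sketch follows; the transfer-function shapes and the boost below are toy functions of our own choosing, not [EFTCamb]{} output:

```python
import numpy as np

def fractional_difference(T_mg, T_gr):
    """Fractional deviation of a modified-gravity transfer function
    from its GR counterpart, |T_MG / T_GR - 1|, on a common k-grid."""
    return np.abs(T_mg / T_gr - 1.0)

# toy transfer functions on a log-spaced k-grid [h/Mpc]; the MG "boost"
# grows toward small scales, mimicking the shape in the figure
k = np.logspace(-3, 1, 200)
T_gr = 1.0 / (1.0 + (k / 0.05)**2)            # schematic GR shape
boost = 1.0 + 0.08 * k**2 / (k**2 + 0.5**2)   # schematic f(R) enhancement
T_mg = T_gr * boost

frac = fractional_difference(T_mg, T_gr)
within_5pc = frac < 0.05   # the grey band of the figure
# the deviation grows monotonically toward small scales (large k)
```

On this toy example the deviation stays inside the $5\%$ band on large scales and exits it at large $k$, which is the qualitative behaviour seen in the $w$CDM panel.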
In Figure \[fig:transferq\] we show the transfer functions of the energy density of the $Q$-fluid. Our calculation demonstrates that for the $\Lambda$CDM case, $\delta\rho_Q$ is anti-correlated with $\delta\rho_{\rm cdm}$ on all scales, [*i.e.*]{} in CDM over-dense regions ($\delta\rho_{\rm cdm}>0$) the $Q$-fluid energy density perturbation remains under-dense ($\delta\rho_Q<0$). This reflects the fact that the CDM perturbation in $f(R)$ gravity is enhanced in the linear regime. To explain this, we take the Poisson equation in the Newtonian limit $$\begin{aligned}
-k^2\psi&=\frac{16\pi G}{3}\delta\rho_{\rm cdm}-\frac{\delta R}{6}\;.\label{eq:for}\end{aligned}$$ In the CDM over-dense region, $\delta\rho_{\rm cdm}>0$, the Newtonian potential $\psi<0$ and $\delta R>0$. In our $Q$-fluid language, equation (\[eq:for\]) reads $$\begin{aligned}
-k^2\psi&=\frac{16\pi G}{3}(\delta\rho_{\rm cdm}+\delta\rho_Q)\;.\label{eq:for2}\end{aligned}$$ Comparing these two equations, we get $\delta\rho_Q\propto -\delta R$, and this quantity stays negative. In order to generate the same depth of the gravitational potential well ($\psi$), we need extra CDM fluctuations to compensate the negative $\delta\rho_Q$. This indicates that the growth rate of CDM gets enhanced. From the modified gravity point of view on linear scales[^15], the effective gravitational constant $G_{\rm eff}$ is enhanced by a factor $4/3$ compared with that in GR. What we find is nothing new, just another explanation of the same phenomenon in terms of an exotic fluid component. From the right panel of Figure \[fig:transferq\] we can see that at high redshift the $Q$-fluid energy density perturbation changes sign (on large scales $\Delta_{\rho,Q}$ is positive, while on small scales $\Delta_{\rho,Q}$ is negative[^16]), but at low redshift it does not. This is due to the complicated competition between $\delta\rho_m$ and the $\pi$ field in the definition of $\delta\rho_Q$ in equation (\[eq:Qdensity\]).
In Figure \[fig:transfer\_dnq\] we show the transfer function of the bare number density perturbation (\[eq:dnbare\]) of the $Q$-fluid ($\Delta_{n,Q}^{\bare}$). As discussed in Section \[sec:bare\], this quantity defines the positions of vertices at rest with the flow of the $Q$-fluid in relativistic simulations. From equation (\[eq:baredot\]) we can see that $\Delta_{n,Q}^{\bare}$ is the integral of $\theta_Q$ over conformal time. Furthermore, from equation (\[eq:thetaq\]) we know that in the $\Lambda$CDM background $\theta_Q$ is not well defined, and neither is $\Delta_{n,Q}^{\bare}$. Given these considerations, in Figure \[fig:transfer\_dnq\] we only show $\Delta_{n,Q}^{\bare}$ in the $w$CDM background case. We can see that, unlike $\Delta_{\rho,{\rm cdm}}/k^2$, the quantity $\Delta_{n,Q}^{\bare}/k^2$ increases on small scales. This also reflects the fact that the modification of gravity in $f(R)$ models becomes more significant on smaller scales, though above the screening scale.
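Operationally, obtaining $\Delta_{n,Q}^{\bare}$ from a tabulated $\theta_Q(\tau)$ is a single quadrature of $\dot\Delta_n^{\bare}=-\theta$. A minimal sketch follows; the trapezoidal rule, the function name `bare_number_density`, and the toy linear $\theta_Q$ are our illustrative choices:

```python
import numpy as np

def bare_number_density(tau, theta_Q, Delta_ini=0.0):
    """Integrate d(Delta_n^bare)/dtau = -theta_Q along the conformal-time
    grid with the trapezoidal rule, starting from Delta_ini."""
    integ = np.concatenate(([0.0], np.cumsum(
        0.5 * (theta_Q[1:] + theta_Q[:-1]) * np.diff(tau))))
    return Delta_ini - integ

# toy check: theta_Q = 3 tau  =>  Delta_n^bare = -1.5 tau^2 (+ const)
tau = np.linspace(0.0, 2.0, 1001)
theta = 3.0 * tau
Delta = bare_number_density(tau, theta)
assert np.allclose(Delta, -1.5 * tau**2, atol=1e-9)
```

In practice `theta_Q` would be tabulated from the Boltzmann solver on its own time grid, per Fourier mode; the quadrature is exact here only because the toy integrand is linear.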
Initial conditions for discrete simulations
-------------------------------------------
[*Relativistic and Eulerian*]{}

![Slices of realizations of initial conditions at $z=50$, for use in relativistic (top row) and newtonian (bottom row) simulations, based on linear perturbation solutions from [EFTCamb]{}. The setup is identical to the $f(R)$-cosmology in Figure \[fig:curves\]. The space between vertices is $1.56$ Mpc. Sizes of the squares represent masses of the particles (densities at the vertices), while for the $Q$-fluid the colour additionally represents internal energy (pressure at the vertices): green indicates low pressure, which simply traces under-densities. We omit the combination ‘Relativistic and Lagrangian’, since [camb]{} works in synchronous gauge, the velocity of Dark Matter in that gauge is zero, and a Lagrangian representation in that gauge is meaningless. The $Q$-fluid can be treated in the Lagrangian representation in a relativistic simulation in coordinates synchronous and comoving with Dark Matter. A relativistic simulation of Dark Matter only makes sense in a gauge that does not produce caustics, such as the longitudinal gauge. The top row serves as a proof of concept. The displacement of Dark Matter is exaggerated by a factor of 20 in the bottom row, for contrast in the illustration. []{data-label="fig:examplesEL"}](Eulerian_DM "fig:"){width="30.00000%"} ![](Eulerian_Q "fig:"){width="30.00000%"} ![](Eulerian_Mix "fig:"){width="30.00000%"}

Dark Matter (Eulerian) $Q$-fluid (Eulerian) Combined

[*Newtonian and Lagrangian*]{}

![](Lagrangian_DM "fig:"){width="30.00000%"} ![](Lagrangian_Q "fig:"){width="30.00000%"} ![](Lagrangian_Mix "fig:"){width="30.00000%"}

Dark Matter (Lagrangian, exaggerated) $Q$-particles (Lagrangian) Combined
In Figure \[fig:examplesEL\] we show an application of all the findings in this paper. [ We discretise the results of the previous sections as described in Appendix \[sec:discrete\].]{} In the top row, we set up initial conditions for a relativistic simulation in synchronous comoving coordinates, as a proof of concept. Such a simulation would make little sense, as the motivation for a simulation with Dark Matter is to be able to trace the particles beyond shell crossing, which invalidates the synchronous gauge. The figures show a regular grid, which is comoving with the Dark Matter. Proper masses (not bare!) are indicated by the size of the squares. Moreover, for the $Q$-fluid, low pressure is indicated by the colour green, while high pressure is indicated in blue. For this fluid, pressure simply traces density.
In the bottom row of Figure \[fig:examplesEL\], we show the Newtonian approximation based on the same spectra computed in the synchronous comoving gauge, as explained in Section \[sec:nrlim\]. In this case, both fluids can be considered in the Lagrangian representation, that is, represented by vertices (particles) which are at rest with respect to each species’ flow. From this picture it becomes explicit that the phrase ‘at rest with the flow’ does not imply that mass- and internal-energy densities are constant in such a frame, if pressure or heat transfer is present. For Dark Matter, pressure is zero and hence these particles do have constant mass (equal-sized squares) in the Lagrangian representation[^17].
In the Eulerian view, it is clearly seen that the $Q$-fluid has density perturbations which are anti-correlated with the Dark Matter perturbations, since the two fluids are over-dense in complementary regions. This exemplifies the discussion around Eqs. (\[eq:for\], \[eq:for2\]). However, it is important to realise that this regular grid lives in the frame comoving with the Dark Matter, and hence is not a regular grid in newtonian coordinates; in the newtonian representation (bottom row), one can recognise that the low-mass and low-pressure vertices of the $Q$-fluid are in fact clustered, partially compensating the density perturbation that is visible in the top row.
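To make the two discretisations concrete, here is a minimal 1D sketch (toy field and illustrative values, not the FalconIC implementation): the same linear overdensity $\delta$ is represented either Eulerian-style, with fixed vertices carrying varying masses, or Lagrangian-style, with equal-mass particles given a Zel’dovich-like displacement $\psi$ obeying $\mathrm{d}\psi/\mathrm{d}x = -\delta$.

```python
import numpy as np

# 1D toy analogue of the two representations (illustrative values only).
N, L = 64, 100.0                            # vertices and box size, arbitrary units
x = np.arange(N) * L / N
delta = 0.05 * np.cos(2 * np.pi * x / L)    # toy linear overdensity

# Eulerian representation: vertices fixed, masses trace (1 + delta).
m_eulerian = (1.0 + delta) * (L / N)

# Lagrangian representation: equal masses, vertices displaced such that
# d(psi)/dx = -delta, i.e. psi_k = 1j * delta_k / k in Fourier space.
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)
delta_k = np.fft.rfft(delta)
psi_k = np.zeros_like(delta_k)
psi_k[1:] = 1j * delta_k[1:] / k[1:]        # skip the k = 0 mode
psi = np.fft.irfft(psi_k, n=N)
x_lagrangian = x + psi                      # displaced, equal-mass particles

# Both representations carry the same total mass.
assert np.isclose(m_eulerian.sum(), L)
```

Depositing the displaced equal-mass particles back onto a grid would recover $\delta$ to linear order, which is the sense in which the two pictures describe the same fluid.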
Conclusion
==========
We have provided the means to generate a distribution of particles with masses, positions and internal energies such that they describe exactly the underlying fluid theory, for both perfect and imperfect fluids, in arbitrary gauges. That is, simulations in the longitudinal, synchronous comoving, or so-called ‘N-body’ gauge [@Fidler:2015npa] find their initial conditions following our prescription. The newtonian limit is always found by taking the dressed densities for particle positions, while fully relativistic particle positions are defined by the bare number densities. The description holds for any metric theory of gravity, including the newtonian limit of General Relativity. It can be used in any gauge, for any description of a fluid, with velocities defined as comoving with any quantity of the fluid.
We applied our findings to a description of Modified Gravity, in which we describe the extra degree of freedom as an imperfect fluid $Q$, composed of particles with varying mass and internal energy. Trajectories of all species in the linear regime are curves rather than straight lines. Most importantly, we show that the perturbations in the new dynamical degrees of freedom cannot be ignored when setting initial conditions.
[ Such initial conditions can be applied in at least three ways: (1) when the modified gravity field stays linear and its fluid description remains valid, a Lagrangian representation of the linear field can be included in an N-body simulation during all times, providing high resolution where the simulation needs it, (2) when the nonlinear dynamics of the field are still described by a fluid, the field can be simulated by adapting existing hydrodynamics techniques, and (3) these initial conditions can be used for any particle species whose linear solution is effectively described by an imperfect fluid, such as neutrinos on scales larger than their free-streaming length.]{}
Our findings open the way to modelling non-Newtonian gravity in various ways, in arbitrary coordinate choices. The advantages of this possibility remain to be explored.
We release a C++ code for the generation of initial conditions, FalconIC, at [<http://falconb.org>]{}, with a minimalistic GUI that runs natively on Linux and OS X. The code links against any version of both Boltzmann codes [camb]{} and [class]{}, including [EFTCamb]{}, such that no separate run of those codes is necessary, and generates initial conditions at arbitrary scales, of arbitrary size (fully parallelised using MPI and OpenMP), for arbitrary cosmological parameters.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors wish to thank Ignacy Sawicki, Julien Bel, Carmelita Carbone, Julien Lesgourgues, Mark Lovell, Ewald Puchwein, Cornelius Rampf, Marco Raveri, Gerasimos Rigopoulos, Christoph Schmid and Alessandra Silvestri for fruitful discussions. WV is supported by a Veni research grant from the Netherlands Organization for Scientific Research (NWO). BH is supported by the Dutch Foundation for Fundamental Research on Matter (FOM).
Gauge transformation and $Q$-fluid variables in the longitudinal gauge\[App1\]
==============================================================================
In the longitudinal gauge, with the convention of Ma and Bertschinger [@Ma:1995ey], $$\begin{aligned}
ds^2&=a^2\Big[-(1+2\psi)d\tau^2+(1-2\phi)\delta_{ij}dx^idx^j\Big]\;,\\
ds^2&=a^2\Big[-(1+2A)d\tau^2+(1+2H_L)\delta_{ij}dx^idx^j\Big]\;,\end{aligned}$$ so we have $A=\psi\;,H_L=-\phi$. In the following, we will move to the ($\psi,\phi$) convention $$\begin{aligned}
\psi&=\frac{1}{2k^2}\Big[\ddot h+6\ddot\eta+\mathcal H(\dot h+6\dot\eta)\Big]\;,\\
\phi&=\eta-\frac{\mathcal H}{2k^2}(\dot h+6\dot\eta)\;,\\
\pi^{\longitudinal}&=\pi^{\synchronous}+\frac{1}{2k^2}(\dot h+6\dot\eta)\;.\end{aligned}$$ In the longitudinal gauge, the Einstein equations read $$\begin{aligned}
-\frac{2m_0^2}{a^2}\Big[k^2\phi+3\mathcal H(\dot\phi+\mathcal H\psi)\Big] &=\delta\rho_m^{\longitudinal}+\delta\rho_Q^{\longitudinal}\;,\\
\frac{2m_0^2}{a^2}k^2\Big(\dot\phi+\mathcal H\psi\Big) &=(\rho_m+P_m)\theta_m^{\longitudinal}+(\rho_{\rm DE}+P_{\rm DE})\theta_Q^{\longitudinal}\;,\\
\frac{2m_0^2}{a^2}\left[\ddot\phi+\mathcal H(\dot\psi+2\dot\phi)+(\mathcal H^2+2\dot{\mathcal H})\psi+\frac{k^2}{3}(\phi-\psi)\right] &= \delta P_m^{\longitudinal}+\delta P_Q^{\longitudinal}\;,\\
\frac{2m_0^2}{3a^2}k^2(\phi-\psi) &= (\rho_m+P_m)\sigma_{m}^{\longitudinal} + (\rho_{\rm DE}+P_{\rm DE})\sigma_{Q}^{\longitudinal}\;.\end{aligned}$$ The $Q$-fluid variables are defined as $$\begin{aligned}
\delta\rho_Q^{\longitudinal} &=\frac{1}{1+\Omega}\left\{-\Omega\delta\rho_m^{\longitudinal}+\dot\rho_Q\pi^{\longitudinal}+2c(\dot\pi^{\longitudinal}+\mathcal H\pi^{\longitudinal}-\psi)\right.\nonumber\\
&\left.-\frac{m_0^2}{a^2}\dot\Omega\left[3(2\mathcal H^2-\dot{\mathcal H})\pi^{\longitudinal}-3\mathcal H(\dot\pi^{\longitudinal}+\mathcal H\pi^{\longitudinal}-\psi)+k^2\pi^{\longitudinal}-3(\dot\phi+\mathcal H\psi)\right]\right\}\;,\\
(\rho_{\rm DE}+P_{\rm DE})\theta_Q^{\longitudinal} &=\frac{1}{1+\Omega}\left[-\Omega(\rho_m+P_m)\theta_m^{\longitudinal}+(\rho_Q+P_Q)k^2\pi^{\longitudinal}\right.\nonumber\\
&\left.+\frac{m_0^2}{a^2}\dot\Omega k^2(\dot\pi^{\longitudinal}+\mathcal H\pi^{\longitudinal}-\psi)\right]\;,\\
\delta P_Q^{\longitudinal} &=\frac{1}{1+\Omega}\left\{-\Omega\delta P_m^{\longitudinal} +\dot P_Q\pi^{\longitudinal}+(\rho_Q+P_Q)\Big(\dot\pi^{\longitudinal}+\mathcal H\pi^{\longitudinal}-\psi\Big)\right.\nonumber\\
&+\frac{m_0^2}{a^2}(\ddot\Omega-\mathcal H\dot\Omega)\Big(\dot\pi^{\longitudinal}+\mathcal H\pi^{\longitudinal}-\psi\Big)+\frac{m_0^2}{a^2}\dot\Omega\left[\ddot\pi^{\longitudinal}+\mathcal H\dot\pi^{\longitudinal}+\dot{\mathcal H}\pi^{\longitudinal}\right.\nonumber\\
&\left.\left.-\dot\psi-2\dot\phi+3\mathcal H\Big(\dot\pi^{\longitudinal}+\mathcal H\pi^{\longitudinal}\Big)-5\mathcal H\psi+3\mathcal H^2\pi^{\longitudinal}+\frac{2}{3}k^2\pi^{\longitudinal}\right]\right\}\;,\\
(\rho_{\rm DE}+P_{\rm DE})\sigma_Q^{\longitudinal}&=\frac{1}{1+\Omega}\left[-\Omega(\rho_m+P_m)\sigma_m^{\longitudinal}+\frac{2m_0^2}{3a^2}\dot\Omega k^2\pi^{\longitudinal}\right]\;.\end{aligned}$$
Discrete sampling of phase space \[sec:discrete\]
=================================================
For the sake of completeness, here we briefly summarise the currently known methods for generating a discrete sampling of the otherwise continuous phase space of a fluid. For in-depth discussions, we refer the reader to [@Bertschinger:2001ng; @Hahn:2011uy].
In this section we work [*top down*]{}, in that we start from a theoretical continuous field on an infinite space, and construct a finite-resolution finite-sized discrete representation that resembles the theoretical field as closely as possible.
Discreteness enters at two ends of the length scale: (1) moving from an infinite box to a finite box renders the eigenspace of the laplacian discrete (Fourier transforms become sums rather than integrals), which sets the lowest nonzero wave number under consideration; and (2) only finitely many numbers can be treated on a computer, which on the one hand limits the highest sampled wave number and on the other hand limits the number of simulated vertices (particles).
A statistically isotropic gaussian random field $\hat s(\vec x)=\int \frac{d^3k}{(2\pi)^3} e^{i\vec k \cdot \vec x} \hat s_{\vec k}$ in an infinite-sized space can be defined by its Fourier-space correlator, $$\begin{aligned}
\left<\hat s_{\vec k}\hat s^*_{\vec k'}\right> = (2\pi)^3\delta^3(\vec k - \vec k') P_{\hat s}(k),\end{aligned}$$ such that, $$\begin{aligned}
\left< \hat s(\vec x) \hat s^*(\vec x') \right>=&\int \frac{d^3k}{(2\pi)^3} e^{i\vec k\cdot \left( \vec x - \vec x' \right)} P_{\hat s}(k),\label{eq:gausscorrel}\\
\left< \left|\hat s(\vec x)\right|^2 \right> =&\int \frac{d^3k}{(2\pi)^3} P_{\hat s}(k),\end{aligned}$$ which is independent of position $\vec x$. Random variables are denoted by a hat, $\hat s$.
Finite length
-------------
The continuous field $\hat s(\vec x)$ in a finite-sized box with length $L$ and periodic boundary conditions (a 3-torus) is related to the continuous field in infinite space by $$\begin{aligned}
\hat s(\vec x)=&\int \frac{d^3k}{(2\pi)^3} e^{i\vec k \cdot \vec x} \hat s_{\vec k}\nonumber\\
=& \lim_{L\rightarrow\infty} \sum_{\vec n=-\infty}^{\infty} L^{-3} e^{i\frac{2 \pi \vec n}{L}\cdot \vec x} \hat s_{\frac{2 \pi \vec n}{L}},\label{eq:discretelims}\end{aligned}$$ where $\vec n$ is a vector of three integers. Inserting Eq. \[eq:discretelims\] into Eq. \[eq:gausscorrel\], one finds that, $$\begin{aligned}
\left< \hat s_{\frac{2 \pi \vec n}{L}} \hat s^*_{\frac{2 \pi \vec n'}{L}} \right> = L^{-3} \delta_{\vec n, \vec n'} P_{\hat s}\left(\left|\frac{2 \pi \vec n}{L}\right|\right),\label{eq:appTwoPointDiscrete}\end{aligned}$$ where $\delta_{\vec n, \vec n'}$ is the Kronecker delta.
Finite resolution
-----------------
Next, keeping the finite box size $L$ (not taking the infinite limit), the relation between the continuous and discrete functions in the box is set by the highest wave number $\pi M/L$, $$\begin{aligned}
\hat s(\vec x)
=& \lim_{M\rightarrow\infty} \sum_{\vec n=-M/2}^{M/2} L^{-3} e^{i\frac{2 \pi \vec n}{L}\cdot \vec x} \hat s_{\frac{2 \pi \vec n}{L}}.\label{eq:appFourierFiniteResolutionLimit}\end{aligned}$$
It is customary to simply take a finite $M$, which is equivalent to multiplying the Fourier-space $\hat s_{\frac{2 \pi \vec n}{L}}$ by a tophat filter. Different filters may be chosen. The real-space representation of the tophat filter closely resembles a series of unit-valued peaks, equally spaced at a separation $L/M$, such that a finite $M$ leads to a real-space $s(\vec x)$ which is convolved with a series of delta functions, [*i.e.*]{} a discrete sampling. Note, however, that the relation between the tophat filter and a discrete real-space sampling is not exact, though it is a good approximation. Formally, a discrete periodic sampling of a function in real space corresponds exactly to a discrete periodic sampling in Fourier space only when both samples are infinite ($M\rightarrow\infty$, see [@bracewell2000fourier]).
When one wants to generate a coarse-grained realisation that represents an averaging of the continuous field, the filter applied inside the sum of Eq. \[eq:appFourierFiniteResolutionLimit\] should be the Fourier transform of the filter used in the real-space averaging, such as a gaussian filter, cloud-in-cell, triangular-shaped-cloud, and so on [@Cui:2008fi].
In summary, the continuous gaussian random field $\hat s(\vec x)$ on an infinite space can be discretely sampled on a grid with length $L$ and resolution $M$ by generating gaussian random numbers that obey $$\begin{aligned}
\left<\hat s(\vec n L/M) \hat s(\vec n' L/M)\right> = \sum_{\vec m = -M/2}^{M/2} L^{-3} e^{i{\frac{2\pi}{M}\vec m \cdot(\vec n - \vec n')}} P_{\hat s}\left(\left|\frac{2 \pi \vec m}{L}\right|\right),\end{aligned}$$ with $\vec n$ a vector of three integers in the range $\left[0,M\right)$ and $\vec m$ a vector with integers in the range $\left[-M/2,M/2\right)$.
Generation in Fourier space
---------------------------
From Eq. \[eq:appTwoPointDiscrete\], we find that both the real and imaginary parts of $ \hat s_{\frac{2 \pi \vec n}{L}} $ are independent gaussian random variables (see [*e.g.*]{} [@Sirko:2005uz]) with standard deviation $\sigma_{\hat s} = \sqrt{\tfrac{1}{2}L^{-3} P_{\hat s}\left(\left|\frac{2 \pi \vec n}{L}\right|\right) }$, such that, $$\begin{aligned}
\left< \left|\hat s_{\frac{2 \pi \vec n}{L}} \right|^2 \right> =& \left< {\rm Re\,} \hat s_{\frac{2 \pi \vec n}{L}}^2 \right> + \left< {\rm Im\,} \hat s_{\frac{2 \pi \vec n}{L}}^2 \right> \nonumber\\
=& L^{-3} P_{\hat s}\left(\left|\frac{2 \pi \vec n}{L}\right|\right).\end{aligned}$$ Imposing Hermitian symmetry $\hat s^*_{\frac{2 \pi \vec n}{L}} = \hat s_{-\frac{2 \pi \vec n}{L}}$ guarantees that $\hat s(\vec x)$ is real valued. Finally, one simply has to draw normal gaussian random complex values $\hat \xi$ for each discrete wave number, and multiply by the respective amplitude to realise one random field, $$\begin{aligned}
\hat s(\vec x_i)
=& \sum_{\vec n=-M/2}^{M/2} e^{i\frac{2 \pi \vec n}{L}\cdot \vec x_i} \left[{\tfrac{1}{2}L^{-3} P_{\hat s}\left(\left|\frac{2 \pi \vec n}{L}\right|\right) }\right]^{\frac{1}{2}} \hat\xi\left(\frac{2 \pi \vec n}{L}\right),\label{eq:appFourierRealisation}\end{aligned}$$ where $\vec x_i$ represents a position on the discrete grid, with three discrete components in the range $\left[0,M\right)$.
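As a minimal numerical sketch of this recipe (toy white spectrum, numpy FFT conventions): the FFT of a real white-noise field supplies unit-variance complex normals that satisfy the Hermitian symmetry by construction, so a single scaling by the amplitude realises the field.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 32, 1.0                              # grid resolution and box size (toy)
k1d = 2 * np.pi * np.fft.fftfreq(M, d=L / M)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
P = np.where(kmag > 0, 1.0, 0.0)            # toy white spectrum, zero mean mode

# FFT of real white noise: unit-variance complex normals with s*_k = s_{-k}.
xi = np.fft.fftn(rng.normal(size=(M, M, M))) / M**1.5
s_k = np.sqrt(P / L**3) * xi                # per-mode variance L^-3 P(k)
s_x = np.fft.ifftn(s_k) * M**3              # the mode sum, undoing numpy's 1/M^3

# Hermitian symmetry guarantees a real-valued field ...
assert np.max(np.abs(s_x.imag)) < 1e-8 * np.max(np.abs(s_x.real))
# ... and the grid variance matches the mode sum of L^-3 P(k).
assert np.isclose(s_x.real.var(), P.sum() / L**3, rtol=0.1)
```

For a physical spectrum one would replace the toy `P` with the tabulated power spectrum from a Boltzmann code; the bookkeeping is unchanged.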
Generation in real space
------------------------
Following [@Bertschinger:2001ng], the separation of amplitude and normal random variables in Eq. \[eq:appFourierRealisation\] implies that $\hat s(\vec x_i)$ can be written as a convolution of the inverse Fourier transforms of the individual ingredients, the amplitude and the normal random field. A normal random field transforms into a normal random field. Hence, one could just as well generate the normal random field in real space, and perform the convolution with the real-space transform, $$\begin{aligned}
T(\vec x_i) =& \sum_{\vec n=-M/2}^{M/2} e^{i\frac{2 \pi \vec n}{L}\cdot \vec x_i} \left[{\tfrac{1}{2}L^{-3} P_{\hat s}\left(\left|\frac{2 \pi \vec n}{L}\right|\right) }\right]^{\frac{1}{2}}.\end{aligned}$$
Generating just one grid, this method is equivalent to the Fourier-space method. However, having access to the random numbers in real space opens the way to generating higher-resolution initial conditions in only part of the volume, by creating a subgrid in a subregion, which is again convolved with the real-space transfer function $T(\vec x_i)$. These are so-called [*zoom initial conditions*]{}. The description here is by no means sufficient to generate accurate zoom initial conditions, but is intended as a rough sketch. Refs. [@Bertschinger:2001ng; @Hahn:2011uy] discuss the matter in great detail, solving issues such as mass conservation and keeping the zoom region ‘invisible’ to the coarsely sampled outer regions when it comes to the gravitational Poisson equation.
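The equivalence of the two routes is easy to check in a 1D sketch (toy amplitude, not a cosmological spectrum): scaling the Fourier modes of white noise gives the same realisation as circularly convolving the real-space noise with $T$.

```python
import numpy as np

rng = np.random.default_rng(2)
M, L = 64, 1.0
w = rng.normal(size=M)                     # normal random field in real space
k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)
amp = np.zeros(M)
amp[k != 0] = np.abs(k[k != 0])**-1.0      # toy isotropic amplitude, zero mean mode

# Route 1: scale the Fourier modes of the noise.
s_fourier = np.fft.ifft(np.fft.fft(w) * amp).real

# Route 2: circular convolution with T, the real-space transform of the amplitude.
T = np.fft.ifft(amp).real                  # real, since amp(|k|) is symmetric
s_real = np.array([(T[(i - np.arange(M)) % M] * w).sum() for i in range(M)])

# Both routes produce the same realisation (convolution theorem).
assert np.allclose(s_fourier, s_real)
```

In a zoom setup, route 2 is what allows a finer white-noise subgrid to be laid down in a subregion and convolved with the same $T$.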
Pre-initial conditions, or particles
------------------------------------
The choice of the largest wave number $k=\pi M/L$ is unrelated to the choice of the number of particles, although it is tempting (and simplest) to set them equal, such that the vertices on the grid for the Fourier transform are directly interpreted as particles. This simplest choice is what we used in the illustrations in the body of this paper. An obvious alternative is to choose fewer particles, still on the vertices, simply skipping a fraction of the generated vertices.
The most popular choice is to put particles on [*glass*]{} pre-initial conditions. Particles are distributed randomly and evolved under a repulsive force with friction to aid convergence, until they reach an equilibrium. Such a configuration has no preferred directions, as opposed to a grid. The downside is that the particles are not at the vertices of the Fourier grid, such that the values of the field at the particle positions must be obtained by interpolation, and a mathematically rigorous construction of the field values at particle positions is lost. The interpolation between the grid vertices boils down to turning the discrete field back into a continuous one, by convolving the discrete field with a mass window function, the shape of which needs to be taken into account as it affects the power spectrum just like a filter. Other tilings, apart from glass and grid, exist as well [@Hansen:2006hj].
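A rough 2D sketch of glass generation as just described (toy particle number, step size and damping chosen for illustration; a production code would need proper force softening and convergence criteria): particles repel via a $1/r^2$ force with velocity damping in a periodic box, so the closest pairs are driven apart.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt, damp = 16, 2000, 2e-4, 0.9
pos = rng.random((N, 2))                   # random start in a unit periodic box
vel = np.zeros((N, 2))

def pair_vectors(p):
    d = p[:, None, :] - p[None, :, :]
    return d - np.round(d)                 # minimum-image convention

def min_separation(p):
    r = np.linalg.norm(pair_vectors(p), axis=-1)
    return r[~np.eye(N, dtype=bool)].min()

before = min_separation(pos)
for _ in range(steps):
    d = pair_vectors(pos)
    r = np.linalg.norm(d, axis=-1) + np.eye(N)   # dummy 1 on the diagonal
    force = (d / r[..., None]**3).sum(axis=1)    # repulsive 1/r^2 pair force
    vel = damp * (vel + dt * force)              # friction aids convergence
    pos = (pos + dt * vel) % 1.0

assert min_separation(pos) > before        # closest pair pushed apart
```

The damping plays the role of the friction mentioned above: without it, the system would oscillate instead of settling toward an equilibrium tiling.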
As pointed out in, for example, [@Baertschiger:2001eu], both grids and glasses are gravitationally unstable against small perturbations. This implies that part of the finest structure found in a simulation is due to the choice of pre-initial conditions, and not cosmological in origin.
Summary
-------
The parameters that a simulator has to choose in the initial conditions, include at least:
- the box size $L$,
- the highest wave number $\pi M/L$,
- additional filters on the initial spectrum (apart from the tophat at $\pi M/L$),
- the number of vertices / particles,
- the position of the vertices (on the Fourier grid, glass, …).
Each of these has its own consequences for the resolution of the simulation and the meaning of its outcome. What these consequences are is a whole field of research [@Pen:1997up; @Baertschiger:2001eu; @Joyce:2004qz; @Sirko:2005uz; @Prunet:2008fv; @Joyce:2008kg; @Carron:2014wea; @Colombi:2014zga], and is beyond the scope of this paper.
[^1]: [<http://falconb.org>]{}
[^2]: One can add vorticity by means of several scalar fields [@Schutz:1970my].
[^3]: A similar conclusion is drawn in Ref. [@Joyce:2004qz], based on the continuity equation such that velocities [*are*]{} in agreement with a physical theory.
[^4]: In cosmology, the labels $\vec x'$ are often referred to as the initial positions of particles at the past singularity. This is not a consequence of Lagrangian perturbation theory, but rather of the fact that in cosmology usually only growing modes are considered, such that $D(x) \rightarrow 0$ at past infinity. Fundamentally, it is not necessary for $\vec x'$ and $\vec x$ to coincide at any time, especially since in the context here there is no reference to time at all.
[^5]: The symbol $\perp$ can be pronounced ‘perp’, for ‘perpendicular’.
[^6]: Compare with the convention in [@Ma:1995ey], in the longitudinal gauge we have ($A=\psi,H_L=-\phi$) and in the synchronous gauge ($6H_L=h,H_T=6\eta+h$).
[^7]: See Ref. [@Hu:2004xd] for the special case of the perfect fluid, [*i.e.*]{} $q=\sigma=0$.
[^8]: As noted in [@Adamek:2013wja], for simulations in the longitudinal gauge, $n^{\bare}_{\rm (long)}$ corresponds to $n_{\rm (comov)}\equiv n_{\rm (comov)}^\bare / \sqrt{g_{\rm (comov)}^{(3)}} $ of the comoving gauge, but this coincidence does not occur for other gauges in which the transverse traceless perturbation of the metric does not vanish.
[^9]: Normally called comoving coordinates, we exceptionally say co-expanding in order to avoid confusion with a coordinate gauge for relativistic perturbations, such as the comoving gauge.
[^10]: In the Newtonian picture, cosmic expansion is a recession velocity $v_{\rm recession} = \mathcal{H}\, d$, for arbitrary distance $d$. Identifying $k$ with a wavelength $\lambda = 2\pi/k$, we have $v_{\rm recession}=1$ for $k=2\pi\mathcal{H}$, hence for simulations that include scales close to the Hubble radius. Thus, the Newtonian limit is valid for simulations of boxes smaller than $2\pi/(3(1+w)\mathcal{H})$ [@Rigopoulos:2013nda].
[^11]: During the final stages of preparation of this manuscript, the same was pointed out in Ref. [@Fidler:2015npa], where it is shown that for Cold Dark Matter, the gravitational potential follows the Newtonian Poisson equation, such that the [*linear*]{} dynamics look completely Newtonian at relativistic scales.
[^12]: In our numerical calculation, we do not assume the quasi-static approximation, and evolve the full dynamical equation of motion of the extra scalar field.
[^13]: In the CDM-only case with modified gravity, the functions $(\mu,\gamma)$ are related to $(Q,R)$ by: $Q=\mu\gamma\;,\;\; R=\gamma^{-1}\;.$
[^14]: There exists another equivalent definition by moving the non-minimal coupling term from the left-hand side to the right, [*i.e.*]{} $T^{(Q)}_{\mu\nu}[\rho_{\pi},\theta_{\pi},g_{\mu\nu},\cdots]\equiv T^{(\pi)}_{\mu\nu}[\pi,\dot\pi,\cdots] - m_0^2\Omega G_{\mu\nu}[g_{\mu\nu}]$. The difference between this definition and the one in equation (\[eq:def\_QT\]) is that in the former, the extra argument of $T^{(Q)}_{\mu\nu}$ is a metric field, while in the latter it is a matter field.
[^15]: Here, by the “linear scale” we mean scales above the screening scale of the chameleon mechanism [@Brax:2004qh].
[^16]: Our explanation above remains valid in the $w$CDM case on small scales, where the collapse of Dark Matter occurs.
[^17]: If there is a direct coupling between Dark Matter and the extra scalar field, the Dark Matter might not be conserved, due to the non-trivial coupling [@Amendola:1999er; @Matarrese:2003tn; @Maccio:2003yk; @Perrotta:2003rh].
---
abstract: 'Observables have a dual nature in both classical and quantum kinematics: they are at the same time *quantities*, allowing one to separate states by means of their numerical values, and *generators of transformations*, establishing relations between different states. In this work, we show how this two-fold role of observables constitutes a key feature in the conceptual analysis of classical and quantum kinematics, shedding new light on the distinguishing feature of the quantum at the kinematical level. We first take a look at the algebraic description of both classical and quantum observables in terms of Jordan-Lie algebras and show how the two algebraic structures are the precise mathematical manifestation of the two-fold role of observables. Then, we turn to the geometric reformulation of quantum kinematics in terms of Kähler manifolds. A key achievement of this reformulation is to show that the two-fold role of observables is the constitutive ingredient defining what an observable is. Moreover, it points to the fact that, from the restricted point of view of the transformational role of observables, classical and quantum kinematics behave in exactly the same way. Finally, we present Landsman’s general framework of Poisson spaces with transition probability, which highlights with unmatched clarity that the crucial difference between the two kinematics lies in the way the two roles of observables are related to each other.'
author:
- 'Federico Zalamea[^1]'
title: |
The Two-fold Role of Observables\
in Classical and Quantum Kinematics
---
Introduction {#intro}
============
In contemporary usage, the ‘kinematical description’ of a physical system has come to signify a characterization of all the *states* accessible to the system and all the *observables* which can be measured. These are the two fundamental notions of kinematics, and each is associated with a different area of mathematics: the set of all states is generally conceived as a *space* and is hence described by means of *geometric* structures; the set of all observables, on the other hand, is generally conceived as an *algebra* and is accordingly described by means of algebraic structures.
Of course, the notions of ‘state’ and ‘observable’ are closely related—much in the same way that, in mathematics, geometric and algebraic methods are. One first obvious such relation is the existence of a ‘numerical pairing’ between states and observables. If we respectively denote by ${\mathcal{S}}$ and ${\mathcal{A}}$ the space of states and algebra of observables of a certain physical system, then the numerical pairing is a map: $$\begin{aligned}
\langle \cdot, \cdot \rangle: & {\mathcal{S}}\times {\mathcal{A}}&\longrightarrow {\mathbb{R}}\\
& (\rho, F) &\longmapsto \langle \rho, F \rangle.\end{aligned}$$ In the geometric formulation of classical kinematics, where the notion of state is primitive—it is the starting point from which the other notions are built—, this numerical pairing is seen as the *definition* of an observable and is rather denoted by $F(\rho)$: observables are indeed defined as smooth real-valued functions over the space of states[@abraham1978; @arnold1989]. On the contrary, in the algebraic formulation of quantum kinematics, the primitive notion is that of an observable and the numerical pairing is used instead to define states: the latter are considered to be linear (positive) functionals over the algebra of observables [@strocchi2008]. Accordingly, in the algebraic setting, the numerical pairing is denoted by $\rho(F)$. Formally, the transformation which allows one to switch between these two points of view on the numerical pairing $\langle \rho, F\rangle$ (the geometric, where $\langle \rho, F\rangle=:F(\rho)$, and the algebraic, where $\langle \rho, F\rangle =:\rho(F)$) is called the *Gelfand transform*[@gelfand1943; @landsman1998].
As is well-known, one crucial difference between classical and quantum kinematics is the interpretation of the number $\langle \rho, F \rangle$. Whereas in the former the numerical pairing is interpreted as yielding the *definite value* of the physical *quantity* $F$ when the system is in the state $\rho$, in the latter the numerical pairing can only be interpreted *statistically*—as the expectation value for the result of measuring the observable $F$ when the system is prepared in the state $\rho$. Because of this feature, the conception of observables as quantities having well-defined values at all times cannot be straightforwardly applied to the standard formulation of quantum kinematics. This difficulty has surely been one of the main sources of dissatisfaction with the quantum theory. In fact, one could argue that all hidden variable theories are (at least partially) motivated by the desire to reconcile quantum kinematics with such a conception. But, as the various results from von Neumann, Gleason, Bell, Kochen and Specker have shown, the clash between the standard quantum formalism and the interpretation of observables as quantities is irremediable[@neumann1955; @gleason1957; @bell1964; @kochen1967].
Yet, although the numerical pairing and the associated conception of observables-as-quantities has dominated much of the attention, both classical and quantum observables play another important role in relation to states: they generate transformations on the space of states. The progressive disclosure of the intimate link between observables and transformations is, in my opinion, one of the most important conceptual insights that the 20th century brought to the foundations of kinematics. A much celebrated result pointing in this direction is of course Noether’s first theorem, which relates the existence of symmetries to the existence of conserved quantities[@kosmann2011]. For some particular observables, this relation is now included in the folklore of theoretical physics—for instance, by *defining* linear momentum, angular momentum and the Hamiltonian as the generators of space translations, space rotations and time evolution respectively[@townsend2000]. This notwithstanding, the idea that a systematic relation between observables and transformations may constitute a key feature in the conceptual analysis of classical and quantum kinematics has remained somewhat dormant, despite some attempts to draw more attention to it[@alfsen2001; @catren2008; @guillemin2006; @grgin1974; @landsman1994].
The goal of this paper is to insist on the usefulness of investigating the conceptual structure of both classical and quantum kinematics through the looking glass of the two-fold role of observables. Rather than considering “states” and “observables” as the two fundamental notions, we will henceforth distinguish observables-as-quantities and observables-as-transformations and consider what we call the “fundamental conceptual triad of Kinematics" (). Through their numerical role, observables allow one to distinguish, to *separate*, different points of the space of states; on the other hand, when viewed as the generators of transformations on the space of states, they instead allow one to *relate* different states. Understanding precisely in which manner these two different roles are articulated to give a consistent account of the notion of “observable” will be the key question of our analysis. We will explain in detail how the two-fold role of observables is manifest in the mathematical structures used to describe the space of states and the algebra of observables of classical and quantum systems, and we will use this common feature to shed new light on the fundamental traits distinguishing the Quantum from the Classical. As will be shown, quantum kinematics can be characterized by a certain compatibility condition between the numerical and transformational roles of observables.
$$\begin{tikzcd}[column sep=large, row sep=small]
&& \parbox{8em}{\centering \normalsize Numerical role\\ of observables}
\ar[leftrightarrow, dd, "\Huge{?}", line width = 0.2mm] \\
\parbox{8em}{\centering \normalsize States}
\ar[leftarrow, rru, bend left=20, "separate", line width = 0.15mm]\\
&& \parbox{9.45em}{\centering \normalsize Transformational role of observables}
\ar[llu, bend left = 20, "relate", line width = 0.15mm]
\end{tikzcd}$$
But before entering into the details, let us first briefly sketch the content of the paper. In we will review the standard formulations of classical and quantum kinematics, where the first is cast in the language of symplectic geometry and the second in the language of Hilbert spaces. In both cases, the algebra of observables has the structure of a Jordan-Lie algebra. This is a real algebra equipped with two structures, a commutative Jordan product and an anti-commutative Lie product, which respectively govern the numerical and transformational roles of observables. From this point of view, the only difference between the classical and quantum algebras of observables lies in the associativity or non-associativity of the Jordan product, but it is a priori unclear what this means.
In , we move on to discuss the geometric formulation of quantum kinematics, which stresses the role of the geometric structures inherent to any *projective* Hilbert space. The Jordan-Lie structures of the algebra of observables are mirrored by two geometric structures on the quantum space of states: a symplectic and a Riemannian structure. A key achievement of this reformulation is to show that the two-fold role is the constitutive ingredient defining what an observable is. Moreover, it points to the fact that, from the restricted point of view of the transformational role of observables, classical and quantum kinematics behave in exactly the same way.
However, a satisfactory comparison of the classical and quantum Jordan structures remains somewhat elusive at this stage, mainly because the language of Kähler manifolds fails to provide a unifying language for describing both kinematics. Thus, in we finally turn to Landsman’s proof that any state space can be described as a Poisson space with transition probability. As we will show, this framework highlights with unmatched clarity that the crucial difference between classical and quantum kinematics lies in the way the two roles of observables are related to each other.
Standard formulation of kinematics {#standard}
==================================
Standard classical kinematics {#standardC}
-----------------------------
[*Symplectic geometry has become the framework *per se* of mechanics, up to the point one may claim today that these two theories are the same. Symplectic geometry is not the language of mechanics, it is its essence and matter.*[@iglesias2014]]{}
[P. Iglesias-Zemmour]{}
Classical Hamiltonian mechanics is cast in the language of symplectic geometry[@souriau1970; @chernoff1974; @abraham1978; @arnold1989; @puta1993; @marsden1999]. In this formulation, the starting point is the classical space of states of the system ${\mathcal{S}}^C$, which is identified with a finite-dimensional symplectic manifold.
\[def:symp\] A finite-dimensional *symplectic manifold* is a differentiable manifold $S$ equipped with one additional structure: a two-form $\omega \in \Omega^2(S)$, called the *symplectic form*, which is closed and non-degenerate. This means:
1. $\omega$ is an anti-symmetric section of $T^*S \otimes T^*S$,
2. $d\omega = 0$,
3. $\omega$, seen as a map from $TS$ to $T^*S$, is an isomorphism.
The group $T^C$ of classical *global state transformations* is the group of symplectomorphisms $\text{Aut}(S) = Symp (S)$[^2]. It is the subgroup of diffeomorphisms $\phi : S \longrightarrow S$ leaving invariant the symplectic 2-form: $\phi^*\omega=\omega$, where $\phi^*\omega$ is the pull-back of the symplectic form[^3].
The Lie algebra ${\mathfrak{t}}^C$ of classical *infinitesimal state transformations* is the Lie algebra associated to the group of global transformations. It is the Lie algebra $\Gamma(TS)_\omega$ of vector fields leaving invariant the symplectic 2-form: $\Gamma(TS)_\omega = \{ v \in \Gamma(TS) \:|\: {\mathcal{L}}_v \omega = 0\}$ where ${\mathcal{L}}$ denotes the Lie derivative[^4].
Finally, classical observables are defined as smooth real-valued functions over the space of states. The algebra of observables ${\mathcal{C}^\infty}(S, {\mathbb{R}})$ has the structure of a Poisson algebra:
\[def:Poiss\] A *Poisson algebra* is a real (usually infinite-dimensional) vector space ${\mathcal{A}}^C$ equipped with two additional structures: a Jordan product $\bullet$ and a Lie product $\star$ such that:
1. $\bullet$ is a bilinear symmetric product, \[Psym\]
2. $\star$ is a bilinear anti-symmetric product,\[Pasym\]
3. $\star$ satisfies the Jacobi identity: $f\star (g\star h) + g\star (h\star f) + h\star (f\star g) = 0$,\[PJac\]
4. $\star$ satisfies the Leibniz rule with respect to $\bullet$: $f\star (g \bullet h) = (f \star g)\bullet h + g \bullet (f \star h)$,\[PLeib\]
5. $\bullet$ is associative. \[Passoc\]
The Lie product of a Poisson algebra is very often called the *Poisson bracket* and denoted by $\{\cdot, \cdot\}$.
In this case, the commutative and associative Jordan product $\bullet$ is simply the point-wise multiplication of functions. The Lie product, on the other hand, is defined in terms of the symplectic structure by: $$\label{def:pb}
\forall f,g \in {\mathcal{C}^\infty}(S, {\mathbb{R}}), \: \{f,g\}=f \star g:=\omega(df^\sharp, dg^\sharp),$$ where $df^\sharp:=\flat^{-1}(df)$ and $\flat$ denotes the so-called musical vector bundle isomorphism defined by $$\begin{aligned}
\flat : T_pS \xrightarrow{\:\: \sim \:\:}& \:T_p^*S\\
v \longmapsto &\: \omega_p(v, \cdot).\end{aligned}$$ The fact that $\omega$ is a 2-form implies the anti-commutativity of the product thus defined, whereas the Jacobi identity follows from the closedness of the symplectic form.
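To make the definition concrete, consider the canonical case $S=\mathbb{R}^2$ with coordinates $(q,p)$ and $\omega = dq \wedge dp$, for which the bracket reduces to the familiar $\{f,g\} = \partial_q f\, \partial_p g - \partial_p f\, \partial_q g$ (with the sign convention $\{q,p\}=1$). As a purely illustrative sketch (not part of the original argument), the Leibniz rule and the Jacobi identity can be checked symbolically:

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson(f, g):
    # Canonical bracket on R^2 with omega = dq ^ dp, convention {q, p} = 1
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

# Three arbitrary (hypothetical) observables
f, g, h = q**2 * p, sp.sin(q), p**3

# Leibniz rule: f star (g . h) = (f star g) . h + g . (f star h)
assert sp.expand(poisson(f, g * h) - (poisson(f, g) * h + g * poisson(f, h))) == 0

# Jacobi identity
jacobi = (poisson(f, poisson(g, h)) + poisson(g, poisson(h, f))
          + poisson(h, poisson(f, g)))
assert sp.expand(jacobi) == 0
```

The observables `f`, `g`, `h` are arbitrary choices; both identities hold for any smooth functions, which is what the symbolic cancellation illustrates.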
Let us make a series of comments on the definition of the Poisson algebra of classical observables in order to motivate the terminology and explain the relation between the algebraic structures and the two-fold role of observables in classical kinematics.
First, axioms \[Psym\] and \[Passoc\] turn $({\mathcal{A}}^C, \bullet)$ into a Jordan algebra[^5]. Moreover, notice how the very definition of the Jordan product of two classical observables involves solely their *numerical* role: $f\bullet g$ is defined as the observable whose *value* at each state is the product of the *values* of the observables $f$ and $g$ at the same state. Conversely, the set $spec(f) \subset {\mathbb{R}}$ of values of the observable $f$ is in fact completely determined by its position within the Jordan algebra $({\mathcal{A}}^C, \bullet)$ [@landsman1994]. Indeed, it may be defined as $$spec(f):=\big\{\alpha \in {\mathbb{R}}\:\big|\: \nexists g \in ({\mathcal{A}}^C, \bullet) \text{ such that } (f-\alpha1) \bullet g = 1\big\},$$ in exact analogy with the definition of the spectrum of a linear operator[^6]. In this sense, *the Jordan structure of the algebra of classical observables completely encodes their numerical role*.
Similarly, axioms \[Pasym\] and \[PJac\] turn $({\mathcal{A}}^C, \star)$ into a Lie algebra. Only axiom \[PLeib\] establishes a relation between the otherwise unrelated Jordan and Lie structures of the algebra of classical observables. Given an observable $f \in {\mathcal{A}}^C$, consider the linear operator $v_f$ whose action on any element $g\in {\mathcal{A}}^C$ is defined by $v_f(g):=f \star g$. The Leibniz rule states that the linear operator $v_f$ is in fact a derivation on the Jordan algebra $({\mathcal{A}}^C, \bullet)$. Now, derivations on an algebra of smooth functions over a manifold are nothing but vector fields: $$Der({\mathcal{C}^\infty}(S, {\mathbb{R}}), \bullet) = \Gamma(TS),$$ and it is easy to show that the derivation $v_f$ leaves the symplectic form invariant. Hence, the Leibniz rule guarantees the existence of a map $$\label{map:cot}
v_{-}: {\mathcal{A}}^C \longrightarrow {\mathfrak{t}}^C$$ that, to any classical observable $f$, associates an infinitesimal state transformation $v_f$. The vector field $v_f$ is more commonly called the *Hamiltonian vector field* associated to $f$, and $v_{-}$ the *Hamiltonian map*. It is the technical tool that captures the transformational role of classical observables. In particular, the subset ${\mathfrak{t}}_{\mathcal{A}}^C$ of Hamiltonian vector fields represents the set of infinitesimal transformations arising from classical observables.
From this point of view, the Jacobi identity is the requirement that this map be a morphism of Lie algebras: $\begin{tikzcd}
({\mathcal{A}}^C, \star) \ar[r, "v_{-}"] & {\mathfrak{t}}^C.
\end{tikzcd}$ Indeed, axiom \[PJac\] may be rewritten as: $$\begin{aligned}
v_{f\star g}(h) = v_f \circ v_g(h) - v_g \circ v_f(h) =:[v_f, v_g](h).\footnotemark\end{aligned}$$ Moreover, since the kernel of the map $v_{-}$ is the set of constant functions[^7], we then have the isomorphism of Lie algebras $$\label{iso:cot}
({\mathcal{A}}^C/ {\mathbb{R}}, \star) \simeq {\mathfrak{t}}^C_{\mathcal{A}}.$$ In other words, “up to a constant”, *the transformational role of classical observables is found by simply forgetting the Jordan product and focusing on the Lie structure*.
To sum up, the following picture emerges (see ): In the standard geometric formulation of classical kinematics, the primitive notion from which one constructs all the others is the notion of ‘state’. Classical observables are defined by their numerical role, which yields the *commutative* Jordan algebra $({\mathcal{C}^\infty}(S, {\mathbb{R}}), \bullet)$. Thus, classical observables are primarily seen as *quantities*. Their transformational role, on the other hand, appears only as a secondary feature, defined in a subsequent stage. It is defined through the addition of a *non-commutative* algebraic structure, the Lie product or Poisson bracket, induced by the geometric symplectic structure present on the classical space of states.
$${\small\begin{tikzcd}[row sep=small]
&&&& \parbox{7em}{\centering \normalsize \textbf{numerical role}\\ \textbf{of observables}} \ar[dd, "v_{-}", Rightarrow]
& \parbox{7em}{\centering \emph{Jordan structure}\\ (commutative, associative)}\\
&\boxed{\normalsize\textsc{states}} \ar[rrru, bend left=20, Rightarrow, line width = 0.23mm]\\
\parbox{4em}{\centering \emph{symplectic structure}}
&&&& \parbox{9em}{\centering \normalsize transformational role of observables} \ar[lllu, bend left = 20, dashed]
& \parbox{8em}{\centering \emph{Lie structure}\\ (non-commutative, non-associative)}
\end{tikzcd}}$$
Standard quantum kinematics {#standardQ}
---------------------------
In the standard formulation of quantum kinematics, the starting point is an abstract Hilbert space ${\mathcal{H}}$, usually infinite-dimensional. In order to facilitate the comparison with the classical case, we will again identify the mathematical structures used to describe the four fundamental objects: the space of quantum states ${\mathcal{S}}^Q$, the group of quantum global state transformations $T^Q$, the Lie algebra of quantum infinitesimal state transformations ${\mathfrak{t}}^Q$ and the algebra of quantum observables ${\mathcal{A}}^Q$.
\[def:Hilb\] A complex *Hilbert space* is a complex vector space ${\mathcal{H}}$ equipped with one additional structure: a Hermitian, positive-definite map $\langle \cdot, \cdot \rangle: {\mathcal{H}}\times {\mathcal{H}}\longrightarrow {\mathbb{C}}$ such that the associated metric $d: {\mathcal{H}}\times {\mathcal{H}}\longrightarrow {\mathbb{R}}^+$ defined as $d(\psi, \varphi) := \sqrt{\langle \varphi - \psi, \varphi - \psi\rangle}$ turns $({\mathcal{H}}, d)$ into a complete metric space.
For a quantum system described by ${\mathcal{H}}$, states are given by rays of the Hilbert space—that is, by one-dimensional subspaces of ${\mathcal{H}}$.
The group $T^Q$ of *global state transformations* is the group $Aut({\mathcal{H}})= U({\mathcal{H}})$ of unitary operators. It is the subgroup of linear operators $U: {\mathcal{H}}\longrightarrow {\mathcal{H}}$ such that $U^*= U^{-1}$. The Lie algebra ${\mathfrak{t}}^Q$ of *infinitesimal state transformations* is the Lie algebra $({\mathcal{B}}_{i{\mathbb{R}}}, [\cdot, \cdot])$ of bounded anti-self-adjoint operators[^8].
Finally, a quantum observable is described by a bounded self-adjoint operator. The algebra of observables ${\mathcal{B}_{\mathbb{R}}}({\mathcal{H}})$ has the structure of a non-associative Jordan-Lie algebra:
\[def:naJL\] A *non-associative Jordan-Lie algebra* is a real vector space ${\mathcal{A}}^Q$ (usually infinite-dimensional) equipped with two additional structures: a Jordan product $\bullet$ and a Lie product $\star$ such that
1. $\bullet$ is a bilinear symmetric product, \[naJLsym\]
2. $\star$ is a bilinear anti-symmetric product,\[naJLasym\]
3. $\star$ satisfies the Jacobi identity: $F\star (G\star H) + G\star (H\star F) + H\star (F\star G) = 0$,\[naJLJac\]
4. \[naJLLeib\]$\star$ satisfies the Leibniz rule with respect to $\bullet$: $$F\star (G \bullet H) = (F \star G)\bullet H + G \bullet (F \star H),$$
5. \[naJLassoc\] $\bullet$ and $\star$ satisfy the associator rule: $(F \bullet G) \bullet H - F \bullet (G\bullet H) = (F \star H) \star G.$
In this case, both the Jordan and Lie products are related to the composition of operators $\circ$, by means of the anti-commutator and ($i$-times) the commutator respectively: $$\begin{aligned}
F\star G :=& \frac{i}{2}[F, G]_{-} = \frac{i}{2}(F \circ G-G \circ F)\:\:
\label{def:qjp}\\
F \bullet G :=& \frac{1}{2}[F, G]_+= \frac{1}{2}(F\circ G + G\circ F)\footnotemark.
\label{def:qlp}\end{aligned}$$ As in classical kinematics, the two natural algebraic structures present on the set of quantum observables can be seen as the manifestation of the two-fold role of observables. This time, however, the transformational role of quantum observables is much easier to perceive. Indeed, the quantum analogue of the classical map is here simply defined as $$\begin{aligned}
V_{-}: {\mathcal{A}}^Q &\longrightarrow {\mathfrak{t}}^Q \label{map:qot}\\
F &\longmapsto iF\nonumber.\end{aligned}$$ In other words, given a quantum observable $F$, the associated generator of state transformations is just the anti-self-adjoint operator obtained through multiplication by $i$. The map $V_{-}$ is obviously an isomorphism of Lie algebras: $$({\mathcal{A}}^Q, \star) \simeq {\mathfrak{t}}^Q,
\label{iso:qot}$$ which should be compared with its classical analogue. Again, this means that *considering quantum observables solely in their transformational role—that is, ignoring their numerical role—corresponds exactly to focusing only on the Lie structure and forgetting the second algebraic structure* (here, the Jordan product).
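The map $V_{-}$ can be made concrete in finite dimensions. As a toy numerical sketch (not part of the original text), for a random self-adjoint matrix $F$ the generator $iF$ is anti-self-adjoint, and exponentiating it yields a unitary operator, i.e., a global state transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
F = (A + A.conj().T) / 2                 # a self-adjoint "observable"

V_F = 1j * F                             # V_F = iF: the generator assigned by V_-
assert np.allclose(V_F.conj().T, -V_F)   # anti-self-adjoint

# Exponentiating the generator via the spectral theorem gives exp(iF),
# which is unitary -- a global state transformation.
lam, W = np.linalg.eigh(F)
U = W @ np.diag(np.exp(1j * lam)) @ W.conj().T
assert np.allclose(U.conj().T @ U, np.eye(3))
```

The spectral-theorem route to the exponential is chosen here only because it keeps the example dependency-free; any matrix exponential would do.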
Mathematically, the above statement is certainly trivial. But this triviality points to the fact that, whereas in classical kinematics there was an emphasis on the numerical role of observables, quantum kinematics, at least in the standard Hilbert space formulation, presents the reverse situation: *quantum observables are defined by their role as generators of state transformations* and the reading of their numerical role is more involved. From this perspective, it is not surprising that specific quantum observables are sometimes explicitly defined through their transformational role[@townsend2000].
On the other hand, the set of possible values of a quantum observable $F$ is completely determined by the Jordan structure. A simple way of seeing this is to recall that the spectrum of $F$ coincides with the ‘Gelfand spectrum’ of the $C^*$-algebra $(C^*(F), \circ)$ generated by $F$[^9]. But when $F$ is self-adjoint, this is a *commutative* subalgebra of ${\mathcal{B}}({\mathcal{H}})$[^10], in which case the composition $\circ$ and the anti-commutator $\bullet$ on $C^*(F)$ are the same operation. Therefore, as it was the case for the Classical, the Jordan structure encodes all the information of the numerical role of quantum observables.
At this point, it is worth drawing the analogue of for the standard formulation of quantum kinematics. Now, the primitive mathematical structure from which all others are constructed is that of a Hilbert space ${\mathcal{H}}$. But ${\mathcal{H}}$ corresponds neither to the space of states (given by ${\mathbb{P}\mathcal{H}}$) nor to the algebra of observables (given by ${\mathcal{B}_{\mathbb{R}}}({\mathcal{H}})$). Thus, one can no longer say which, among “states" and “observables", is the primitive notion of the formulation. The two roles of observables, on the other hand, are not on the same footing: observables are here defined by their transformational role (as operators on states) and their numerical role only comes in a second step (as expectation-values of operators). The conceptual priority between the two roles of observables is therefore reversed with respect to the classical case discussed in the previous section. The conceptual diagram is summarised below.
[$$\begin{tikzcd}[row sep=small]
&&&& \parbox{8em}{\centering \normalsize numerical role\\ of observables} \ar[dd, Leftarrow] \ar[lllld, bend right = 20, dashed, leftrightarrow]
& \parbox{7em}{\centering \emph{Jordan structure}\\ (commutative, non-associative)}\\
\parbox{3em}{\normalsize \textbf{states}} & &
{\large\boxed{{\mathcal{H}}}} \ar[ll, Rightarrow, line width = 0.23mm]
\ar[rrd, Rightarrow, line width = 0.23mm]
\\
&&&& \parbox{9em}{\centering \normalsize \textbf{transformational}\\\textbf{role of observables}} \ar[llllu, bend left = 20, dashed]
& \parbox{8em}{\centering \emph{Lie structure}\\ (non-commutative, non-associative)}
\end{tikzcd}$$]{}
Classical/Quantum: from associative to non-associative Jordan-Lie algebras {#assoc/nassoc}
--------------------------------------------------------------------------
The striking similarity just brought to light between the classical and quantum algebras of observables motivates the definition of a general (not necessarily non-associative) Jordan-Lie algebra, which encapsulates both the classical and the quantum cases[@landsman1998]:
\[def:JL\] A *general Jordan-Lie algebra* is a real vector space ${\mathcal{A}}$ equipped with two additional structures: a Jordan product $\bullet$ and a Lie product $\star$ such that
1. $\bullet$ is a bilinear symmetric product, \[JLsym\]
2. $\star$ is a bilinear anti-symmetric product,\[JLasym\]
3. $\star$ satisfies the Jacobi identity: $F\star (G\star H) + G\star (H\star F) + H\star (F\star G) = 0$,\[JLJac\]
4. \[JLLeib\]$\star$ satisfies the Leibniz rule with respect to $\bullet$: $$F\star (G \bullet H) = (F \star G)\bullet H + G \bullet (F \star H),$$
5. \[JLassoc\] $\bullet$ and $\star$ satisfy the associator rule: $$\exists \kappa \in {\mathbb{R}}, (F \bullet G) \bullet H - F \bullet (G\bullet H) = \kappa^2(F \star H) \star G.$$
Only the last axiom differentiates the classical and quantum algebras of observables. When $\kappa=0$, the Jordan product is associative and one gets the definition of a Poisson algebra describing classical observables. When $\kappa=1$, one gets the previous definition for the algebra of quantum observables with a non-associative Jordan product. In fact, whenever $\kappa \neq 0$, one may always rescale the Lie product so as to yield $\kappa = 1$. Therefore, the world of Jordan-Lie algebras is sharply divided into the sole cases of $\kappa = 0$ (corresponding to classical mechanics) and $\kappa = 1$ (corresponding to quantum mechanics). In this precise sense, one can say that *the transition from classical observables to quantum observables is the transition from associative Jordan-Lie algebras to non-associative Jordan-Lie algebras*.
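Both values of $\kappa$ can be checked directly. The following is an illustrative numerical sketch (not part of the original argument): for pointwise multiplication of function values the associator vanishes ($\kappa=0$), while for Hermitian matrices with $F\bullet G=\tfrac{1}{2}(FG+GF)$ and $F\star G=\tfrac{i}{2}(FG-GF)$ the associator rule holds exactly with $\kappa=1$.

```python
import numpy as np

rng = np.random.default_rng(1)

def hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

def jordan(A, B):  # quantum Jordan product: the anti-commutator
    return (A @ B + B @ A) / 2

def lie(A, B):     # quantum Lie product: i/2 times the commutator
    return 1j * (A @ B - B @ A) / 2

F, G, H = (hermitian(4) for _ in range(3))
associator = jordan(jordan(F, G), H) - jordan(F, jordan(G, H))
# kappa = 1: the associator rule holds exactly for Hermitian matrices
assert np.allclose(associator, lie(lie(F, H), G))

# kappa = 0: for functions, the (pointwise) Jordan product is associative
f, g, h = rng.normal(size=(3, 100))   # sampled values of three "observables"
assert np.allclose((f * g) * h - f * (g * h), 0)
```

The matrix identity is exact (not merely approximate): expanding the anti-commutators shows the Jordan associator equals $-\tfrac{1}{4}[[F,H],G]$, which is precisely $(F\star H)\star G$.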
Characterizing the algebraic difference between Classical and Quantum in terms of the associativity or non-associativity of the Jordan product may come as a surprise to the reader more familiar with the widespread conception “classical/quantum = commutative/non-commutative”. However, it is by no means clear how to make this heuristic equation precise. As is unfortunately often the case, the equation can be taken to mean that in Classical Kinematics one always has $fg-gf=0$ whereas in Quantum Kinematics one has in general $[F,G]\neq 0$. But this point of view adopts a wrong analogy between the two kinematics. Indeed, instead of comparing either the full algebras of observables (with both the Jordan and Lie structures), the two commutative algebras of observables-as-quantities (with only the Jordan structure) or else the two non-commutative algebras of observables-as-transformations (with only the Lie structure), this point of view on the Classical/Quantum transition compares point-wise multiplications of functions with the commutator of operators—that is, the Jordan structure of classical observables with the Lie structure of quantum observables...[^11]
Another widespread, and mathematically more sophisticated, point of view from which one could try to give meaning to the commutativity/non-commutativity idea is that of $C^*$-algebras[^12]: classical observables would be described by commutative $C^*$-algebras and quantum observables by non-commutative ones. However, although partially correct, this viewpoint fails to capture the whole situation. It is indeed true that the full algebra of quantum observables can equivalently be described as a real non-associative Jordan-Lie-Banach algebra or as a complex non-commutative $C^*$-algebra. But, on the other hand, it is simply false that the full *Poisson* algebra of classical observables can be described as a commutative $C^*$-algebra. In fact, describing the set of classical observables as a commutative $C^*$-algebra is equivalent to completely ignoring the classical Poisson structure![^13]
Thus, when it comes to characterizing the algebraic difference between both kinematics, it seems better to abandon the commutativity/non-commutativity picture and adopt the associativity/non-associativity opposition furnished by the language of Jordan-Lie algebras. The main question then becomes to understand the meaning of the associator rule, which is crucial in distinguishing the two kinematics. When $\kappa\neq 0$, as in the quantum case, a new relation between the Jordan and Lie structures of the algebra of observables is introduced. Therefore, one would expect that the precise way in which the two roles of observables are intertwined differs in the two theories. In this regard, it is interesting to note, as has been done in [@catren2014a], that two of the most notable features of the Quantum—namely, the eigenstate-eigenvalue link and the existence of a condition for observables to be compatible—may be reformulated as conditions relating the numerical and transformational roles:
1. *Eigenstate-eigenvalue link*. In the standard formulation of quantum kinematics, an observable $F$ has a definite value only when the state of the system is described by an eigenstate of the operator $F$: $F|\rho\rangle= \tilde F(\rho) |\rho \rangle$. If we denote by $[\rho]$ the ray in ${\mathcal{H}}$ describing the state of the system, this condition may be reformulated as: *the observable $F$ has a definite **value** on the state $[\rho]$ if and only if the state is left invariant by the **transformations** associated to the observable: $e^{tV_F}[\rho]= [\rho]$*.
2. *Compatibility of observables*. Two observables $F$ and $G$ are said to be compatible if, for any $\epsilon \in {\mathbb{R}}^+$, it is possible to prepare the system in a state $\rho$ such that $\Delta_\rho(F) + \Delta_\rho(G) < \epsilon$, where $\Delta_\rho(F)$ denotes the uncertainty of $F$[^14]. Thus, the compatibility of two observables is a notion that applies to their numerical role. However, as is well known, two observables are compatible if and only if their Lie product vanishes: $F \star G = \frac{i}{2}[F,G]=0$. This may be reformulated as: *the **values** of two quantum observables $F$ and $G$ are compatible if and only if $F$ is invariant under the **transformations** generated by $G$ (or vice versa)*.
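Both reformulations can be illustrated numerically. The sketch below is an illustration rather than part of the text's argument; it identifies $\Delta_\rho(F)^2$ with the variance $\langle F^2\rangle - \langle F\rangle^2$, checks that commuting observables admit a common state on which both have sharp values, and that for a non-commuting pair every eigenstate of one observable leaves a strictly positive uncertainty for the other:

```python
import numpy as np

def variance(op, psi):
    """Delta_rho(F)^2 = <F^2> - <F>^2 on the unit vector psi."""
    exp_f  = np.vdot(psi, op @ psi).real
    exp_f2 = np.vdot(psi, op @ op @ psi).real
    return exp_f2 - exp_f ** 2

# Two commuting observables (simultaneously diagonal): their Lie
# product vanishes, and a common eigenvector gives both sharp values.
F = np.diag([1.0, 2.0, 5.0])
G = np.diag([3.0, 3.0, 7.0])
assert np.allclose(F @ G - G @ F, 0)
common = np.array([0.0, 0.0, 1.0])
assert abs(variance(F, common)) < 1e-12
assert abs(variance(G, common)) < 1e-12

# A non-commuting pair (Pauli x and z): on each eigenstate of sigma_z,
# sigma_x keeps a strictly positive uncertainty.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
for psi in (np.array([1.0, 0.0], dtype=complex),
            np.array([0.0, 1.0], dtype=complex)):
    assert variance(sx, psi) > 0.5
print("compatibility checks passed")
```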
The possibility of these reformulations hints at the idea that the quantum is indeed characterized by a particular interplay between the two roles of observables. But to confirm this, it is necessary to have a deeper understanding of the Jordan and Lie structures governing the algebra of observables.
The geometric formulation of quantum kinematics {#geometric}
===============================================
The comparison of the standard formulations of both kinematics brings out a striking structural similarity between the algebras of classical and quantum observables. They are both equipped with two products—one commutative and one anti-commutative—whose existence may be seen as a manifestation of the fundamental two-fold role of the observables of a physical system. On the other hand, the classical and quantum descriptions of the space of states seem at first sight not to have any points in common. One could then be inclined to think that, although the non-associativity of the Jordan product has been spotted as the main algebraic difference between classical and quantum observables, the really crucial departure of the Quantum with respect to the Classical lies in the nature of the space of states. For in the dominant conception of quantum mechanics, the linearity of the space of states is concomitant with the *superposition principle*, which in turn is often regarded as one—or perhaps *the*—fundamental feature of the theory, as Dirac asserts:
> [For this purpose \[of building up quantum mechanics\] a new set of accurate laws of nature is required. One of the most fundamental and the most drastic of these is the *Principle of Superposition*.[@dirac1958]]{}
From this perspective, the apparently radical difference between the geometric space of classical states and the linear space of quantum states may be perceived as the natural—and almost necessary—manifestation of this “drastic” new feature of the Quantum. But in claiming so, one forgets a central point, which indicates this whole idea cannot be the end of the story: the “true” quantum space, in which points do represent states, is the projective Hilbert space ${\mathbb{P}}{\mathcal{H}}$, a genuine *non-linear* manifold.
The principle of superposition has certainly been a powerful idea, with a strong influence on the heuristics of the Quantum, and its link with the linearity of Hilbert spaces has been in my opinion one of the main reasons for the widespread use of the standard formalism. However, in the attempt to compare classical and quantum kinematics, due care should be taken to express both kinematics in as similar terms as possible. It becomes therefore natural to attempt a reformulation of the quantum situation in a language resembling the classical one—that is, to forget Hilbert spaces and to develop the quantum theory directly in terms of the intrinsic geometry of ${\mathbb{P}}{\mathcal{H}}$.
The task of this reformulation is sometimes referred to as the “*geometric or delinearization program*”. Its explicit goal is to reestablish the fruitful link, witnessed in the classical case, between the geometry of the space of states and the algebraic structures of observables. Some of the most important references are [@kibble1979; @brody2001; @ashtekar1997; @schilling1996; @cirelli1999; @cirelli2003][^15].
The central result upon which the whole geometric program is based is the fact that the projective Hilbert space ${\mathbb{P}\mathcal{H}}$ is a Kähler manifold.
A *Kähler manifold* is a real manifold $M$ (possibly infinite-dimensional, in which case $M$ should be taken to be a Banach manifold) equipped with two additional structures: a symplectic form $\omega$ and an integrable almost complex structure $J$ which is compatible with $\omega$. This means:
1. $J$ is a vector bundle isomorphism $J : TM \longrightarrow TM$ such that $J^2 =-1$,
2. For any point $p$ of $M$ and any two vectors $v,w$ of $T_pM$, $$\omega (Jv, Jw) = \omega (v, w).$$
Given this, one can naturally define a Riemannian metric $g \in \vee^2 \Gamma(TM)$ by: $$g(v,w):= \omega(v, Jw).
\label{def:qm}$$ In fact, a Kähler manifold can also be defined as a triple $(M, g, J)$ where $g$ is a Riemannian metric and $J$ is an integrable almost complex structure compatible with $g$[^16]. Equation \eqref{def:qm} is then perceived as the definition of the symplectic form.
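These compatibility conditions can be checked on the simplest example, ${\mathbb{C}} \simeq {\mathbb{R}}^2$ with its standard symplectic form and complex structure. The sketch below is a minimal illustration, not drawn from the text; it verifies $J^2 = -1$, the invariance $\omega(Jv, Jw) = \omega(v, w)$, and that $g(v, w) := \omega(v, Jw)$ is indeed symmetric and positive-definite:

```python
import numpy as np

# Simplest Kaehler model: C = R^2 with its standard structures.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # multiplication by i
omega = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])      # omega(v, w) = v^T omega w

assert np.allclose(J @ J, -np.eye(2))         # J^2 = -1
assert np.allclose(J.T @ omega @ J, omega)    # omega(Jv, Jw) = omega(v, w)

# g(v, w) := omega(v, Jw); as a matrix, g = omega J.
g = omega @ J
assert np.allclose(g, g.T)                    # symmetric
assert np.all(np.linalg.eigvalsh(g) > 0)      # positive-definite
print("Kaehler compatibility verified")
```

In this model $g$ comes out as the standard Euclidean metric, which is the finite-dimensional shadow of the real part of the Hermitian product discussed below.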
The important fact for us is that *the quantum space of states is both a symplectic manifold and a Riemannian manifold*. It thus has a very rich geometry which can be used to provide an alternative description of the full Jordan-Lie algebra of quantum observables, with no reference to operators on the Hilbert space. This is achieved in two steps, as we explain in the subsequent subsections. Here, we follow closely [@ashtekar1997].
The symplectic-Lie structure of the quantum {#quantum symplectic-Lie}
-------------------------------------------
Let ${\mathbb{S}\mathcal{H}}$ denote the collection of unit vectors of the Hilbert space ${\mathcal{H}}$, and consider the pair of arrows $$\begin{tikzcd}[column sep=large]
{\mathcal{H}}\ar[r, hookleftarrow, "i"]& {\mathbb{S}\mathcal{H}}\ar[r, twoheadrightarrow,"\tau"] & {\mathbb{P}}{\mathcal{H}},
\end{tikzcd}
\label{map:ph}$$ where the left arrow is simply the injection saying that ${\mathbb{S}\mathcal{H}}$ is a submanifold of ${\mathcal{H}}$, and the right arrow is the projection describing the unit sphere as a $U(1)$-fibre bundle over the projective Hilbert space (in other words, it describes ${\mathbb{P}\mathcal{H}}$ as a quotient: ${\mathbb{P}\mathcal{H}}\simeq {\mathbb{S}\mathcal{H}}/ U(1)$).
Consider now the map $\:\:\widehat \:\:: {\mathcal{B}_{\mathbb{R}}}({\mathcal{H}}) \longrightarrow {\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})$ that, to a given self-adjoint operator $F$, associates the real-valued function defined by $$\widehat F(p) :=\langle \phi, F\phi\rangle, \text{ where } \phi \in \tau^{-1}(p),$$ and let us denote by ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_{\mathcal{K}}$ the image of this map.
$\:\:\widehat\:\:\:$ is obviously an injection of real vector spaces. There is however no hope for this map to be a bijection, as may readily be seen by considering finite-dimensional Hilbert spaces: in this case, ${\mathcal{B}_{\mathbb{R}}}({\mathcal{H}})$ is finite-dimensional whereas ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})$ is infinite-dimensional. Through the map $\:\:\widehat\:\:\:,$ one can therefore think of self-adjoint operators as being some ‘very particular’ functions on the projective Hilbert space. The difficult question is to specify what ‘very particular’ means—*i.e.* to characterize ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_{\mathcal{K}}$ inside ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})$.
A first step in this direction is to use the symplectic structure of the quantum space of states. Let us quickly recall how it is defined. The easiest way is to start by noticing that the Hilbert space ${\mathcal{H}}$ is itself a symplectic manifold. To see this, it is best to change perspectives and consider ${\mathcal{H}}$ from the point of view of real numbers rather than complex numbers. First, one views ${\mathcal{H}}$ as a real vector space equipped with a complex structure $J$. This simply means that the multiplication of a vector by a complex number is now considered as the result of two operations—multiplication by real numbers and action of the linear operator $J$: for $z \in {\mathbb{C}}$ and $\phi \in {\mathcal{H}}$, we have $z \phi = \text{Re}(z) \phi + \text{Im}(z)J \phi$. Second, one also decomposes the Hermitian product of two vectors into its real and imaginary parts, and uses the natural isomorphism $T{\mathcal{H}}\simeq {\mathcal{H}}\times {\mathcal{H}}$[^17] to define the tensor $\Omega \in \Gamma(T^*{\mathcal{H}}\otimes T^*{\mathcal{H}})$ by $$\Omega(V_\phi, V_\psi):= 4\text{Im}(\langle \phi, \psi \rangle).
\label{def:hs}$$ The conjugate-symmetry of the Hermitian product entails the anti-symmetry of $\Omega$, which is hence a 2-form. The non-degeneracy of the Hermitian product implies that $\Omega$ is non-degenerate, and, since it has constant coefficients, $\Omega$ is automatically closed. Therefore, $\Omega$ is a symplectic structure on ${\mathcal{H}}$. Given this, the symplectic form on the projective Hilbert space is the unique non-degenerate and closed 2-form $\omega \in \Omega^2({\mathbb{P}\mathcal{H}})$ such that $\tau^*\omega = \iota^* \Omega$ (the pull-back of $\omega$ to the unit sphere coincides with the restriction to ${\mathbb{S}\mathcal{H}}$ of the symplectic form on ${\mathcal{H}}$)[^18]. The induced Poisson bracket on ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})$ will be denoted by $\{\cdot, \cdot\}_{{\mathbb{P}\mathcal{H}}}$.
The question now is whether the symplectic structure plays in quantum kinematics exactly the same role as it does in classical kinematics—namely, whether it allows one to define both the Lie product on the algebra of quantum observables, and the generator of state transformations associated to any given observable. As it turns out, the answer is positive. Indeed, it can be shown [@ashtekar1997; @landsman1998] that the Hamiltonian vector field on ${\mathbb{P}\mathcal{H}}$ associated to the self-adjoint operator $F$ (regarded through the map $\:\:\widehat\:\:\:$ as a function on ${\mathbb{P}\mathcal{H}}$) coincides with the projection of the vector field $V_F$ (which is defined on ${\mathcal{H}}$ but is in fact tangent to ${\mathbb{S}\mathcal{H}}$ since it generates unitary transformations). In other words, we have $$\forall F \in {\mathcal{B}_{\mathbb{R}}}({\mathcal{H}}), \: v_{\widehat F}=\tau_*V_F \in \Gamma(T{\mathbb{P}\mathcal{H}}).
\label{eq:qvf}$$ Moreover, we also have $$\{\widehat F, \widehat K\}_{{\mathbb{P}\mathcal{H}}} = \frac{i}{2}\widehat{[ F, K]}
\label{eq:pbelp}$$ which means that the map $\:\:\widehat\:\:\:$ is an injection of Lie algebras: $$\begin{aligned}
\begin{tikzcd}[]
({\mathcal{B}_{\mathbb{R}}}({\mathcal{H}}), \frac{i}{2}[\cdot, \cdot]) \ar[r, hookrightarrow, "\widehat\:"] &({\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}}), \{\cdot, \cdot\}_{{\mathbb{P}\mathcal{H}}}).
\end{tikzcd}
\label{inj:bf}\end{aligned}$$ Hence, the commutator of bounded self-adjoint operators may be seen as the restriction to ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_{\mathcal{K}}$ of the Poisson bracket induced by the symplectic structure on ${\mathbb{P}\mathcal{H}}$. Together, equations \eqref{eq:qvf} and \eqref{eq:pbelp} show that, *as far as the Lie structure and the transformational role of quantum observables are concerned, we might as well forget self-adjoint operators and reason in terms of expectation-value functions and the intrinsic symplectic geometry of the quantum space of states*.
This amounts to an impressive merger of the two kinematics. Any space of states, be it classical or quantum, is a symplectic manifold and the symplectic structure plays exactly the same role in both cases: it induces the Lie structure on the algebra of observables and governs their transformational role. Or, to put it differently, if one decides to restrict attention and focus only on the transformational role of observables, then there is no difference whatsoever between classical and quantum kinematics. In particular, any statement of classical mechanics which only involves observables-as-transformations goes unchanged when passing to quantum mechanics[^19].
The Riemannian-Jordan structure of the Quantum {#quantum Riemannian-Jordan}
----------------------------------------------
Despite the injection \eqref{inj:bf}, the full *Jordan*-Lie algebra $({\mathcal{B}_{\mathbb{R}}}({\mathcal{H}}), \frac{i}{2}[\cdot, \cdot], \frac{1}{2}[\cdot, \cdot]_+)$ cannot be seen as a subalgebra of the Poisson algebra $({\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}}), \{\cdot, \cdot\}_{{\mathbb{P}\mathcal{H}}}, \cdot)$. The obstruction, of course, lies in the Jordan structure: although quantum observables may be represented as functions on the space of states, the associative point-wise multiplication of functions cannot yield the non-associative Jordan product of quantum observables.
Another simple way of understanding the obstruction is to reflect on the representation of the *square* of an observable[^20]. Suppose that, for any given observable $\texttt{f}$ represented by the abstract element $f$, it is known how to construct the abstract element $f^2$ representing the observable $\texttt{f}^2$ (operationally defined as the observable associated with squaring the numerical results of all measurements of $\texttt{f}$). Then, this operation of taking squares allows us to define a Jordan product $\bullet$ on the algebra of observables by $$f\bullet k:=\frac{1}{4}((f+k)^2 -(f-k)^2).$$ Thus, in general, given an observable $\texttt{f}$ represented by the abstract element $f$, its square $\texttt{f}^2$ should be represented by the element $f \bullet f$, where $\bullet$ is the Jordan product. In classical kinematics, if $\texttt{f}$ is represented by a certain real-valued function $f$, then the observable $\texttt{k}:=\texttt{f}^2$ is represented simply by the square function $f^2$. But, as we know, this is not the answer given by quantum kinematics: the observable $\texttt{f}$ is represented by the self-adjoint operator $F$ and the observable $\texttt{k}$ by the self-adjoint operator $F^2$. Thus, in terms of the functions over the space of states, we have $k = \widehat{F^2} \neq (\widehat F)^2 = f^2$. This shows that indeed point-wise multiplication is not the right structure for the quantum case. Instead, one should try to define a Jordan product $\bullet$ satisfying $\widehat{F^2} = \widehat F \bullet \widehat F$.
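The failure of point-wise multiplication is easy to exhibit numerically. In the sketch below (an illustration, assuming nothing beyond the definitions above), $\widehat{F^2}$ and $(\widehat F)^2$ disagree on a generic state and coincide exactly on an eigenstate:

```python
import numpy as np

# A self-adjoint operator (sigma_z) and its expectation-value function.
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])

def hat(op, psi):
    """The function  op-hat([psi]) = <psi, op psi>  on rays, |psi| = 1."""
    return np.vdot(psi, op @ psi).real

# On a generic state, <F^2> and <F>^2 disagree: point-wise squaring of
# the function hat(F) does not represent the observable F^2 ...
psi = np.array([1.0, 1.0]) / np.sqrt(2)
assert not np.isclose(hat(F @ F, psi), hat(F, psi) ** 2)

# ... while on an eigenstate the two coincide (the gap is the variance,
# which vanishes exactly when F has a definite value).
eig = np.array([1.0, 0.0])
assert np.isclose(hat(F @ F, eig), hat(F, eig) ** 2)
print("squaring check passed")
```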
With little surprise, this is achieved by using the additional geometric structure present on the quantum space of states that we have ignored so far: the Riemannian metric $g$. This structure is defined in very similar fashion to the construction of the symplectic form. Now, one considers the real part of the Hermitian product to define the tensor $G \in \Gamma(T^*{\mathcal{H}}\otimes T^*{\mathcal{H}})$ by: $$G(V_\phi, V_\psi):= 4\text{Re}(\langle \phi, \psi \rangle).
\label{eq:hilbert_riemann}$$ This time, the conjugate-symmetry, positive-definiteness and non-degeneracy of the Hermitian product respectively imply the symmetry, positive-definiteness and non-degeneracy of $G$, which is hence a Riemannian metric on ${\mathcal{H}}$.
At this point, we may use diagram \eqref{map:ph} again to induce a Riemannian metric on the space of states. In the symplectic case, we regarded the isomorphism ${\mathbb{S}\mathcal{H}}/U(1) \simeq {\mathbb{P}\mathcal{H}}$ as the second stage of the Marsden-Weinstein symplectic reduction and this sufficed to ensure that ${\mathbb{P}\mathcal{H}}$ was also symplectic. Instead, we now adopt towards this isomorphism a different perspective, called by Ashtekar and Schilling the “Killing reduction”[@ashtekar1997]. It is the following: first, the restriction $i^*G$ of the metric $G$ to the unit sphere is again a metric and ${\mathbb{S}\mathcal{H}}$ then becomes a Riemannian manifold in its own right. Second, one regards the action of $U(1)$ on ${\mathcal{H}}$ as the one-parameter group of transformations generated by the vector field $V_{Id} \in \Gamma(T{\mathcal{H}})$ associated to the identity self-adjoint operator. Since these transformations preserve the Hermitian product, they also preserve the metric $G$. Thus $V_{Id}$ is a Killing vector. Moreover, this vector field is tangent to ${\mathbb{S}\mathcal{H}}$ and is hence also a Killing vector for $i^*G$. In this way, the isomorphism ${\mathbb{S}\mathcal{H}}/U(1) \simeq {\mathbb{P}\mathcal{H}}$ describes the projective Hilbert space as the space of all trajectories of the Killing vector field $V_{Id}$. By a result of Geroch[@geroch1971], we know that the resulting manifold is also Riemannian. The Riemannian metric on ${\mathbb{P}\mathcal{H}}$ is called the *Fubini-Study* metric and will be denoted by $g$[^21].
Given this metric, and in very similar fashion to the definition of the Poisson bracket in terms of the symplectic structure, one can define the following product on ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})$: $$\label{def:rb}
\forall f, k \in {\mathcal{C}^\infty}({\mathbb{P}}{\mathcal{H}}, {\mathbb{R}}), \:\: f \bullet k := g(v_f, v_k) + f\cdot k,$$ where, to the point-wise multiplication of functions $f\cdot k$, the metric adds a “Riemannian bracket” $(f, k) :=g(v_f, v_k)$. The result is a commutative and non-associative product. Thus, the presence of the Riemannian structure allows one to *deform* the usual commutative and associative algebra of functions into a commutative but non-associative algebra. Yet, this does not turn $({\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}}), \bullet, \{\cdot, \cdot\}_{{\mathbb{P}\mathcal{H}}})$ into a Jordan-Lie algebra, for $\bullet$ and $\{\cdot, \cdot\}_{{\mathbb{P}\mathcal{H}}}$ do not satisfy the associator rule in general.
Remarkably, however, one has the identity[@ashtekar1997]: $$\label{eq:qrj}
\forall F, K \in {\mathcal{B}_{\mathbb{R}}}({\mathcal{H}}), \widehat F \bullet \widehat K = \frac{1}{2}\widehat{[F,K]_+},$$ which has many important implications. First, it implies $\widehat F \bullet \widehat F = \widehat{F^2}$, as we wanted. Second, it shows that $({\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_{\mathcal{K}}, \bullet)$ is a subalgebra of $({\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}}), \bullet)$ and, more importantly, that *when restricted to this subalgebra* the new product $\bullet$ becomes a Jordan product. In other words, we now have the isomorphism of non-associative Jordan-Lie algebras: $$\begin{aligned}
\boxed{
\big({\mathcal{B}}_{\mathbb{R}}({\mathcal{H}}), \frac{1}{2}[\cdot, \cdot]_+, \frac{i}{2}[\cdot, \cdot]\big) \simeq \big({\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_{\mathcal{K}}, \bullet, \{\cdot, \cdot\}_{{\mathbb{P}}{\mathcal{H}}}\big).
}
\label{iso:operators_functions}\end{aligned}$$
In addition to its role in the definition of the Jordan product for quantum observables, the presence of a metric in the quantum space of states provides a very simple geometric interpretation of two other crucial aspects of quantum kinematics: its transition probability structure and the indeterminacy in the numerical value of observables. First, given a state $p$, the probability that a measurement of the observable $\widehat F$ will yield the result $\lambda$ is given by $$\text{Pr}(p, \widehat F = \lambda) = \cos^2\big(d_g(p, \Sigma_\lambda)\big)$$ where $\Sigma_\lambda$ is the subset of states having $\lambda$ as definite value of the observable $\widehat F$, and $d_g(p, \Sigma_\lambda)$ is the minimal distance between the state $p$ and the subset $\Sigma_\lambda$[^22]. In other words, the quantum transition probabilities appear here to be simply a measure of the distance in the quantum space of states. Second, from the combination of \eqref{def:rb} and \eqref{eq:qrj}, we get $$\label{eq:gu}
\Delta F = \widehat F \bullet \widehat F - \widehat F \cdot \widehat F = g(v_F, v_F),$$ which shows that the uncertainty of a quantum observable is nothing but the norm of the Hamiltonian vector field associated to it (more precisely, the variance $\Delta F$ is the squared norm $g(v_F, v_F)$, so the uncertainty is the norm $\sqrt{g(v_F, v_F)}$).
From the conceptual perspective that is ours, this last result is particularly enlightening. Indeed, in terms of the two-fold role of observables, this can also be expressed as: given a state $\rho$ and an observable $F$, *the uncertainty $\Delta F(\rho)$ in the numerical value of the observable $F$ is precisely a measure of how much the state $\rho$ is changed by the transformations generated by the observable*. In particular, we recover as a special case the relation, noted earlier, between definite-valuedness and invariance. Thus, it brings to the fore the existence in quantum kinematics of an interdependence between the numerical and transformational roles of observables which is absent in classical kinematics.
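The identities of this subsection can be tested numerically. The sketch below is an illustration, not part of the text: the normalization $g(v_F, v_K)(\psi) = \mathrm{Re}\langle F\psi, K\psi\rangle - \widehat F \widehat K$ for the Riemannian bracket is an assumption fixed by the uncertainty formula, and the Fubini-Study distance between rays is taken to be $\arccos|\langle \phi, \psi\rangle|$, a standard fact not derived here. With these assumptions we check the Jordan identity, the uncertainty formula, and the transition-probability formula for a non-degenerate eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(n):
    """A random unit vector, i.e. a representative of a random ray."""
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return v / np.linalg.norm(v)

def random_hermitian(n):
    """A random bounded self-adjoint operator on C^n."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

def hat(op, psi):
    """Expectation-value function  op-hat([psi]) = <psi, op psi>."""
    return np.vdot(psi, op @ psi).real

def riem_bracket(a, b, psi):
    """Riemannian bracket g(v_a, v_b), in the normalization fixed by
    the uncertainty formula (an assumption): Re<a psi, b psi> - <a><b>."""
    return np.vdot(a @ psi, b @ psi).real - hat(a, psi) * hat(b, psi)

n = 4
F, K = random_hermitian(n), random_hermitian(n)
psi = random_state(n)

# Jordan identity:  F-hat o K-hat = (1/2) [F, K]_+ -hat.
bullet = riem_bracket(F, K, psi) + hat(F, psi) * hat(K, psi)
assert np.isclose(bullet, hat((F @ K + K @ F) / 2, psi))

# Uncertainty formula: the variance of F is g(v_F, v_F).
assert np.isclose(riem_bracket(F, F, psi),
                  hat(F @ F, psi) - hat(F, psi) ** 2)

# Transition probability as a distance: for a non-degenerate eigenvalue,
# Pr(psi, F = lambda) = cos^2(d_FS)  with  d_FS = arccos |<psi, e_lambda>|.
evals, evecs = np.linalg.eigh(F)
e = evecs[:, 0]                     # eigenvector of the lowest eigenvalue
overlap = min(abs(np.vdot(psi, e)), 1.0)
d_fs = np.arccos(overlap)
born = abs(np.vdot(psi, e)) ** 2    # Born-rule probability
assert np.isclose(np.cos(d_fs) ** 2, born)
print("geometric identities verified")
```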
The geometric characterization of quantum observables {#geometric quantum observables}
-----------------------------------------------------
Although we have now reached a completely geometric definition of both the Jordan and Lie structures governing the algebra of quantum observables, the reference to operators on Hilbert spaces has not yet been eliminated altogether: the elements of the algebra ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_{\mathcal{K}}$ are still defined as expectation-value functions associated to bounded self-adjoint operators. Thus, the final step in the full geometric description of quantum observables is to furnish a criterion for deciding when a function $f \in {\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})$ is of this form.
Many different characterizations exist[@landsman1994; @shultz1982; @alfsen1976], but the simplest one—and the most relevant one from the point of view of the two-fold role of observables—was first found by Schilling[@schilling1996] (and shortly afterwards rediscovered by Cirelli, Gatti and Manià[@cirelli1999]).
\[thm:obs\] Let $f$ be a smooth real-valued function over the projective Hilbert space ${\mathbb{P}\mathcal{H}}$. The following two conditions are equivalent:
1. there exists a bounded self-adjoint operator $F \in {\mathcal{B}_{\mathbb{R}}}({\mathcal{H}})$ such that, for any $\phi \in {\mathbb{S}\mathcal{H}}$, $f([\phi]) = \langle \phi, F\phi\rangle$,
2. the Hamiltonian vector field associated to the function $f$ is also a Killing vector field.
In one direction, the equivalence is obvious. In the other, the proof is essentially a combination of Wigner’s theorem [@wigner1959; @bargmann1954] (which forces the one-parameter group of transformations generated by the Killing field to be a group of unitary transformations) and Stone’s theorem (which forces the unitary transformations to be generated by a self-adjoint operator). The delicate part of the proof is to show that it is indeed a *bounded* self-adjoint operator.
In this way, one reaches a mathematical definition of observables that applies equally well to classical and quantum kinematics:
\[def:o\] An *observable* is a smooth real-valued function on the space of states whose associated Hamiltonian vector field preserves all the geometric structures with which the space of states is equipped.
In the classical case, there is only the symplectic structure to preserve. Hence, any function $f$ does the job, since its Hamiltonian vector field $v_f$ automatically satisfies ${\mathcal{L}}_{v_f}\omega = 0$. But in the quantum case, there is also the metric to preserve. Accordingly, only those functions for which the symplectic gradient is also a Killing vector field will qualify as observables. Theorem \[thm:obs\] guarantees that these functions exactly coincide with the functions $\widehat F$ that are real expectation-value maps of bounded self-adjoint operators $F$. Moreover, it is important to notice that this last point only applies to the *projective* Hilbert space. Were one to insist on working at the level of ${\mathcal{H}}$, this geometric characterization of observables would fail, for there are too many functions preserving both the symplectic and Riemannian structures which do not arise as expectation-value maps of operators[@ashtekar1997].
The conceptual relevance of this definition should not be missed, as it highlights the essential importance of the two-fold role of observables in kinematics. For *it is precisely this two-fold role, numerical and transformational, that serves as a definition of what an observable is*. The standard definition of classical observables only involved their numerical role—they were defined as functions on the space of states—and did not apply to quantum kinematics. Conversely, the standard definition of quantum observables only involved their transformational role—they were defined as operators acting on states—and did not apply to classical kinematics. A posteriori, it is therefore most natural that the general mathematical definition of an observable, classical or quantum, should explicitly mention both the functions and the transformations.
Classical vs Quantum in the geometric formulation {#c vs q geometric}
-------------------------------------------------
The table below summarizes the conceptual understanding of classical and quantum kinematics that emerges from the analysis of their geometric formulations.
  ------------------- ----------------------------------- -------------------------------------------------------
                      **Classical kinematics**            **Quantum kinematics**
  Space of states     symplectic manifold $(M, \omega)$   Kähler manifold $({\mathbb{P}\mathcal{H}}, \omega, g)$
  Lie product         $\{f, k\}$                          $\{f, k\}_{{\mathbb{P}\mathcal{H}}}$
  Jordan product      $f \cdot k$                         $f \bullet k = g(v_f, v_k) + f \cdot k$
  Observables         $f$ preserving $\omega$             $f$ preserving $\omega$ and $g$
  ------------------- ----------------------------------- -------------------------------------------------------

  : Comparison of the main mathematical structures present in classical and quantum kinematics (geometric formulation).[]{data-label="tbl:quantum classical geometric"}
The comparison is striking: in contrast to the staggering difference between the two theories conveyed by the standard formalisms, the geometric point of view highlights the deep common ground shared by the classical and the quantum. It allows one, by the same token, to pinpoint more precisely the place where they differ. Following Schilling, it is indeed tempting to say that
> the fundamental distinction between the classical and quantum formalisms is the presence, in quantum mechanics, of a Riemannian metric. While the symplectic structure serves exactly the same role as that of classical mechanics, the metric describes those features of quantum mechanics which do not have classical analogues.[@schilling1996 p. 48]
Both the non-associativity of the Jordan product and the indeterminacy of the values for quantum observables explicitly involve the metric. This view—that the quantum world has one additional geometric structure, *with no analogue in the classical*, and that, in a loose sense, to quantize is to add a Riemannian metric to the space of states—is found in the vast majority of works which played an important role in developing the “geometrization or delinearization program” of quantum mechanics.
Nonetheless, this is *not* the impression conveyed by the comparative table. It is not so much that the classical analogue of the Riemannian structure is *missing* but rather, one would be tempted to say, that the classical analogue is *trivial*. Indeed, one gets the impression that the correct classical ‘Riemannian metric’ is $g=0$, for setting $g$ to vanish in the quantum formulas yields the classical ones. Of course, $g=0$ is not an actual metric, but this does suggest there may be yet another manner of formulating the two kinematical arenas, a manner in which they both exhibit the same two kinds of geometric structures, and it just so happens that one of these structures is trivial—and hence unnoticed—in classical kinematics.
Landsman’s axiomatization of quantum mechanics {#landsman}
==============================================
In the geometric approach to mechanics, the goal of a unifying programme is to find a notion of space which meets the following three requirements:
1. *Unification of states*: both the classical and quantum space of states fall under the same notion of space.
2. *Unification of observables:* there is a unique definition of the algebra of observables, which, when restricted to the classical case, yields a Poisson algebra, and when restricted to the quantum case yields a non-associative Jordan-Lie algebra.
3. *Characterization of the quantum*: there is a physically meaningful characterization of when a space of this sort is a quantum space of states.
As we have just seen, the formulation of quantum mechanics in terms of Kähler manifolds achieves the second requirement but it fails to meet the first one (and thus the last one).
At the end of the last century, Nicolaas P. Landsman developed an alternative approach which succeeds in meeting the three demands[@landsman1997; @landsman1998; @landsman1998c]. One can consider that the starting point of his approach is to extend the geometric formulation of mechanics to the case where there exist superselection rules. In this situation, the quantum space of states is no longer described by a single projective Hilbert space ${\mathbb{P}\mathcal{H}}$ but, instead, by a disjoint union of many: ${\mathcal{P}}^Q= \sqcup_\alpha {\mathbb{P}\mathcal{H}}_\alpha$. The classical analogue of this is to consider general Poisson manifolds instead of focusing only on symplectic manifolds[^23].
However, by performing such an extension, the Riemannian metric no longer suffices to define all the transition probabilities on the quantum space of states. The problem arises when considering two inequivalent states $p$ and $p'$ (that is, two states belonging to different superselection sectors ${\mathbb{P}\mathcal{H}}_\alpha$ and ${\mathbb{P}\mathcal{H}}_{\alpha'}$): the geometric formula $\text{Pr}(p, p') = \cos^2(d_g(p, p'))$ cannot be applied since there is no notion of distance between points of different sectors. In the light of this, the natural strategy is to reverse the priority between the metric and the transition probabilities: instead of considering the metric $g$ as a primitive notion and the transition probabilities as a derived notion, take the transition probabilities as a fundamental structure of the quantum space of states.
Poisson spaces with transition probability
------------------------------------------
The relevant notion of space coined by Landsman is that of a *Poisson space with a transition probability*. In preparation for the definition of this notion, we first need to introduce some terminology:
\[def:ps\] A *Poisson space* is a Hausdorff topological space ${\mathcal{P}}$ together with a collection $S_\alpha$ of symplectic manifolds, as well as continuous injections $\iota_\alpha: S_\alpha \hookrightarrow {\mathcal{P}}$, such that $${\mathcal{P}}= \bigsqcup\limits_{\alpha} \iota_\alpha(S_\alpha).\vspace*{-0.7em}$$ The subsets $\iota_\alpha(S_\alpha) \subset {\mathcal{P}}$ are called the *symplectic leaves* of ${\mathcal{P}}$[^24].
A symmetric *transition probability space* is a set ${\mathcal{P}}$ equipped with a function $\text{Pr}: {\mathcal{P}}\times {\mathcal{P}}\longrightarrow [0,1]$ such that for all $\rho, \sigma \in {\mathcal{P}}$
1. $\text{Pr}(\rho, \sigma) = 1 \:\Longleftrightarrow \rho = \sigma$,
2. $\text{Pr}(\rho, \sigma) = \mathrm{Pr}(\sigma, \rho)$ (i.e. $\text{Pr}$ is symmetric).
The function $\text{Pr}$ is called a *transition probability*[^25].
Now, recall the geometric characterization of quantum observables achieved in : the quantum space of states was seen to be endowed with two geometric structures—a symplectic form $\omega$ and a Riemannian metric $g$. Associated to the symplectic structure $\omega$ was the set of functions ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_\omega$ preserving it. Similarly, to the Riemannian metric $g$ one associated the set ${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_g$. Then, the algebra of observables was simply found to be the intersection: $${\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_{\mathcal{K}}:= {\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_\omega \cap {\mathcal{C}^\infty}({\mathbb{P}\mathcal{H}}, {\mathbb{R}})_g.\vspace*{-0.4em}$$ This idea may be immediately transposed to those spaces ${\mathcal{P}}$ which are equipped with the two structures just defined: one considers the function space ${\mathcal{C}}_{Prob}({\mathcal{P}}, {\mathbb{R}})$ intrinsically related to a transition probability space and the function space ${\mathcal{C}^\infty}_{Pois}({\mathcal{P}}, {\mathbb{R}})$ intrinsically associated to a Poisson space[^26] in order to define $$\begin{aligned}
{\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})_{\mathcal{K}}:= {\mathcal{C}^\infty}_{Pois}({\mathcal{P}}, {\mathbb{R}}) \cap {\mathcal{C}}_{Prob}({\mathcal{P}}, {\mathbb{R}}).
\label{Landsman observables}\end{aligned}$$
We are now ready to introduce the central definition:
\[def:upswtp\] A *Poisson space with a transition probability* is a set that is both a transition probability space and a Poisson space, and for which ${\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})_{\mathcal{K}}$, as defined in , satisfies:
1. completeness: ${\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})_{\mathcal{K}}$ separates points,
2. closedness: ${\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})_{\mathcal{K}}$ is closed under the Poisson bracket,
3. unitarity: the Hamiltonian flow defined by each element of ${\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})_{\mathcal{K}}$ preserves the transition probabilities.
\[def:Poisson space with probability\]
From , it is clear that a projective Hilbert space, equipped with its natural symplectic form and the transition probability function $\text{Pr}(p, p')=\cos^2(d_g(p,p'))$ induced by the Fubini-Study metric $g$, satisfies all three axioms and hence qualifies as a Poisson space with transition probability. On the other hand, one can always consider any Poisson manifold—and in particular any symplectic manifold $S$—as a Poisson space with transition probability, where the transition probability function is trivial: $\text{Pr}(p,p') = \delta_{p,p'}$. In this case, we have ${\mathcal{C}}_{Prob}(S, {\mathbb{R}}) = {\mathcal{C}}(S, {\mathbb{R}})$, ${\mathcal{C}}(S, {\mathbb{R}})_{\mathcal{K}}= {\mathcal{C}^\infty}(S, {\mathbb{R}})$ and the three axioms are trivially met.
As has been the case for all other structures that we have met in the description of kinematics, the fundamental notion introduced by Landsman to achieve the geometric unification of classical and quantum kinematics is a space endowed with *two* structures. To show that these are the geometric counterparts of the two algebraic structures present on the algebra of observables, what remains to be seen is how to construct a Jordan product on ${\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})_{\mathcal{K}}$ starting from a transition probability function $\text{Pr}$.
This is achieved by noticing that, for transition probability spaces, one can develop a spectral theory, much like the spectral decomposition of self-adjoint operators on a Hilbert space. Given a transition probability space ${\mathcal{P}}$, define a *basis* ${\mathcal{B}}$ as an orthogonal family of points in ${\mathcal{P}}$ such that $$\sum\limits_{\rho \in {\mathcal{B}}} p_\rho=1,$$ where $p_\rho$ is the function on ${\mathcal{P}}$ defined by $p_\rho(\sigma):=\text{Pr}(\rho, \sigma)$[^27]. It can be shown [@landsman1998 Proposition I.2.7.4] that all the bases of ${\mathcal{P}}$ have the same cardinality, which allows one to define a notion of ‘dimension’ for a transition probability space. Now, given an orthoclosed subset ${\mathcal{S}}\subset {\mathcal{P}}$[^28] and a basis ${\mathcal{B}}$ thereof, define the function $p_{\mathcal{S}}:=\sum_{\rho \in {\mathcal{B}}}p_\rho$. This function turns out to be independent of the choice of the basis ${\mathcal{B}}$[^29]. With this in hand, we can now define the spectral theory:
Consider a well-behaved transition probability space $({\mathcal{P}}, \text{Pr})$ and a function $A \in \ell^\infty({\mathcal{P}}, {\mathbb{R}})$. Then a *spectral resolution* of $A$ is an expansion $$A = \sum\limits_{j} \lambda_j p_{{\mathcal{S}}_j}$$ where $\{{\mathcal{S}}_j\}$ is an orthogonal family of orthoclosed subsets of ${\mathcal{P}}$ such that $\sum p_{{\mathcal{S}}_j}=1$.
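In the quantum case ${\mathcal{P}}= {\mathbb{P}\mathcal{H}}$ with the Born-rule transition probabilities, the spectral resolution is just the usual eigendecomposition read at the level of states: the value $A(\sigma) = \sum_j \lambda_j\, p_{{\mathcal{S}}_j}(\sigma)$ coincides with the expectation value $\langle\sigma|\hat A|\sigma\rangle$. A small numerical check (Python/numpy sketch; a non-degenerate spectrum is assumed, so each ${\mathcal{S}}_j$ is a single eigenray):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random Hermitian operator on C^3 and its eigendecomposition.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_op = (M + M.conj().T) / 2
eigvals, eigvecs = np.linalg.eigh(A_op)

# A random pure state sigma.
sigma = rng.normal(size=3) + 1j * rng.normal(size=3)
sigma = sigma / np.linalg.norm(sigma)

# p_rho(sigma) = Pr(rho_j, sigma) = |<e_j|sigma>|^2 for each eigenray rho_j;
# the eigenrays form a basis, so these probabilities sum to 1.
p = np.abs(eigvecs.conj().T @ sigma) ** 2

# Spectral resolution evaluated at sigma: A(sigma) = sum_j lambda_j p_j(sigma),
# which equals the expectation value <sigma|A|sigma>.
A_from_spectral = np.sum(eigvals * p)
A_expectation = (sigma.conj() @ A_op @ sigma).real
assert np.isclose(A_from_spectral, A_expectation)
```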
The crucial point which gives the spectral resolution its power is that, for both Poisson manifolds (equipped with the trivial transition probabilities) and spaces of the form ${\mathcal{P}}= \bigsqcup {\mathbb{P}\mathcal{H}}_\alpha$, the spectral resolution is *unique* and can thus be used to define the square of an observable by $$\begin{aligned}
A^2:=\sum\limits_{j} (\lambda_j)^2 p_{{\mathcal{S}}_j}.
\label{Jsq}\end{aligned}$$ Finally, this allows one to define the Jordan product by $$\begin{aligned}
A \bullet B := \frac{1}{4}\big((A+B)^2 - (A-B)^2\big).
\label{Jdef}\end{aligned}$$
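For quantum observables, the Jordan product so defined reproduces the familiar anticommutator $\frac{1}{2}(AB+BA)$: squaring via the spectral resolution is just the matrix square, and the polarization identity above does the rest. A quick numerical verification (Python/numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def spectral_square(H):
    # Square of an observable via its (unique) spectral resolution:
    # H = sum_j lambda_j P_j  ==>  H^2 := sum_j lambda_j^2 P_j.
    w, V = np.linalg.eigh(H)
    return (V * w**2) @ V.conj().T

A, B = random_hermitian(4), random_hermitian(4)

# Jordan product via the polarization identity of the main text.
jordan = (spectral_square(A + B) - spectral_square(A - B)) / 4
anticommutator = (A @ B + B @ A) / 2
assert np.allclose(jordan, anticommutator)
```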
In sum, the notion of a Poisson space with transition probability succeeds in providing a common geometric language in which to describe both classical and quantum state spaces (requirement of unification of states), and from which one can construct, through a unified procedure, the algebra of classical or quantum observables (requirement of unification of observables). We now turn to the last and most important point: the characterization of the class of Poisson spaces with transition probability which describe quantum systems.
Characterization of quantum kinematics
--------------------------------------
As has been hinted at several times, from the point of view of the two-fold role of observables, a fundamental difference between classical and quantum kinematics seems to lie in the connection between the two roles: while in classical kinematics the two roles seem to be essentially independent of each other, in quantum kinematics the quantitative aspect of observables encodes information about the transformational one. It is therefore interesting to compare the behaviour of the Poisson structure and the transition probability structure. To do so, consider the following two equivalence relations, defined on any Poisson space with transition probability:
\[def:er\] Let ${\mathcal{P}}$ be a Poisson space with transition probability. Then, two points $p, p' \in {\mathcal{P}}$ are said to be:
1. *transformationally equivalent*, denoted by $p \underset{T}{\sim} p'$, if they belong to the same symplectic leaf,
2. *numerically equivalent*, denoted by $p \underset{N}{\sim} p'$, if they belong to the same probability sector.
These two different equivalence relations may be seen as the two different notions of *connectedness* of the space of states arising from the two fundamental geometric structures. ‘Transformational equivalence’ is connectedness from the point of view of the transformational role of observables: two states $p$ and $p'$ are transformationally equivalent if and only if there exists a curve $\gamma$ on ${\mathcal{P}}$ generated by an element $F \in {\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})_{\mathcal{K}}$ such that $p, p' \in \gamma$. In a similar fashion, ‘numerical equivalence’ is connectedness from the point of view of transition probabilities: two states are numerically equivalent if and only if there exists a collection of intermediate states $\chi_1, \ldots, \chi_n$ such that the chain of transitions $p \rightarrow \chi_1 \rightarrow \ldots \rightarrow \chi_n \rightarrow p'$ has a non-vanishing probability—*i.e.*, such that $\mathrm{Pr}(p, \chi_1)\mathrm{Pr}(\chi_1, \chi_2) \ldots \mathrm{Pr}(\chi_{n-1},\chi_n)\mathrm{Pr}(\chi_n,p') \neq 0$.
Now, in classical kinematics, where one considers as space of states ${\mathcal{P}}_{cl}$ a symplectic manifold with transition probabilities $\mathrm{Pr}(p, p')=\delta_{p, p'}$, the two notions of connectedness are at odds with each other: from the point of view of the Poisson structure, the space of states is completely connected (any two states are transformationally equivalent), whereas from the point of view of the transition probability structure the space of states is completely disconnected (no two different states are numerically equivalent). In other words, we have $$\ast = ({\mathcal{P}}_{cl} / \underset{T}{\sim}) \neq ({\mathcal{P}}_{cl} / \underset{N}{\sim}) = {\mathcal{P}}_{cl}.
\vspace*{-0.5em}$$ On the other hand, in quantum kinematics the compatibility between the two roles of observables is captured in the fact that these two a priori different equivalence relations coincide. Indeed, we have $$({\mathcal{P}}_{qu} / \underset{T}{\sim}) = ({\mathcal{P}}_{qu} / \underset{N}{\sim}).$$
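A two-dimensional example makes the quantum situation concrete (Python sketch; $|0\rangle$, $|1\rangle$, $|+\rangle$ are the usual qubit states): two orthogonal states have vanishing direct transition probability, yet a chain through an intermediate state connects them, so they are numerically equivalent.

```python
import numpy as np

# Born-rule transition probability between pure qubit states.
def pr(u, v):
    return abs(np.vdot(u, v)) ** 2

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# |0> and |1> are orthogonal: a direct transition has probability 0 ...
assert pr(zero, one) == 0.0

# ... yet the chain |0> -> |+> -> |1> has non-vanishing probability
# (1/2 * 1/2 = 1/4), so the two states lie in the same sector.
chain = pr(zero, plus) * pr(plus, one)
assert np.isclose(chain, 0.25)
```

With the classical trivial probability $\mathrm{Pr}(p,p')=\delta_{p,p'}$, by contrast, every such chain between distinct states vanishes.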
One of the great achievements of Landsman’s approach in terms of Poisson spaces with transition probabilities is to show with unmatched clarity that this compatibility between the two roles of observables is in fact one of the essential differences between classical and quantum kinematics. Indeed, given a Poisson space with a transition probability, he has provided the following axiomatic characterization of when such a space is a quantum space of states:
A non-trivial Poisson space with a transition probability ${\mathcal{P}}$ is the pure state space of a non-commutative $C^*$-algebra if and only if:
1. \[PoS\] **Principle of superposition**:\
for any $p, p'\in {\mathcal{P}}$ such that $p \underset{N}{\sim} p'$ and $p\neq p'$, we have $\{p, p'\}^{\bot\bot}\simeq S^2$.
2. **Compatibility of the two roles of observables**:\
the probability sectors and the symplectic leaves of ${\mathcal{P}}$ coincide.[^30]
These are hence the two essential features that differentiate quantum kinematics from classical kinematics. As the name suggests, the first axiom—also called the “two-sphere property”—is nothing but the geometric reformulation of the superposition principle[^31]. This has invariably been stressed, from the beginning of quantum mechanics, as one of the fundamental features of the theory. The second point, however, seems to have been the blind spot in the conceptual analysis of quantum kinematics.
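In Hilbert-space terms, the two-sphere property is the familiar Bloch sphere: for two distinct states in the same sector, the set of their superpositions is ${\mathbb{P}}{\mathbb{C}}^2 \simeq S^2$. A small numerical illustration (Python sketch): every normalized superposition $a|0\rangle + b|1\rangle$ lands on the unit two-sphere via its Bloch vector.

```python
import numpy as np

rng = np.random.default_rng(3)

def bloch(psi):
    # Bloch vector of a normalized qubit state (a, b):
    # x = 2 Re(a* b), y = 2 Im(a* b), z = |a|^2 - |b|^2.
    a, b = psi
    return np.array([2 * (a.conjugate() * b).real,
                     2 * (a.conjugate() * b).imag,
                     abs(a) ** 2 - abs(b) ** 2])

# Random superpositions a|0> + b|1> of two orthogonal states.
for _ in range(5):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = psi / np.linalg.norm(psi)
    # Every normalized superposition lands on the unit sphere S^2,
    # since x^2 + y^2 + z^2 = (|a|^2 + |b|^2)^2 = 1.
    assert np.isclose(np.linalg.norm(bloch(psi)), 1.0)
```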
Conclusion
==========
In both classical and quantum kinematics, observables play two different conceptual roles: on the one hand, observables are *quantities* which may take definite numerical values on certain states; on the other hand, observables are intimately related to the *generation of transformations* on the space of states. The main goal of this work was to show that the detailed study of the two-fold role of observables furnishes a very fruitful point of view from which to compare the conceptual structure of classical and quantum kinematics. As our analysis puts forward, this double nature of observables is an essential feature of the theory, deeply related to the mathematical structures used in the description of classical and quantum kinematics. This is particularly salient in the geometric definition of observables () and in Landsman’s axiomatization of quantum mechanics (). Yet, much more remains to be explored about the subject and its conceptual implications. In particular, it remains unclear how to interpret this two-fold role from a physical point of view. Why is it the case that observables are intimately related to the generation of transformations? And how, if at all, should we alter our conception of what observables are in order to render their two-fold role more natural? These are important questions which ought to receive more attention in the future.
To conclude, let us then summarize the global picture which has emerged from our analysis of three different formulations of quantum kinematics: the standard one in terms of Hilbert spaces, the geometric one in terms of Kähler manifolds and Landsman’s in terms of Poisson spaces with a transition probability (see also , ).
Common to both kinematics is the fact that the full description of observables is the conjunction of their numerical and transformational roles. That this two-fold role is the *defining* feature of physical observables is best seen in their geometric definition: an observable is a function on the space of states to which an infinitesimal state transformation can be associated (cf. ). Algebraically, this two-fold role gets translated into the existence of two structures on the set of observables: a Jordan product which governs the numerical role, and a Lie product which governs the transformational role. Accordingly, the language of real Jordan-Lie algebras is the common algebraic language which covers both classical and quantum kinematics. The geometric level of states mirrors the algebraic level in every respect: herein, the two-fold role manifests itself by the presence of two geometric structures—a transition probability structure and a Poisson structure (which respectively stem from the Jordan and Lie product, and from which the Jordan and Lie product can be defined)—and the common geometric language is that of Poisson spaces with a transition probability. One often restricts attention to the simpler case where the Poisson space has only one symplectic leaf. Then, the Poisson structure is equivalent to a symplectic 2-form and the non-trivial transition probability structure of the Quantum may be perceived as arising from a Riemannian metric (the transition probability being, roughly, a function of the distance between two points). In this way, one recovers the geometric formulation of classical and quantum kinematics in terms of symplectic manifolds and Kähler manifolds respectively.
With the use of either Jordan-Lie algebras or Poisson spaces with a transition probability, one may sharply characterize the mathematical difference between the two kinematics. Contrary to the mistaken commutativity/non-commutativity motto, this point of view shows that, at the algebraic level, the difference really lies in the associativity/non-associativity of the Jordan product. Geometrically, the difference is captured in the triviality/non-triviality of the transition probability structure. In particular, this implies that, from the restricted point of view of the symplectic/Lie structure, the Classical and the Quantum are indistinguishable. In other words, both theories are identical if one focuses only on the transformational role of observables.
The conceptual difference between the Classical and the Quantum can only be grasped when studying the relation between the two roles of observables. In both kinematics, the transformations preserve the numerical role of the observables. At the geometric level, this fact is captured by unitarity: the Hamiltonian flow of any physical property preserves the transition probabilities. At the algebraic level, this is encoded in the Leibniz rule. However, on top of this, quantum kinematics exhibits a second compatibility condition between the two roles which distinguishes it from classical kinematics: the numerical role of observables encodes information on their transformational role. Geometrically, this is seen in the coincidence between the two natural foliations on the space of states induced by the two geometric structures. Algebraically, it is encoded in the associator rule, which ties together the Jordan and Lie structures.
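This second compatibility condition can be made concrete with matrices. Taking $A \bullet B := \frac{1}{2}(AB+BA)$ and $A \star B := \frac{i}{2}(AB-BA)$ as in the text (with $\hbar=1$), the following sketch (Python/numpy, using Pauli matrices as sample observables) exhibits both the non-associativity of the quantum Jordan product and the associator rule in the form $(A \bullet B) \bullet C - A \bullet (B \bullet C) = (A \star C) \star B$:

```python
import numpy as np

# Pauli matrices as a concrete set of quantum observables.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def jordan(A, B):
    return (A @ B + B @ A) / 2       # numerical role

def lie(A, B):
    return 1j * (A @ B - B @ A) / 2  # transformational role (hbar = 1)

A, B, C = sx, sx, sy

# The Jordan product is commutative but NOT associative ...
lhs = jordan(jordan(A, B), C)
rhs = jordan(A, jordan(B, C))
assert not np.allclose(lhs, rhs)

# ... and its associator is exactly a nested Lie bracket (associator rule):
# (A . B) . C - A . (B . C) = (A * C) * B
assert np.allclose(lhs - rhs, lie(lie(A, C), B))
```

For a classical Poisson algebra, by contrast, the Jordan product is pointwise multiplication, the associator vanishes identically, and the rule holds trivially.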
In the light of Landsman’s axiomatization of quantum mechanics, we see that this last point may be turned around: given the two-fold role of observables in kinematics, the requirement that the two roles be consistent with each other forces the Jordan product to be non-associative and the transition probability to be non-trivial. Therefore, *the two-fold role compatibility condition and the superposition principle may be seen as the two fundamental pillars on which quantum kinematics rests*.
$$\begin{tikzcd}[]
\parbox{7em}{\centering \textbf{conceptual role \\ of observables}}
& \parbox{10em}{\centering \textsc{transformational}\\ role}
\arrow[rrr, leftrightarrow, "\emph{conservation}"', bend left= 15, dashed]
\arrow[rrr, leftrightarrow, "\textbf{compatibility}", bend right= 15]
\arrow[rrr, leftrightarrow, bend right= 15, dash, line width = 0.3mm]
&
&
& \parbox{10em}{\centering \textsc{numerical} \\ role} \\ \\
\parbox{7em}{\centering \textbf{algebra of \\ observables}} \ar[dd, Leftrightarrow]
& \parbox{7em}{\centering Lie \\ structure}
\arrow[rrr, leftrightarrow, "\emph{Leibniz rule}"', bend left= 15, dashed]
\arrow[rrr, leftrightarrow, "\textbf{associator rule}", bend right= 15]
\arrow[rrr, leftrightarrow, bend right= 15, dash, line width = 0.3mm]
\ar[dd, Leftrightarrow]
&
&
& \parbox{7em}{\centering Jordan \\ structure} \ar[dd, Leftrightarrow]
\\ \\
\parbox{7em}{\centering \textbf{space of \\ states}}
& \parbox{7.5em}{\centering Poisson \\ structure}
\arrow[rrr, leftrightarrow, "\emph{unitarity}"', bend left= 16, dashed]
\arrow[rrr, leftrightarrow, "\textbf{~~sectors = leaves~~}", bend right= 16]
\arrow[rrr, leftrightarrow, bend right= 16, dash, line width = 0.3mm]
&
&
& \parbox{10em}{\centering transition probability \\ structure}
\end{tikzcd}$$
**Acknowledgements.** I would like to thank Gabriel Catren, Mathieu Anel, Christine Cachot, Julien Page, Michael Wright, Fernando Zalamea and anonymous reviewers for helpful discussions and comments on earlier drafts of this paper.
[^1]: This work has received funding from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013 Grant Agreement n$^\circ$263523, ERC Project PhiloQuantumGravity).
[^2]: Sometimes, these transformations are also called *canonical transformations*.
[^3]: A diffeomorphism $\phi: S \longrightarrow S$ induces a map $\Phi: {\mathcal{C}^\infty}(S, {\mathbb{R}}) \longrightarrow {\mathcal{C}^\infty}(S, {\mathbb{R}})$ defined by: $$\forall f \in {\mathcal{C}^\infty}(S, {\mathbb{R}}), (\Phi f)(p) = f(\phi(p)).$$ This in turn allows one to define the *push-forward* $\phi_*$ of vector fields and the *pull-back* $\phi^*$ of $n$-forms by: $$\begin{aligned}
\forall v \in \Gamma(TS),& \:\: (\phi_*v)[f]:= v[\Phi f],\\
\forall \alpha \in \Omega^n(S),& \:\: (\phi^*\alpha)(v_1, \ldots, v_n):= \alpha(\phi_*v_1, \ldots,\phi_*v_n ).\end{aligned}$$
[^4]: For a given two-form $\alpha \in \Omega^2(S)$, the Lie derivative with respect to the vector field $v \in \Gamma(TS)$ is given by the so-called “Cartan’s magic formula”: ${\mathcal{L}}_v \alpha = (\iota_v d + d\iota_v)\alpha$, where $\iota_v \alpha := \alpha(v, \cdot) \in \Omega^1(S)$.
[^5]: A *real Jordan algebra* $({\mathcal{A}}, \bullet)$ is a commutative algebra such that, moreover, $F \bullet (G \bullet F^2) =( F \bullet G) \bullet F^2$ for all $F, G \in {\mathcal{A}}$. This concept was introduced by the German theoretical physicist Pascual Jordan in 1933[@jordan1933].
[^6]: Recall that, given a vector space $V$ and a linear operator $A$ acting on $V$, the spectrum of $A$ is defined as $$spec(A):=\big\{\alpha \in {\mathbb{R}}\:\big|\: (A-\alpha\, \text{Id}_V) \text{ is not invertible}\big\}.$$
[^7]: Here, we suppose that the space of states is a simply connected manifold. In the general case, the kernel of $v_{-}$ is the center of $({\mathcal{A}}^C, \star)$, that is, the set of *locally* constant functions.
[^8]: For finite-dimensional Hilbert spaces, this is clear: any operator $A \in {\mathcal{B}}_{i{\mathbb{R}}}$ defines a one-parameter group of unitary operators through exponentiation: $e^{tA} \in U({\mathcal{H}}),\: t \in {\mathbb{R}}$. The situation is more delicate in the infinite-dimensional case for two reasons. First, $U({\mathcal{H}})$ is not a Lie group (it is infinite-dimensional) and thus the notion of an associated Lie algebra is problematic. However, by Stone’s theorem we know there is a one-to-one correspondence between anti-self-adjoint operators and continuous one-parameter unitary groups. In this sense, one is still allowed to claim that anti-self-adjoint operators are the generators of unitary transformations. The second problem is that, without further restrictions, anti-self-adjoint operators do not form a Lie algebra (in fact, they do not even form a vector space). This is the reason why we restrict attention here to *bounded* anti-self-adjoint operators. For a precise mathematical treatment of these issues, see [@abraham1988].
[^9]: Given a unital commutative $C^*$-algebra ${\mathcal{U}}$, its *Gelfand spectrum* $spec_G({\mathcal{U}})$ is the set of all positive linear functionals $\rho: {\mathcal{U}}\longrightarrow {\mathbb{C}}$ such that $\rho(\mathbb{I})=1$[@landsman1998]. The fact that $spec_G(C^*(F))$ is isomorphic to the spectrum of $F$ (in the usual sense) justifies the use of the word “spectrum" in Gelfand’s theory[@cartier2008].
[^10]: Indeed, $C^*(F)$ consists of all polynomials in $F$ and $F^*$. Then, $C^*(F)$ will be non-commutative if and only if $[F, F^*]\neq 0$.
[^11]: This confusion was there since the very beginning of Quantum Mechanics. For example, in their second paper of 1926, Born, Heisenberg and Jordan write:
> We introduce the following basic quantum-mechanical relation: $\bm{pq} - \bm{qp} = \frac{h}{2\pi i} \bm{1}$. \[...\] One can see from \[this equation\] that in the limit $h=0$ the new theory would converge to classical theory, as is physically required. [@born1967a 327]
It is clear that they were comparing the commutator in quantum mechanics with point-wise multiplication in Classical Mechanics (despite the fact that, by the time of the second quoted paper, Dirac had already shown in [@dirac1925] that the quantum commutator should be compared to the classical Poisson bracket).
[^12]: A $C^*$-algebra $({\mathcal{U}}, \circ, *, \|\cdot\|)$ is a complex associative algebra $({\mathcal{U}}, \circ)$ equipped with an involution $^*$ and a norm $\| \cdot \|$ such that: *i)* $({\mathcal{U}}, \| \cdot \|)$ is a complex Banach space, *ii)* $\forall A, B \in {\mathcal{U}}, \| A \circ B \| \leq \| A \|\| B \|$, and *iii)* $\forall A \in {\mathcal{U}}, \| A^* \circ A \| = \| A \|^2$.
[^13]: To be more precise: given any $C^*$-algebra $({\mathcal{U}}, \circ)$, its real part ${\mathcal{U}_{\mathbb{R}}}:=\big\{A \in {\mathcal{U}}\big| A=A^*\big\}$ equipped with the operations $A\bullet B= \frac{1}{2}(A\circ B+ B\circ A)$ and $A\star B= \frac{i}{2}(A\circ B- B\circ A)$ is a Jordan-Lie-Banach algebra. Conversely, given any real JLB-algebra $({\mathcal{U}_{\mathbb{R}}}, \bullet, \star)$, its complexification $({\mathcal{U}_{\mathbb{R}}})_{\mathbb{C}}$ can be turned into a $C^*$-algebra by defining the operation $A \circ B := A \bullet B -i A \star B$. In this sense, $C^*$-algebras are equivalent to JLB-algebras. Moreover, a $C^*$-algebra ${\mathcal{U}}$ is commutative if and only if the associated JLB-algebra ${\mathcal{U}_{\mathbb{R}}}$ is associative. However, not all Jordan-Lie algebras can be equipped with a norm so that they become JLB-algebras, and, in fact, no non-trivial Poisson algebra can be normed in such a way that the bracket is defined in the norm-completion of the algebra. Therefore, non-trivial Poisson algebras do not fall under the theory of $C^*$-algebras. For details, see [@landsman1998 Chapter I.1].
[^14]: Because of Heisenberg’s famous uncertainty relations, the definition of compatible observables is perhaps more often stated in terms of the product $\Delta_\rho(F) \Delta_\rho(G)$ rather than the sum. However, as Strocchi points out in [@strocchi2008], this is wrong since for any two *bounded* operators, one has $\inf\limits_\rho(\Delta_\rho(F) \Delta_\rho(G)) =0$.
[^15]: It is important to clearly distinguish the program of a geometric reformulation of quantum mechanics from the program of ‘geometric quantization’ which we will not discuss here and is completely unrelated. The first aims at a reformulation of quantum mechanics which avoids Hilbert spaces. The second is geared towards an explicit construction of the quantum description of a system for which the classical description is given. But the resulting quantum description is still based on Hilbert spaces. What is ‘geometric’ about geometric quantization is the means by which the Hilbert space is constructed: roughly, one starts with the symplectic manifold describing the classical system, considers a complex line bundle over it and defines the Hilbert space as a particular class of sections of this bundle. The program of geometric quantization was started by Jean-Marie Souriau and Bertram Kostant [@souriau1970; @kostant1970]. A standard reference is [@woodhouse1991].
[^16]: Associated to the Riemannian metric, there is a unique torsion-free metric compatible affine connection $\nabla$ (the so-called Levi-Civita connection). An almost complex structure $J$ is said to be *invariant* if $\nabla J = 0$[@kobayashi1969].
[^17]: Given $(\phi, \psi) \in {\mathcal{H}}\times {\mathcal{H}}$, define $V_\phi \in T_\psi{\mathcal{H}}$ by $$\forall f \in {\mathcal{C}^\infty}({\mathcal{H}},{\mathbb{R}}), V_\phi[f](\psi)= \frac{d}{ds}f(\psi + s \phi)\Big|_{s=0}.$$
[^18]: Of course, one needs to be sure that such a 2-form does exist. A cleaner way of defining the symplectic form is by means of the so-called *Marsden-Weinstein symplectic reduction* [@marsden1974]. Therein, one considers the natural action of $U(1)$ on ${\mathcal{H}}$. This is a strongly Hamiltonian action and the momentum map $\mu: {\mathcal{H}}\longrightarrow u(1)^* \simeq {\mathbb{R}}$ is given by $\mu(\phi)=\langle \phi, \phi \rangle$. Then, ${\mathbb{P}\mathcal{H}}\simeq \mu^{-1}(1)/U(1)$ and the general theory insures this is a symplectic manifold. For the details, I refer the reader to [@landsman1998].
[^19]: Two examples of this are the canonical commutation relations $\{p,q\}=1$ (which state that linear momentum is the generator of space translations) and Hamilton’s equations of motion $\frac{d}{dt}=\{H, \cdot\}$ (which state that the Hamiltonian is the generator of time evolution).
[^20]: The reader will recognise here the question raised by Heisenberg in his seminal 1925 paper that launched the development of quantum mechanics[@heisenberg1925].
[^21]: As a side remark, notice that, in the same way that the Riemannian and symplectic structures of the quantum space of states arise then from the real and imaginary parts of the Hermitian product of ${\mathcal{H}}$ respectively, at the algebraic level the quantum Jordan and Lie products $\bullet : \frac{1}{2}[\cdot, \cdot]_+$ and $\star : \frac{i}{2}[\cdot, \cdot]$ may also be seen as the real and imaginary parts of the composition of operators:$$\text{for } A, B \in {\mathcal{B}_{\mathbb{R}}}({\mathcal{H}}), \: A \circ B = A \bullet B - i A\star B.$$
[^22]: Recall that the distance between two points $p$ and $p'$ of a Riemannian manifold with metric $g$ is given by: $$d_g(p,p') := \text{inf}\:\big\{\int\limits_\Gamma \sqrt{g(v_\Gamma(t), v_\Gamma(t))}dt \: \big| \: \Gamma \in \text{Path}(p,p')\big \}.$$
[^23]: A *Poisson manifold* is a manifold $P$ for which the algebra of smooth functions ${\mathcal{C}^\infty}(P, {\mathbb{R}})$ is a Poisson algebra. An important theorem in Poisson geometry states that any such manifold can always be written as a disjoint union of symplectic manifolds—the so-called *symplectic leaves* of the Poisson manifold[@landsman1998 Theorem I.2.4.7, p. 71].
[^24]: This notion was introduced for the first time by Landsman in [@landsman1997 p. 38]. His definition differs slightly from the one given here, for it also includes a linear subspace ${\mathcal{U}_{\mathbb{R}}}({\mathcal{P}}) \subset {\mathcal{C}}^\infty_L({\mathcal{P}}, {\mathbb{R}})$ which separates points and is closed under the Poisson bracket: $\{f, g\}_{\raisebox{-2pt}{$\scriptstyle {\mathcal{P}}$}}(\iota_\alpha(q)) := \{\iota^*_\alpha f, \iota^*_\alpha g\}_{\raisebox{-2pt}{$\scriptstyle S_\alpha$}}(q)$, where $q \in S_\alpha$. I nonetheless find the inclusion of this subspace somewhat unnatural at this point. This subspace ${\mathcal{U}_{\mathbb{R}}}({\mathcal{P}})$ will only become important when defining the key notion of a Poisson space with transition probability.
[^25]: This concept was introduced for the first time in 1937 by von Neumann in a series of lectures delivered at the Pennsylvania State College. The manuscript was only published posthumously in 1981[@neumann1981].
[^26]: These function spaces are defined as follows. First, ${\mathcal{C}^\infty}_{Pois}({\mathcal{P}}, {\mathbb{R}})$ is the set of all $f \in {\mathcal{C}}({\mathcal{P}}, {\mathbb{R}})$ such that their restrictions to any $S_\alpha$ are smooth: $\iota^*_\alpha f \in {\mathcal{C}^\infty}(S_\alpha, {\mathbb{R}})$. On the other hand, the definition of ${\mathcal{C}}_{Prob}({\mathcal{P}}, {\mathbb{R}})$ is more involved. One considers first the functions $\mathrm{Pr}_\rho : {\mathcal{P}}\rightarrow {\mathbb{R}}$ such that $\mathrm{Pr}_\rho (\sigma) := \mathrm{Pr} (\rho, \sigma)$, and defines ${\mathcal{C}}_{Prob}^{00}({\mathcal{P}})$ as the real vector space generated by these functions. Then ${\mathcal{C}}_{Prob}({\mathcal{P}},{\mathbb{R}}):= \overline{{\mathcal{C}}_{Prob}^{00}({\mathcal{P}})}^{**}$. See [@landsman1998 pp. 76–84] for more details.
[^27]: Given a transition probability space $({\mathcal{P}}, \mathrm{Pr})$, two subsets ${\mathcal{S}}_1$ and ${\mathcal{S}}_2$ are said to be *orthogonal* if, for any $p \in {\mathcal{S}}_1$ and any $p' \in {\mathcal{S}}_2$, $\mathrm{Pr}(p, p') = 0$. A subset ${\mathcal{S}}\subset {\mathcal{P}}$ is said to be a *component* if ${\mathcal{S}}$ and ${\mathcal{P}}\setminus {\mathcal{S}}$ are orthogonal. Finally, a *sector* is a component which does not have any non-trivial components.
[^28]: Given a subset ${\mathcal{S}}\subset {\mathcal{P}}$, the orthoplement ${\mathcal{S}}^\bot$ is defined by $${\mathcal{S}}^\bot :=\big\{ p \in {\mathcal{P}}\:\big|\: \forall s \in {\mathcal{S}}, \text{ Pr}(p, s) =0\big\}.\vspace*{-0.5em}$$ In turn, a subset is called *orthoclosed* whenever ${\mathcal{S}}^{\bot \bot} = {\mathcal{S}}$.
[^29]: To be more precise, this holds only for *well-behaved* transition probability spaces. A transition probability space is said to be well-behaved if every orthoclosed subset ${\mathcal{S}}\subset {\mathcal{P}}$ has the property that any maximal orthogonal subset of ${\mathcal{S}}$ is a basis of it. See [@landsman1998 Definition I.2.7.5 and Proposition I.2.7.6].
[^30]: Here, a “trivial Poisson space” means a Poisson space whose Poisson bracket is identically zero. As stated, the theorem is only valid for *finite*-dimensional $C^*$-algebras. In the infinite-dimensional case, two more technical axioms are necessary:
1. The space ${\mathcal{C}}({\mathcal{P}},{\mathbb{R}})_{\mathcal{K}}$ is closed under the Jordan product defined by equations and .
2. The pure state space of ${\mathcal{C}}({\mathcal{P}},{\mathbb{R}})_{\mathcal{K}}$, seen as a Jordan-Lie-Banach algebra, coincides with ${\mathcal{P}}$.
See [@landsman1998 Theorem I.3.9.2. and Corollary I.3.9.2. pp 105–106] for the details and proofs.
[^31]: Indeed, at its core, the quantum superposition principle is a claim about the ability to generate new possible states from the knowledge of just a few: given the knowledge of states $p_1$ and $p_2$, one can deduce the existence of an infinite set $S_{p_1,p_2}$ of other states which are equally accessible to the system. In the standard Hilbert space formalism, the superposition principle is described by the canonical association of a two-dimensional complex vector space to any pair of states: for two different states $\psi_1, \psi_2 \in {\mathcal{H}}$, any superposition of them can be written as $\phi = a \psi_1 + b \psi_2$, with $a,b \in {\mathbb{C}}$. In other words, it is captured by the existence of a map $$V: {\mathcal{H}}\times {\mathcal{H}}\longrightarrow \text{Hom}({\mathbb{C}}^2, {\mathcal{H}})$$ where the linear map $V_{\psi_1, \psi_2}: {\mathbb{C}}^2 \rightarrow {\mathcal{H}}$ is an injection iff $\psi_1$ and $\psi_2$ are linearly independent vectors. The geometric reformulation is then found simply by taking the projective analogue of this. Therein, the superposition principle is now seen as the existence of a map $$S: {\mathbb{P}\mathcal{H}}\times {\mathbb{P}\mathcal{H}}\longrightarrow \text{Hom}({\mathbb{P}}{\mathbb{C}}^2 \simeq S^2, {\mathbb{P}\mathcal{H}})\vspace*{-0.5em}$$ where, for $p_1 \neq p_2$, the map $S_{p_1,p_2}$ is an injection. Axiom \[PoS\] is the generalization of this for any Poisson space with transition probability.
|
---
abstract: 'As Computer Vision algorithms move from passive analysis of pixels to active reasoning over semantics, the breadth of information algorithms need to reason over has expanded significantly. One of the key challenges in this vein is the ability to identify the information required to make a decision, and select an action that will recover this information. We propose a reinforcement-learning approach that maintains a distribution over its internal information, thus explicitly representing the ambiguity in what it knows, and needs to know, towards achieving its goal. Potential actions are then generated according to particles sampled from this distribution. For each potential action, a distribution of the expected answers is calculated, and the value of the information gained is assessed relative to the existing internal information. We demonstrate this approach applied to two vision-language problems that have attracted significant recent interest, visual dialog and visual query generation. In both cases the method actively selects actions that will best reduce its internal uncertainty, and outperforms its competitors in achieving the goal of the challenge.'
author:
- |
Ehsan Abbasnejad$^1$, Qi Wu$^1$, Iman Abbasnejad$^2$, Javen Shi$^1$, Anton van den Hengel$^1$\
$^1$`{ehsan.abbasnejad,qi.wu01,javen.shi,anton.vandenhengel}@adelaide.edu.au`\
$^2$``\
$^1$Australian Institute of Machine Learning & The University of Adelaide, Australia\
$^2$Queensland University of Technology, Australia
bibliography:
- 'lib.bib'
title: |
An Active Information Seeking Model\
for Goal-oriented Vision-and-Language Tasks
---
Introduction
============
In most problems in Computer Vision it is assumed that all of the information required is available a priori, and suitable to be embodied in the code or the weights of the solution. This assumption is so pervasive that it typically goes unsaid. In fact, it is satisfied by only a small subset of problems of practical interest. Problems in this set must be self-contained, tightly specified, relate to a very prescribed form of data drawn from a static distribution, and be completely predictable. Researchers have found many problems that meet these criteria, but many important problems do not.
The majority of problems that computer vision might be applied to are solvable only by an agent that is capable of actively seeking the information it needs. This might be because the information required is not available at training time, or because it is too broad to be embodied in the code or weights of an algorithm. The ability to seek the information required to complete a task enables a degree of flexibility and robustness that cannot be achieved through other means, but also enables behaviors that lie towards the Artificial Intelligence end of the spectrum.
![ Two goal-oriented vision-and-language tasks, broken up into four constituent parts: a context encoder, an information seeker, an answerer and a goal executor. The given examples are chosen from the goal-oriented visual dialog dataset GuessWhat [@guesswhat_game] (above the red dashed line) and the compositional VQA dataset CLEVR [@clevr] (below). ](img/fig1_new.pdf){width="1\columnwidth"}
\[fig:intro\]
Applications that lie at the intersection of vision and language are examples of such problems. They are more challenging than conventional computer vision problems because they often require an agent (model) to acquire information on the fly to help to make decisions, such as visual dialog [@das2016visual; @visdial_rl] and visual question answering [@answer_questioner_mind; @guesswhat_game; @strub2017end]. More recently, a range of tasks have been proposed that use ‘language generation’ as a mechanism to gather more information in order to achieve another specific goal. These tasks offer a particular challenge because the information involved is inevitably very broad, which makes concrete representations difficult to employ practically.
In visual dialog, and particularly goal-oriented visual question generation, an agent needs to understand the user request and complete a task via asking a limited number of questions. Similarly, compositional VQA (e.g. [@clevr]) is a visual query generation problem that requires a model first to convert a natural language question to a sequence of ‘programs’ and then obtain the answer by running the programs on an engine. The question-to-program model represents an information ‘seeker’, while the broader goal is to generate an answer based on the information acquired.
Agents applicable to these tasks normally consist of three parts: a *context encoder*, an *information seeker* and a *goal executor*, as shown in Fig.\[fig:intro\]. The context encoder is responsible for encoding information such as images, questions, or dialog history to a feature vector. The information seeker is a model that is able to generate new queries (such as natural language questions and programs) based on the goal of the given task and its seeking strategy. The information returned then joins the context and internal information to be sent to the goal executor model to achieve the goal. The seeker model plays a crucial role in goal-oriented vision-and-language tasks, since a seeking strategy that recovers more information makes it more likely that the goal can be achieved. Moreover, the seeker’s knowledge of the value of additional information is essential in directing the seeker towards querying what is needed to achieve the goal. In this paper, we focus on exploring the seeker model.
The conventional ‘seeker’ models in these tasks follow a sequence-to-sequence generation architecture, that is, they translate an image to a question or translate a question to a program sequence via supervised learning. This requires large numbers of ground-truth training pairs. Reinforcement learning (RL) is thus employed in such goal-oriented vision-and-language tasks to mitigate this problem, due to its ability to focus on achieving a goal through directed trial and error [@guesswhat_game; @zhang2018goal]. A policy in RL models specifies how the seeker asks for additional information. However, these methods generally suffer from two major drawbacks: (1) they maintain a single policy that translates the input sequence to the output while disregarding the strategic diversity needed. Intuitively, a single policy is not enough to query diverse information content for various goals; we need multiple strategies. In addition, (2) the RL employed in these approaches can be prohibitively inefficient, since the question generation process does not consider its effect in directing the agent towards the goal. In fact, the agent has no notion of what information it needs and how that information benefits it in achieving its goal.
To this end, in contrast to conventional methods that use a single policy to model a vision-and-language task, we instead maintain a ***distribution of policies***. By employing a Bayesian reinforcement learning framework for learning this distribution of the seeker’s policy, our model incorporates the expected gain from a query towards achieving its goal. Our framework uses the recently proposed Stein Variational Gradient Descent [@svgd] to perform an efficient update of the posterior policies. Having a distribution over seeking policies, our agent is ***capable of considering various strategies for obtaining further information***, analogous to human contemplation of various ways to ask a question. Each sample from the seeker’s policy posterior represents a policy of its own, and seeks a different piece of information. This allows the agent to contemplate the outcomes of the various strategies before seeking additional information, and to consider the consequences towards the goal. We then formalize an approach for the agent to ***evaluate the consequence of receiving additional information*** towards achieving its goal.
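The Stein Variational Gradient Descent update that maintains such a set of policy particles can be illustrated on a toy problem. The sketch below is only a hedged illustration: the one-dimensional particles, RBF kernel bandwidth, and Gaussian target are assumptions for exposition, standing in for the much higher-dimensional seeker policy parameters of the actual model.

```python
import numpy as np

def svgd_step(x, dlogp, h=0.5, eps=0.05):
    """One Stein variational gradient descent update on 1-D particles x."""
    diff = x[:, None] - x[None, :]        # diff[j, i] = x_j - x_i
    k = np.exp(-diff**2 / (2 * h**2))     # RBF kernel k(x_j, x_i)
    grad_k = -diff / h**2 * k             # d k(x_j, x_i) / d x_j
    # phi(x_i) = mean_j [ k(x_j, x_i) * dlogp(x_j) + grad_j k(x_j, x_i) ]
    phi = (k * dlogp(x)[:, None] + grad_k).mean(axis=0)
    return x + eps * phi

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 0.1, size=50)  # initial "policy particles"
dlogp = lambda x: -(x - 2.0)               # score of a N(2, 1) toy target
for _ in range(500):
    particles = svgd_step(particles, dlogp)
# the driving term moves the cloud toward the target mode, while the
# kernel-gradient (repulsive) term keeps the particles spread out
```

Each converged particle plays the role of one sampled seeker policy; the repulsive kernel term is what preserves the strategic diversity the text argues a single policy lacks.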
We apply the proposed approach to two complex vision-and-language tasks, namely GuessWhat [@guesswhat_game] and CLEVR [@clevr], and show that it outperforms the baselines and achieves state-of-the-art results.
Conclusion
==========
The ability to identify the information needed to support a conclusion, and the actions required to obtain it, is a critical capability if agents are to move beyond carrying out low-level prescribed tasks towards achieving flexible, high-level semantic goals. The method we describe is capable of reasoning about the information it holds, and the information it will need to achieve its goal, in order to identify the action that will best enable it to fill the gap between the two. Our approach thus actively seeks the information it needs to achieve its goal on the basis of a model of the uncertainty in its own understanding. If we are to enable agents that actively work towards a high-level goal, the capability our approach demonstrates will be critical.
|
---
abstract: 'Undulatory locomotion is ubiquitous in nature and observed in different media, from the swimming of flagellated microorganisms in biological fluids, to the slithering of snakes on land, or the locomotion of sandfish lizards in sand. Despite the similarity in the undulating pattern, the swimming characteristics depend on the rheological properties of different media. Analysis of locomotion in granular materials is relatively less developed compared with fluids partially due to a lack of validated force models but recently a resistive force theory in granular media has been proposed and shown useful in studying the locomotion of a sand-swimming lizard. Here we employ the proposed model to investigate the swimming characteristics of a slender filament, of both finite and infinite length, undulating in a granular medium and compare the results with swimming in viscous fluids. In particular, we characterize the effects of drifting and pitching in terms of propulsion speed and efficiency for a finite sinusoidal swimmer. We also find that, similar to Lighthill’s results using resistive force theory in viscous fluids, the sawtooth swimmer is the optimal waveform for propulsion speed at a given power consumption in granular media. The results complement our understanding of undulatory locomotion and provide insights into the effective design of locomotive systems in granular media.'
author:
- Zhiwei Peng
- 'On Shun Pak[^1]'
- 'Gwynn J. Elfring[^2]'
bibliography:
- 'ref.bib'
title: Characteristics of undulatory locomotion in granular media
---
Introduction {#sec:intro}
============
Undulatory locomotion, the self-propulsion of an organism via the passage of deformation waves along its body, is ubiquitous in nature [@gray1953undulatory; @cohen2010swimming]. Flagellated microorganisms swim in fluids [@gray1955propulsion; @chwang1971note; @lighthill1976flagellar; @keller1976swimming; @purcell1977life; @higdon1979hydrodynamic], snakes slither on land [@gray1946mechanism; @guo2008limbless; @Hu23062009; @alben2013] and sandfish lizards (*Scincus scincus*) undulate in granular substrates [@baumgartner2008investigating; @maladen2009undulatory; @ding2012mechanics]. Yet the underlying physics differ: from viscous forces [@lauga2009hydrodynamics] in fluids to frictional forces [@maladen2009undulatory] in terrestrial media. The investigation of these undulatory mechanisms in different environments advances our understanding of various biological processes [@cohen2010swimming; @fauci2006biofluidmechanics] and provides insights into the effective design of biomimetic robots [@williams2014self; @maladen2011undulatory].
The swimming of microorganisms in Newtonian fluids, where viscous forces dominate inertial effects, is governed by the Stokes equations [@lauga2009hydrodynamics]. Despite the linearity of the governing equation, locomotion problems typically introduce geometric nonlinearity, making the problem less tractable [@sauzade11]. For slender bodies such as flagella and cilia, Gray and Hancock [@gray1955propulsion] exploited their slenderness to develop a local drag model, called resistive force theory (RFT), which has been shown useful in modeling flagellar locomotion and the design of synthetic micro-swimmers [@lauga2009hydrodynamics; @pak2014theoretical]. In this local theory, hydrodynamic interactions between different parts of the body are neglected and the viscous force acting on a part of the body depends only on the local velocity relative to the fluid. Using RFT, Lighthill showed that, for an undulating filament of infinite length, the sawtooth waveform is the optimal beating pattern maximizing hydrodynamic efficiency [@lighthill1976flagellar].
Locomotion in granular media is relatively less well understood due to their complex rheological features [@zhang2014effective; @goldman2014colloquium]. The frictional nature of the particles generates a yield stress, a threshold above which the grains flow in response to external forcing [@goldman2014colloquium]. Different from viscous fluids, the resistance experienced by a moving intruder originates from the inhomogeneous and anisotropic response of the granular force chains, which are narrow areas of strained grains surrounded by the unstrained bulk of the medium [@albert1999slow]. At low locomotion speed, where the granular matter is in a quasi-static regime, the effect of inertia is negligible compared to frictional and gravitational forces from granular media [@ding2012mechanics], which is similar to a low Reynolds-number fluid. In this regime, studies measuring the drag force of an intruder moving through a granular medium (GM) reveal that the drag force is independent of the speed of the intruder, but increases with the depth of the GM and is proportional to the size of the intruder [@albert1999slow; @hill2005scaling; @schroter2007phase; @zhou2007simul; @seguin2011dense].
Recently, Maladen *et al*. [@maladen2009undulatory] studied the subsurface locomotion of sandfish in dry granular substrates. While the crawling and burying motion of a sandfish is driven by its limbs, an undulatory gait is employed for subsurface locomotion without use of limbs. Using high speed x-ray imaging, the subsurface undulating pattern of the sandfish body was found to be well described by a sinusoidal waveform. A major challenge in the quantitative analysis of locomotion in granular materials is a lack of validated force models like the Stokes equation in viscous fluids [@zhang2014effective; @goldman2014colloquium]. But inspired by the success of RFT for locomotion in viscous fluids, Maladen *et al*. [@maladen2009undulatory] developed an empirical RFT in dry granular substrates for slender bodies (Sec. \[subsec:RFT\]), which was shown effective in modeling the undulatory subsurface locomotion of sandfish [@maladen2009undulatory]. The proposed force model thus enables theoretical studies to address some fundamental questions on locomotion in granular media. In this paper we employ the proposed RFT to investigate the swimming characteristics of a slender filament of finite and infinite length undulating in a granular medium and compare the results with those in viscous fluids. In particular, previous analysis using the granular RFT considered only force balance in one direction [@maladen2009undulatory] and hence a swimmer can only follow a straight swimming trajectory in this simplified scenario. Here we extend the results by considering a full three-dimensional force and torque balances, resulting in more complex kinematics such as pitching, drifting and reorientation. The swimming performance in relation to these complex kinematics is also discussed.
The paper is organized as follows. We formulate the problem and review the recently proposed RFT in granular media in Sec. \[sec:form\]. Swimmers of infinite length are first considered (Sec. \[sec:inf\]): we determine that the optimal waveform maximizing swimming efficiency, similar to results in viscous fluids, is a sawtooth (Sec. \[subsec:opt\]); we then study the swimming characteristics of sawtooth and sinusoidal swimmers in granular media and compare the results with swimming in viscous fluids (Sec. \[subsec:sawNsine\]). Next we consider swimmers of finite length (Sec. \[sec:fin\]) and characterize the effects of drifting and pitching in terms of propulsion speed and efficiency, before concluding the paper with remarks in Sec. \[sec:discussion\].
Mathematical Formulation {#sec:form}
========================
Kinematics {#subsec:kinematic}
----------
We consider an inextensible cylindrical filament of length $L$ and radius $r$ such that $r \ll L$, and assume that it passes a periodic waveform down along the body to propel itself in granular substrates. Following Spagnolie and Lauga [@spagnolie2010optimal], the waveform is defined as $\bX(s) =[X(s),Y(s),0]^{\mathsf{T}}$, where $s \in [0,L]$ is the arc length from the tip. The periodicity of the waveform can then be described as $$\begin{aligned}
\label{eq:periodicity_condition}
X(s+\Lambda)=X(s)+\lambda, \quad Y(s+\Lambda)=Y(s),\end{aligned}$$ where $\lambda$ is the wave length and $\Lambda$ the corresponding arc length along the body. $N$ is the number of waves passed along the filament. Note that $L=N\Lambda$ and $\lambda =\alpha \Lambda$, where $0<\alpha<1$ is due to the bending of the body [@spagnolie2010optimal].
![image](schematic.pdf)
Initially, the filament is oriented along the $x$-axis of the lab frame with its head at $\bx_0$. At time $t$, the filament is passing the waveform at a phase velocity $\bV$ (with constant phase speed $V$) along the waveform’s centerline, which is oriented at an angle $\theta(t)$ to the $x$-axis (Fig. \[fig:schematic\]). In a reference frame moving with the wave phase velocity $\bV$, a material point on the filament is moving tangentially along the body with speed $c = V/\alpha$, and hence the period of the waveform is $T = \lambda/V = \Lambda/c$. By defining the position vector of a material point at location $s$ and time $t$ in the lab frame as $\bx(s,t)$, we obtain $$\begin{aligned}
\label{eq:positionvec.}
\bx(s,t)-\bx(0,t)= \mathbf{\Theta}(t)\cdot\bR(s,t),\end{aligned}$$ where $$\begin{aligned}
\label{eq:rotationmatrix}
\mathbf{\Theta}(t)=\begin{bmatrix}
\cos\theta(t)&-\sin\theta(t)&0\\
\sin\theta(t) &\cos\theta(t)&0\\
0&0&1 \\
\end{bmatrix}\end{aligned}$$ is the rotation matrix, and $\bR(s,t)=\bX(s,t)-\bX(0,t)$, and note that $\bX(s,t)= \bX(s-ct)$. Then, the velocity of each material point in the lab frame would be $$\begin{aligned}
\label{eq:velocity_relation}
\bu(s,t)=\dot{\bx}(0,t)+ \dot{\theta} \mathbf{\Theta}\cdot \bR^\perp+ \mathbf{\Theta} \cdot\dot{\bR},\end{aligned}$$ where $\bR^\perp=\be_z \times \bR$, and dot denotes time derivative. The unit tangent vector in the direction of increasing $s$ is $$\begin{aligned}
\label{eq:tangent_vec}
\bt= \bx_s=\mathbf{\Theta}\cdot\bX_s(s,t),\end{aligned}$$ where the subscript $s$ denotes the derivative with respect to $s$. The angle between the local velocity vector $\bu$ and the local unit tangent vector $\bt$ is $\psi$: $$\begin{aligned}
\cos\psi = \buh\cdot\bt, \quad \buh = \frac{\bu}{\| \bu\|} \cdot\end{aligned}$$
Now, to define the waveform we specify the tangent angle made with the centerline of the waveform $$\begin{aligned}
\label{eq:waveform}
\varphi(s,t)=\arctan\frac{Y_s}{X_s} \cdot\end{aligned}$$ Note that we have the following geometric relations: $$\begin{aligned}
\bR&=\int_0^s \bX_s \d s, \quad \dot{\bR} = \int_0^s \dot{\varphi}\bX^\perp_s\d s,\label{eq:R}\\
\bt &= \mathbf{\Theta}\cdot\bR_s=\mathbf{\Theta}\cdot\bX_s,\end{aligned}$$ where $\bX^\perp_s = \be_z\times \bX_s$, and $$\begin{aligned}
\alpha = \frac{\lambda }{\Lambda} = \frac{1}{\Lambda} \int_0^\Lambda \cos\varphi \d s.\end{aligned}$$ The inextensibility assumption requires that $\partial [\bx_s\cdot\bx_s]/\partial t =0$, and the arc-length parameterization of the swimming filament naturally satisfies this constraint. The tangent angle is specified as a composition of different Fourier modes: $$\begin{aligned}
\label{eq:Fourier_psi}
\varphi(s,t)=\sum\limits_{n=1}^{n^*} \left\{a_n \cos\left[\frac{2\pi n}{\Lambda}\left(s-ct\right)\right]+b_n \sin\left[\frac{2\pi n}{\Lambda}\left(s-ct\right)\right]\right\},\end{aligned}$$ where $$\begin{aligned}
\label{eq:F_coef}
a_n&= \frac{2}{\Lambda}\int_0^\Lambda \varphi(s,0) \cos\lb[\frac{2\pi n s}{\Lambda}\rb] \d s,\\
b_n&= \frac{2}{\Lambda}\int_0^\Lambda \varphi(s,0) \sin\lb[\frac{2\pi n s}{\Lambda}\rb] \d s, \quad n=1, 2, 3, ...\end{aligned}$$
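As a numerical sanity check of the coefficient formulas above (taking $\Lambda = 1$), the sketch below evaluates $a_n$ and $b_n$ for a square-wave tangent angle, i.e. the profile of a sawtooth waveform, by the midpoint rule, along with the ratio $\alpha = \lambda/\Lambda$. The grid size and bending angle are illustrative choices, not values used in the paper.

```python
import numpy as np

N = 20000
s = (np.arange(N) + 0.5) / N                  # midpoints on [0, 1), Lambda = 1
beta = np.deg2rad(80.0)                       # sawtooth bending angle
phi = np.where(s < 0.5, beta / 2, -beta / 2)  # square-wave tangent angle

def fourier_coeff(n):
    # a_n and b_n of the tangent angle, evaluated by the midpoint rule
    a = 2 * np.mean(phi * np.cos(2 * np.pi * n * s))
    b = 2 * np.mean(phi * np.sin(2 * np.pi * n * s))
    return a, b

a1, b1 = fourier_coeff(1)
alpha = np.mean(np.cos(phi))   # lambda / Lambda; equals cos(beta/2) here
# square wave: a_n = 0 for all n, b_n = 2*beta/(pi*n) for odd n, 0 for even n
```

The computed coefficients match the closed-form square-wave series, which makes this a convenient self-test before feeding the modes into an optimization.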
Resistive force theory {#subsec:RFT}
----------------------
In low Reynolds number swimming of a slender filament in a Newtonian fluid, the resistive forces are linearly dependent on the local velocity. The force per unit length exerted by the fluid on the swimmer body at location $s$ and time $t$ is given by $$\begin{aligned}
\bf(s,t) = -K_T \bu \cdot\bt\bt -K_N(\bu-\bu\cdot\bt\bt),\end{aligned}$$ where $K_N$ and $K_T$ are, respectively, the normal and tangential resistive coefficients. The self-propulsion of elongated filaments is possible because of drag anisotropy ($K_N \neq K_T $). A detailed discussion on this property can be found in the review paper by Lauga and Powers [@lauga2009hydrodynamics]. Recent experimental studies of direct force and motion measurements on undulatory microswimmers in viscous fluids find excellent agreement with RFT predictions [@friedrich2010high; @schulman2014dynamic]. The ratio $r_{K}= K_N/K_T$ varies with the slenderness ($L/r$) of the body. In the limit of an infinitely slender body, $L/r \rightarrow \infty$, $r_{K} \rightarrow 2$, which is the value adopted in this study.
For undulatory locomotion in dry granular media, we only consider the slow motion regime where grain-grain and grain-swimmer frictional forces dominate material inertial forces [@maladen2009undulatory]. The motion of the swimmer is confined to the horizontal plane such that the change of resistance due to depth is irrelevant. In this regime the granular particles behave like a dense frictional fluid in which the material is constantly stirred by the moving swimmer [@zhang2014effective]. The frictional force acting tangentially everywhere on the surface of a small cylindrical element is characterized by $C_{F}$, which is referred to as the flow resistance coefficient [@maladen2009undulatory]. The other contribution to the resistive forces is the in-plane drag-induced normal force, which is characterized by $C_{S}$. Note that $C_{S}$ is a constant because the drag is independent of the velocity magnitude. The normal resistive coefficient $C_\perp$ depends on the orientation ($\psi$) of the element with respect to the direction of motion (Fig. \[fig:schematic\]). In other words, the resistive force exerted by the granular material on the swimmer per unit length is $$\begin{aligned}
\label{eq:RFTGM}
\bf(s,t)= -C_\parallel \buh\cdot\bt\bt-C_\perp(\buh-\buh\cdot\bt\bt),\end{aligned}$$ where $$\begin{gathered}
\label{eq:RFTco}
C_\parallel = 2rC_F,\\
C_\perp(\psi) = 2rC_F+ \frac{2rC_S \sin \beta_0}{\sin \psi} = C_\parallel \left(1+\frac{C_S\sin\beta_0}{C_F\sin\psi}\right),\end{gathered}$$ $\tan \beta_{0}=\cot \gamma_{0} \sin \psi$ and $\gamma_0$ is a constant related to the internal slip angle of the granular media. Although a complete physical picture of the dependence of $C_\perp$ on the orientation $\psi$ remains elusive, the application of the granular RFT proves to be effective. Several studies have applied the granular RFT to study the locomotion of sand-swimming animals and artificial swimmers and found good agreement with experiments and numerical simulations [@maladen2011undulatory; @zhang2014effective]. A detailed discussion about the effectiveness of granular RFT on modelling sand-swimming can be found in a review article by Zhang and Goldman [@zhang2014effective].
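The orientation dependence of the drag coefficients can be sketched directly from these formulas. The parameter values below are placeholders for illustration only (the fitted values for different packings are tabulated by Maladen *et al*.), not the measured ones.

```python
import numpy as np

# Placeholder drag parameters -- illustrative, not the fitted values
C_F, C_S = 0.3, 0.5
gamma0 = np.deg2rad(13.0)   # angle related to the internal slip angle
r = 1.0                     # filament radius (arbitrary units)

def drag_coeffs(psi):
    """Tangential and normal resistive coefficients of the granular RFT."""
    beta0 = np.arctan(np.sin(psi) / np.tan(gamma0))   # tan(beta0) = cot(gamma0) sin(psi)
    C_par = 2 * r * C_F
    C_perp = C_par * (1 + C_S * np.sin(beta0) / (C_F * np.sin(psi)))
    return C_par, C_perp
```

Unlike the constant viscous ratio $r_K = K_N/K_T$, here the anisotropy $C_\perp/C_\parallel$ varies with the orientation $\psi$ of the element, which is the qualitative feature the granular RFT adds.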
An important parameter characterizing the response of dry GM to intrusion is the volume fraction $\phi$, which is defined as the ratio of the total volume of the particles to the occupied volume. The level of compaction affects the drag response, as closely packed (high $\phi$) GM expands to flow while loosely packed (low $\phi$) material consolidates [@maladen2009undulatory]. The drag parameters $C_{S}, C_{F}$ and $\gamma_{0}$ depend on the volume fraction of the GM. In our study, we refer to the GM with $\phi=0.58$ as loosely packed (LP) and to that with $\phi=0.62$ as closely packed (CP). The numerical values of the drag parameters are adopted from the paper by Maladen *et al*. [@maladen2009undulatory], where the forces at a fixed depth of $7.62$ cm were measured by towing a stainless steel cylinder.
Without external forcing, the self-propelled filament satisfies force-free and torque-free conditions:
$$\begin{aligned}
\bF& =\int_0^L \bf(s,t) \d s=\textbf{0}, \label{eq:Fbalance}\\
\bT &=\int_0^L [\bx(s,t)-\bx(0,t)] \times \bf(s,t)\d s= \textbf{0}.\label{eq:Tbalance}\end{aligned}$$
The granular RFT exhibits the symmetry property that $\bu \to -\bu$ results in $\bf \to -\bf$. Combining this symmetry with the kinematics of the undulatory locomotion (see Sec. \[subsec:kinematic\]), one can show that the velocities $-\dot{\bx}(0,t)$ and $-\dot{\theta}$ are solutions to the instantaneous motion under a reversal of the actuation direction ($c\to -c$) provided that $\dot{\bx}(0,t)$ and $\dot{\theta}$ are solutions to the original problem (without reversal of the actuation). This symmetry is of course present in viscous RFT and this commonality, as we shall show, leads to qualitatively similar swimming behaviors.
Swimming efficiency {#subsec:efficiency}
-------------------
The instantaneous swimming speed of the filament is given by $\dot{\bx}(0,t)$, and the mean swimming velocity is defined as $\bU = \left<\dot{\bx}(0,t)\right> = U_{x}\be_{x}+U_{y} \be_{y}$ with the magnitude $U=\lVert \bU\rVert$. The angle brackets $\left<...\right>$ denote time-averaging over one period $T$. The efficiency of the undulatory locomotion for a given deformation wave is defined by the ratio of the power required to drag the straightened filament through the surrounding substance to the power spent to propel the undulating body at the same velocity [@lighthill1975mathematica]. Hence, the efficiency for undulatory swimming of slender filaments in viscous fluid ($\eta_f$) and granular substance ($\eta_g$), respectively, are $$\begin{aligned}
\label{eq:effi}
\eta_f = \frac{K_T L U^2}{P}, \quad \eta_g = \frac{C_\parallel L U}{P},\end{aligned}$$ where $$\begin{aligned}
P = -\left<\int_0^L \bf(s,t)\cdot\bu(s,t)\d s\right> \cdot\end{aligned}$$ The minus sign makes $P$ positive, since $\bf$ is the resistive force exerted on the swimmer. The optimal swimming can then be interpreted as either swimming with the maximum speed at a given power or swimming with the minimum power at a given speed.
Waveforms {#subsec:waveform}
---------
We consider two typical planar waveforms that have been well studied in Newtonian swimming: the sinusoidal waveform, and the sawtooth waveform (Fig. \[fig:wave\]). The sinusoidal waveform can be described by its Cartesian coordinates: $$\begin{aligned}
\label{eq:SineCart}
Y = b \sin k(X+X_{0}),\end{aligned}$$ where $k = 2\pi/\lambda$ is the wave number, $kX_{0}$ is the initial phase angle of the waveform, and $b$ the wave amplitude. The dimensionless wave amplitude is defined as $\epsilon = kb$.
The sawtooth waveform, which consists of straight links with a bending angle $\beta$ ($\varphi = \pm \beta/2$), can be described as $$\begin{aligned}
\label{eq:sawtoothEq}
Y = \frac{2b}{\pi} \arcsin[\sin k(X+X_0)].\end{aligned}$$ The dimensionless amplitude is $\epsilon = kb = (\pi/2)\tan(\beta/2)$.
![image](wave.pdf)
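Both waveforms, and the stated amplitude relations, can be generated in a few lines. The numerical values of $k$, $b$ and $X_0$ below are arbitrary illustrations.

```python
import numpy as np

k, b, X0 = 2 * np.pi, 0.2, 0.0   # wavenumber (lambda = 1), amplitude, phase
X = np.linspace(0.0, 1.0, 1001)
Y_sine = b * np.sin(k * (X + X0))                        # sinusoidal waveform
Y_saw = 2 * b / np.pi * np.arcsin(np.sin(k * (X + X0)))  # sawtooth waveform

eps = k * b                       # dimensionless amplitude
# each sawtooth link has slope dY/dX = +/- 2bk/pi = +/- tan(beta/2),
# which gives eps = (pi/2) * tan(beta/2)
beta = 2 * np.arctan(2 * eps / np.pi)
```

The `arcsin(sin(.))` composition is what folds the sine into straight links of constant slope, so the bending angle follows directly from the link slope.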
Bodies of infinite length {#sec:inf}
=========================
For bodies of infinite length ($L\to\infty$), the swimming motion is steady and unidirectional, and hence $\dot{\theta}(t)=0$. Without loss of generality, we assume the filament propagates the deformation wave in the positive $x$-direction. Then the velocity of a material point on the body can be written as $$\begin{aligned}
\label{eq:INFV}
\bu = -U \be_x +V\be_x-c\bt,\end{aligned}$$ where $U$ is the swimming speed [@lighthill1975mathematica]. For an infinite swimmer, the unidirectional swimming velocity for a given waveform can be obtained from only the force balance in the $x$-direction, $\bF\cdot\be_x=0$, over a single wavelength,
$$\begin{aligned}
\int_0^\Lambda\left(\frac{C_S \sin\beta_0}{\sin\psi}+C_F\right)\buh\cdot\be_x\d s -\int_0^\Lambda\frac{C_S \sin\beta_0}{\sin\psi}(\buh\cdot\bt)\bt\cdot\be_x\d s=0.\label{eq:Fx0}\end{aligned}$$
The above integral equation can be solved numerically for $U$ for a given waveform in general, but is analytically tractable in certain asymptotic regimes, which we discuss below.
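For the special case of a sawtooth, the tangent angle is piecewise constant, so the integrand of the force balance is uniform along each link and, by symmetry between the two link orientations, the balance reduces to a scalar root-finding problem for $U$ on a single link. The sketch below solves it by bisection; the drag parameters are placeholders for illustration, not the fitted LP/CP values.

```python
import numpy as np

# Placeholder drag parameters (illustrative; not the fitted values)
C_F, C_S = 0.3, 0.5
gamma0 = np.deg2rad(13.0)
V, beta = 1.0, np.deg2rad(80.0)      # phase speed, sawtooth bending angle

def Fx(U):
    """Net x-force per unit length on one sawtooth link."""
    phi = beta / 2
    t = np.array([np.cos(phi), np.sin(phi)])   # unit tangent of the link
    c = V / np.cos(phi)                        # material speed along the body
    u = np.array([V - U, 0.0]) - c * t         # lab-frame velocity of a point
    uh = u / np.linalg.norm(u)
    cospsi = float(uh @ t)
    sinpsi = np.sqrt(max(1.0 - cospsi**2, 0.0))
    # sin(beta0)/sin(psi) = cot(gamma0)/sqrt(1 + cot(gamma0)^2 sin(psi)^2),
    # an analytic form that stays finite as psi -> 0 or pi
    cot = 1.0 / np.tan(gamma0)
    ratio = cot / np.sqrt(1.0 + (cot * sinpsi)**2)
    C_par = 2 * C_F                            # radius r absorbed into units
    C_perp = C_par * (1 + C_S * ratio / C_F)
    f = -C_par * cospsi * t - C_perp * (uh - cospsi * t)
    return f[0]

lo, hi = 1e-9, 0.999 * V   # Fx < 0 at lo (net thrust), > 0 near U = V (drag)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Fx(mid) < 0 else (lo, mid)
U = 0.5 * (lo + hi)        # swimming speed (body moves opposite to the wave)
```

With these placeholder parameters the sawtooth swims at a substantial fraction of the wave speed, consistent with the high propulsive anisotropy of the granular coefficients relative to the viscous case.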
Optimal shape: numerical results {#subsec:opt}
--------------------------------
A natural question for swimming organisms is how their swimming gaits evolve under the pressure of natural selection [@childress1981mechanics], since being able to swim does not necessarily mean one does it efficiently. The understanding of optimal swimming may reveal nature’s design principles and guide the engineering of robots capable of efficient self-propulsion. In response, the optimal strategies of several Newtonian swimming configurations have been studied. Becker *et al*. [@becker2003self] determined the optimal strategy of Purcell’s three-link swimmer under constant forcing and minimum mechanical work. Tam and Hosoi [@TamPRL] improved the swimming speed and efficiency of the optimal strategy of Purcell’s three-link swimmer by allowing simultaneous rather than sequential movement of both hinges (kinematic optimization). Using viscous RFT, Lighthill showed that the optimal flagellar shape has a constant angle between the local tangent to the flagellum and the swimming direction [@lighthill1976flagellar]. In 2D, the sawtooth profile with a tangent angle $\varphi \approx \pm 40^\circ$ (bending angle $\beta \approx 80^{\circ}$) was found to optimize the swimming efficiency of an infinitely long swimming filament. Alternatively, this solution can be obtained through a variational approach [@spagnolie2010optimal]. In 3D, Lighthill’s solution leads to an optimal shape of a rotating helix. More recently, Spagnolie and Lauga studied the optimal shapes of both finite and infinite elastic flagella by incorporating physical constraints such as bending and sliding costs [@spagnolie2010optimal]. Inspired by these investigations of optimal strategies for Newtonian swimming, we study the optimal shape for infinite swimmers in granular substrates using resistive force theory.
For bodies of infinite length, the optimal shape is time, scale and phase invariant [@spagnolie2010optimal]. Therefore, we take $\Lambda=L=1$ and consider the optimization for $t=0$. In other words, the local tangent angle for the optimization problem would be $$\begin{aligned}
\varphi(s,t=0)=\sum\limits_{n=1}^{n^*} a_n \cos(2\pi ns).\end{aligned}$$ We consider the optimal filament shape by maximizing the swimming efficiency $\eta$ defined in Sec. \[subsec:efficiency\]. Once the local tangent angle is obtained, the shape itself can be recovered by integration. The numerical methods used in this optimization can be found in the Appendix.
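The shape-recovery step can be sketched in a few lines. The following is a minimal pure-Python illustration (not the MATLAB implementation described in the Appendix): given truncated Fourier coefficients $a_n$, the tangent angle is evaluated and the centerline is recovered by integrating $x'(s)=\cos\varphi$, $y'(s)=\sin\varphi$ with the trapezoidal rule.

```python
import math

def tangent_angle(s, a):
    # phi(s) = sum_n a_n cos(2*pi*n*s): truncated Fourier series for the
    # local tangent angle at t = 0.
    return sum(an * math.cos(2 * math.pi * n * s) for n, an in enumerate(a, start=1))

def recover_shape(a, m=2000):
    # Recover the centerline from the tangent angle by integrating
    # x'(s) = cos(phi), y'(s) = sin(phi) over arclength s in [0, 1].
    s = [i / m for i in range(m + 1)]
    phi = [tangent_angle(si, a) for si in s]
    x, y = [0.0], [0.0]
    for i in range(m):
        h = s[i + 1] - s[i]
        x.append(x[-1] + 0.5 * h * (math.cos(phi[i]) + math.cos(phi[i + 1])))
        y.append(y[-1] + 0.5 * h * (math.sin(phi[i]) + math.sin(phi[i + 1])))
    return x, y

# A straight filament (all a_n = 0) has unit end-to-end extension;
# any nonzero mode shortens the extension below the arclength.
x0, y0 = recover_shape([0.0])
x1, y1 = recover_shape([1.0])
```

This scaffolding (coefficients $a_n$ as decision variables, shape recovered by quadrature) is the objective-evaluation core of the optimization described in the Appendix.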
![image](opt.pdf)
The optimal shapes found by maximizing the swimming efficiency are presented in Fig. \[fig:opt\] for a LP granular substrate (red dashed line), a CP granular substrate (blue dash-dot line), and a viscous Newtonian fluid (black solid line) as a comparison. First, it is interesting that the optimal shape remains a sawtooth despite the nonlinearity in the resistive force model of granular substrates. The optimal bending angles for LP and CP granular media are, respectively, $\beta \approx 80^{\circ}$ and $\beta \approx 87^{\circ}$. The associated efficiencies of the optimal shapes are around $0.56$ for LP and $0.51$ for CP granular substrates, which are much greater than that of Newtonian swimming. In spite of the difference in the surrounding media, the optimal bending angles for granular substrates and viscous Newtonian fluids lie within the same range; in particular, the optimal sawtooth in LP closely resembles that in Newtonian fluids.
We argue that it is not surprising that the sawtooth waveform is optimal in both the viscous RFT and the nonlinear granular RFT. Suppose there is an angle that maximizes the efficiency of a local element. Without any penalty, the globally optimal shape would be the one that is locally optimal everywhere along the body. As a result, a local resistive force model should exhibit an optimal shape of a certain sawtooth waveform. Using this argument, we can simply drop the integration (or assume it is a sawtooth) in Eq. (\[eq:Fx0\]) and consider the local optimality. The local optimal angle obtained is indeed the same as that found using numerical global optimization (see Sec. \[subsec:sawNsine\]).
The existence of a locally optimal tangent angle $\varphi$ originates from the physical picture introduced by the drag-based propulsion model [@lauga2009hydrodynamics] (Fig. \[fig:schematic\]). Let $\bu_{d}=u_{d}\be_{y}$ be the transverse deformation velocity of an infinite swimming filament. This deformation generates a propulsive force, perpendicular to the direction of the deformation velocity, given by $\bf_{\textrm{prop}} = -(C_{\perp}(\psi)-C_{\parallel}) \sin\varphi\cos\varphi \be_{x}$. Therefore, the propulsive force arising from a local deformation of the filament scales with its orientation as $\sin\varphi\cos\varphi/\sqrt{\tan^{2}\gamma_{0}+\cos^{2}\varphi}$, the maximum of which is achieved when $\varphi \approx 64^{\circ}$. However, as the tangent angle increases, the power consumption of the swimming filament increases. As a result, the swimmer tends to reduce the tangent angle to decrease the energy expenditure while maintaining a relatively high propulsive force. It is the interplay of these two factors that determines the optimal tangent angle.
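The orientation dependence of the propulsive force can be explored numerically. The sketch below (pure Python; the values of $\tan\gamma_{0}$ are placeholders, since no numerical value is fixed at this point in the text) locates the angle maximizing $\sin\varphi\cos\varphi/\sqrt{\tan^{2}\gamma_{0}+\cos^{2}\varphi}$ by a grid search, confirming that it lies strictly between $45^{\circ}$ (the large-$\tan\gamma_{0}$ limit, where the scaling reduces to $\sin\varphi\cos\varphi$) and $90^{\circ}$ (the small-$\tan\gamma_{0}$ limit).

```python
import math

def prop_scaling(phi, tan_g0):
    # sin(phi) cos(phi) / sqrt(tan^2(gamma_0) + cos^2(phi)): the
    # orientation dependence of the local propulsive force quoted above.
    return math.sin(phi) * math.cos(phi) / math.sqrt(tan_g0**2 + math.cos(phi)**2)

def argmax_deg(tan_g0, m=20000):
    # Grid search for the maximizing angle over phi in (0, 90 deg).
    best_phi, best_val = 0.0, -1.0
    for i in range(1, m):
        phi = 0.5 * math.pi * i / m
        v = prop_scaling(phi, tan_g0)
        if v > best_val:
            best_phi, best_val = phi, v
    return math.degrees(best_phi)

# The maximizing angle moves from near 90 deg (small tan(gamma_0))
# toward 45 deg (large tan(gamma_0)). The tan(gamma_0) values below
# are illustrative only.
angles = [argmax_deg(t) for t in (0.1, 1.0, 10.0)]
```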
Sawtooth and sinusoid {#subsec:sawNsine}
---------------------
The swimming speed of an infinite sawtooth in viscous fluids can be expressed as $$\begin{aligned}
\label{eq:sawNewtoninf}
\frac{U}{V} = \frac{1-\cos\beta}{3-\cos\beta} \cdot\end{aligned}$$
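Eq. (\[eq:sawNewtoninf\]) is straightforward to check numerically: $U/V$ increases monotonically with the bending angle, approaches $1/2$ as $\beta \to \pi$, and evaluates to roughly $0.29$ at the optimal angle $\beta \approx 80^{\circ}$ quoted earlier. A quick pure-Python check:

```python
import math

def sawtooth_speed_newtonian(beta):
    # U/V = (1 - cos(beta)) / (3 - cos(beta)) for an infinite sawtooth
    # swimming in a viscous fluid; beta is the bending angle in radians.
    return (1.0 - math.cos(beta)) / (3.0 - math.cos(beta))

betas = [math.radians(d) for d in range(1, 180)]
speeds = [sawtooth_speed_newtonian(b) for b in betas]

# At the optimal bending angle beta ~ 80 deg, U/V ~ 0.29.
u_opt = sawtooth_speed_newtonian(math.radians(80))
```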
For a sawtooth profile in granular substrates, although an explicit analytical solution cannot be extracted, an implicit algebraic equation for the swimming speed $U$ can be obtained since the local resistive forces do not vary along the body: $$\begin{aligned}
\label{eq:SawtoothINF}
\left(\frac{C_S \sin\beta_0}{\sin\psi}+C_F\right)\buh\cdot\be_x -\frac{C_S \sin\beta_0}{\sin\psi}(\buh\cdot\bt)\bt\cdot\be_x=0,\end{aligned}$$ where $\bt\cdot\be_x = \cos(\beta/2)$. We then solve Eq. (\[eq:SawtoothINF\]) numerically (see Appendix) with the same convergence criterion as in the optimization (Sec. \[subsec:opt\]). For a sinusoidal wave in granular media, a simplification like Eq. (\[eq:SawtoothINF\]) is not available and we therefore directly solve Eq. (\[eq:Fx0\]) with the numerical method outlined in the Appendix.
For small amplitude sawtooth waveforms ($\epsilon \ll 1$), or small bending angle $\beta$, we obtain an asymptotic solution of the swimming speed $U$. Note that the swimming speed is invariant under a phase shift of $\pi$, which is equivalent to a sign change in the amplitude: $\epsilon \rightarrow -\epsilon$. Assuming a regular expansion in $\epsilon$, this symmetry argument leads to a quadratic scaling of the swimming speed in the wave amplitude [@pak2014theoretical] $$\begin{aligned}
\frac{U}{V}\sim \frac{4\cos\gamma_0C_S}{\pi^2C_F}\epsilon^2 \cdot\end{aligned}$$ When the bending angle is large, another asymptotic limit can be obtained. The swimming speed $U/V$ approaches a constant as $\beta \to \pi$ and analytically we find that $$\begin{aligned}
\frac{U}{V} \sim \frac{C_{S}}{C_{S}+C_{F}\tan\gamma_{0}} \cdot\end{aligned}$$ One can also show that this large amplitude asymptotic limit for a sawtooth equals that of a sinusoidal wave. For small amplitude sinusoidal waveforms, however, the nonlinearity of the shape and the resistive forces results in a non-uniform integral and a slowly converging asymptotic series. To leading order, the swimming speed $U/V$ scales as $\epsilon^{2}/\ln(1/\lvert\epsilon\rvert) $, which does not agree well with the numerical results even for $\epsilon< 0.1 $ because the truncated higher-order terms are not significantly smaller. We present the small and large amplitude asymptotic solutions for the granular swimming of a sawtooth profile in Fig. \[fig:inf\](a). The asymptotic solutions agree well with the numerical solutions even for wave amplitudes close to one. Fig. \[fig:inf\](b) shows the efficiency of swimming as a function of the bending angle for an infinite sawtooth in both granular media and viscous fluids. For the swimming efficiency, a global maximum over the bending angle exists for both viscous and granular swimming. Note that the optimal angles obtained here are equal to those obtained via the global optimization (Sec. \[subsec:opt\]).
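Both asymptotic formulas can be sanity-checked numerically. In the sketch below the constants $C_S$, $C_F$, and $\gamma_0$ are illustrative placeholders (no specific values are fixed in the text at this point); the checks confirm the quadratic scaling of the small-amplitude speed in $\epsilon$ and that the large-amplitude plateau lies between $0$ and $1$.

```python
import math

# Placeholder material parameters, for illustration only.
C_S, C_F, gamma0 = 1.0, 2.0, math.radians(15)

def u_small_amplitude(eps):
    # U/V ~ (4 cos(gamma_0) C_S / (pi^2 C_F)) eps^2 for eps << 1 (sawtooth).
    return 4.0 * math.cos(gamma0) * C_S / (math.pi**2 * C_F) * eps**2

def u_large_amplitude():
    # U/V -> C_S / (C_S + C_F tan(gamma_0)) as beta -> pi.
    return C_S / (C_S + C_F * math.tan(gamma0))

# Quadratic scaling: doubling eps quadruples the leading-order speed.
ratio = u_small_amplitude(0.2) / u_small_amplitude(0.1)
plateau = u_large_amplitude()
```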
![image](inf.pdf)
![image](sawnsine.pdf)
In Fig. \[fig:sawNsine\], we compare the swimming speed and efficiency of sawtooth and sinusoidal waveforms in both GM and Newtonian fluids as a function of the wave amplitude $\epsilon$. In both GM and Newtonian fluids, the swimming speed of a sawtooth is only slightly different from that of a sinusoid with the same dimensionless amplitude. This small difference indicates that the effects of the local curvature variations are not significant in either the granular or the viscous RFT. Although the sawtooth is found to be the mathematically optimal shape, the undulatory gait of a sandfish resembles a smooth sinusoidal waveform [@maladen2009undulatory]. The slight difference in swimming performance between the two waveforms presented in this section might justify the adoption of a sinusoidal waveform instead of the mathematically optimal sawtooth waveform, since the kinks in the sawtooth may involve other energetic costs associated with bending and the deformation of the internal structure of the body [@spagnolie2010optimal].
Bodies of finite length {#sec:fin}
=======================
The infinite swimmer model only enforces a force balance in one direction and hence a swimmer is confined to swim only unidirectionally without any rotation. In reality, however, a swimmer has a finite size and more complex swimming kinematics, including transverse motion relative to the wave propagation direction and rotation. Previous studies employed slender body theory to investigate the swimming motion of finite filaments in a viscous Newtonian fluid and their swimming performance in relation to the number of wavelengths and the filament length [@pironneau1974optimal; @higdon1979hydrodynamic; @spagnolie2010optimal; @koehler2012pitching]. In this section, we investigate the swimming characteristics of finite-length sinusoidal swimmers in a granular medium and compare with their Newtonian counterparts. The numerical methods implemented to solve the equations of motion of a finite length swimmer are given in the Appendix.
Geometries {#subsec:geom}
----------
![image](oddeven.pdf)
For an undulating sinusoidal filament, the initial shape of the swimmer is determined by the number of waves $N$, the wave amplitude $\epsilon$, and the initial phase angle $kX_{0}$ (Eq. (\[eq:SineCart\])). The two specific categories of shapes that possess odd or even symmetry for a single wave sinusoidal swimmer are shown in Fig. \[fig:oddEven\]. A swimmer in an odd configuration is one that has point symmetry about the midpoint of the filament, as seen in Fig. \[fig:oddEven\](a), while an even configuration is one that possesses mirror symmetry about the vertical line through the midpoint, as in Fig. \[fig:oddEven\](b). In our paper, the shapes shown in Fig. \[fig:oddEven\](a) are referred to as odd sine swimmers, while even cosine swimmers are those shown in Fig. \[fig:oddEven\](b). Note that an even sine swimmer would be one with the number of waves $N \in \{1/2, 3/2,5/2, ...\}$ and a phase angle $kX_{0} \in \{ 0,\pm \pi, \pm2\pi, ...\}$; an even cosine swimmer is one with the number of waves $N \in \{1, 2, 3, ... \}$ and a phase angle $kX_{0}\in \{\pm\pi/2, \pm3\pi/2, ... \}$.
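The symmetry classification can be verified numerically. The sketch below assumes the waveform $y(x) = \epsilon\sin(2\pi x + kX_{0})$ on $x \in [0, N]$ (a form consistent with the description here; the exact parametrization of Eq. (\[eq:SineCart\]) appears in an earlier section), and tests point symmetry about the midpoint versus mirror symmetry about the vertical line through the midpoint.

```python
import math

def waveform(x, kX0, eps=1.0):
    # Assumed form y = eps * sin(2*pi*x + kX0): wavelength 1, so the
    # filament spans x in [0, N] for N waves; kX0 is the phase angle.
    return eps * math.sin(2.0 * math.pi * x + kX0)

def is_point_symmetric(N, kX0, m=400, tol=1e-9):
    # Odd configuration: y(x_mid + d) = -y(x_mid - d) for all offsets d.
    xm = 0.5 * N
    return all(abs(waveform(xm + d, kX0) + waveform(xm - d, kX0)) < tol
               for d in (xm * i / m for i in range(m + 1)))

def is_mirror_symmetric(N, kX0, m=400, tol=1e-9):
    # Even configuration: y(x_mid + d) = y(x_mid - d) for all offsets d.
    xm = 0.5 * N
    return all(abs(waveform(xm + d, kX0) - waveform(xm - d, kX0)) < tol
               for d in (xm * i / m for i in range(m + 1)))

odd_sine = is_point_symmetric(1, 0.0)           # N = 1, kX0 = 0: odd sine swimmer
even_cos = is_mirror_symmetric(1, math.pi / 2)  # N = 1, kX0 = pi/2: even cosine swimmer
```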
Pitching, drifting and reorientation
------------------------------------
Unlike the swimming of an infinite length undulatory swimmer, whose motion is steady and unidirectional, the locomotion of a finite filament may also include net motion normal to the initial wave propagation direction, also referred to as drifting, and unsteady rotational motion, known as pitching. Here we characterize in GM the re-orientation of a finite swimmer that results in drifting, and the dependence of swimming performance on pitching motion, previously reported to diminish performance in viscous Newtonian media [@spagnolie2010optimal; @koehler2012pitching].
![image](traj.pdf)
For an even symmetry filament in viscous fluids, Koehler *et al*. [@koehler2012pitching] showed that the velocity of the center of mass is along the centerline of the waveform, hence the net drifting is zero. This argument relies on the kinematic reversibility of Stokes flow: reflection about the vertical line is equivalent to a time reversal (or reversing the direction of the actuation), so the instantaneous swimmer is identical to the mirror reflection of its time-reversal; since linearity requires the velocity to reverse under time reversal, one can show that the transverse component of the velocity is zero. As a result, the net displacement in one period for a filament that starts in the even configuration is along the initial waveform centerline.
Although the granular RFT is nonlinear, the aforementioned symmetry property ($\bu \to -\bu \Rightarrow \bf \to -\bf$, see Sec. \[subsec:RFT\]) means that the same argument for an even symmetry swimmer can be made in GM. Therefore, zero net transverse motion is achieved if the swimmer starts with an even symmetry, which is also corroborated by the numerical simulation. Fig. \[fig:traj\] shows the head trajectories of two swimming sinusoidal filaments with the same wave amplitude ($\epsilon=1$), one starting with even symmetry and the other with odd symmetry. The net displacement of the even cosine swimmer is in the negative $x$-direction, which is the opposite direction of the wave propagation at $t=0$. The odd sine swimmer, however, drifts upward in the positive $y$-direction over time.
![image](theta.pdf)
The swimming behavior presented in Fig. \[fig:traj\] can be understood by examining the periodic instantaneous motion of the swimmer. In the moving frame, or the Lagrangian frame, the instantaneous motion of the swimmer can be viewed as being pulled through a waveform-shaped tube [@koehler2012pitching]. This motion, in turn, causes rotation and translation of the Lagrangian frame. The instantaneous rotation of the Lagrangian frame is described by $\theta(t)$, which is periodic due to the periodicity of the wave propagation. The average of $\theta(t)$ over one period, denoted as $\left< \theta\right>$, describes the average swimming direction. This angle $\left< \theta \right>$ is the same in every period, which results in a straight line trajectory on average. If a filament starts with an odd (even) configuration at $t =0$ (if aligned with the $x$-axis then $\theta_0=0$), it would possess even (odd) symmetry at $t = T/4$. Thus the filament alternates between even symmetry and odd symmetry after successive time steps of $T/4$. In this viewpoint, $\left<\theta\right>-\theta_0$ characterizes the amount of time $t_{1}$ required for the filament to reorient itself such that it reaches an even symmetry. After that, the swimmer would move in the direction of the waveform centerline at $t=t_{1}$. For a fixed number of waves $N$ and amplitude $\epsilon$, the odd configuration requires the largest amount of time ($T/4$) to reach an even symmetry, and therefore has the largest angle of reorientation. Note that the angle of reorientation should be distinguished from pitching of the swimmer, which is the instantaneous rotation of the swimmer about its waveform centerline.
In Fig. \[fig:theta\], we present parametric plots of the absolute value of the angle of reorientation $\lvert\left< \theta \right>-\theta_0\rvert$ by varying the wave phase angle $kX_{0}$ and the amplitude $\epsilon$ in both GM and viscous fluids. The number of waves is fixed as $N=1$, which approximates the shape of an undulating sandfish body [@maladen2009undulatory]. Note that a phase shift of $\pi$ would result in a reversal of the direction of the transverse motion, hence the sign of $\left<\theta\right>-\theta_0$. In both GM and Newtonian fluids, the maximum in $\lvert\left< \theta \right>-\theta_0\rvert$ is obtained when the filament possesses an odd symmetry at $t=0$, i.e., $kX_{0} \in\{ 0, \pi, 2\pi, ...\}$. For shapes that possess even symmetry, namely, $kX_{0} \in\{ \pi/2, 3\pi/2, ...\}$, zero transverse motion is observed. Within our parameter range, a maximum in $\lvert\left< \theta \right>-\theta_0\rvert$ is achieved around an intermediate value of the amplitude for a given phase angle. As an example, the variation of $\lvert\left< \theta \right>-\theta_0\rvert$ with the amplitude $\epsilon$ for the odd configuration is shown in Fig. \[fig:theta\](d). The largest amount of reorientation of an odd swimmer is achieved when $\epsilon \approx 1-1.2$ in GM while $\epsilon \approx 2.2$ in viscous fluids. We also note that the angle of reorientation decreases with increasing wave amplitude in the large-amplitude region ($\epsilon>2$).
![\[fig:tm\] Maximum instantaneous pitching angle $\theta_{\textrm{mp}}$ as a function of the wave amplitude $\epsilon$ for single wave ($N=1$) sinusoidal swimmers in GM and Newtonian fluids. ](tm.pdf)
Although the transverse motion of the even configuration is minimal, the instantaneous pitching, $\theta(t)-\left< \theta \right>$, which generally diminishes performance, can be significant. Multiple metrics have been used to characterize pitching of a swimmer [@koehler2012pitching; @spagnolie2010optimal]; here we use the maximal amount of instantaneous pitching a swimmer can experience in one cycle of its motion, $\theta_{\textrm{mp}} = \lvert\theta(t)-\left< \theta \right>\rvert_{\max}$. Fig. \[fig:tm\] shows the maximal instantaneous pitching angle $\theta_{\textrm{mp}}$ for single wave sinusoidal swimmers in GM and Newtonian fluids. The maximal instantaneous pitching angle of a single wave sinusoid goes up to about $15^{\circ}$ in loosely packed GM and around $19^{\circ}$ in closely packed GM.
The instantaneous pitching of the swimmer results in a tortuous motion with a net swimming speed smaller than that of an infinite sinusoid. For a fixed number of waves and wave amplitude, a phase shift only leads to a variation in the direction of swimming. In other words, the velocity magnitude $U$ is independent of $kX_{0}$ but the $x$ and $y$ components vary. From a control point of view, one can change the phase angle of an artificial sinusoidal swimmer to obtain the desired direction of swimming.
Swimming performance
--------------------
The two typical metrics for swimming performance used in the literature are the dimensionless swimming speed $U/V$ and the swimming efficiency $\eta$, see Eq. (\[eq:effi\]). For a sinusoidal swimmer, the performance depends on the dimensionless amplitude $\epsilon$ and the number of waves $N$. Note that the initial phase angle $kX_{0}$ does not affect the two performance metrics. The desired motion of a finite swimmer is its translation; therefore, the optimization of a finite sinusoidal filament requires minimizing pitching.
![\[fig:UNGV\]Swimming speed $U/V$ as a function of the dimensionless amplitude $\epsilon$ for different number of waves $N$ in (a) loosely packed GM and (b) closely packed GM. The solid lines denote the swimming speed of an infinite sinusoid.](ungv.pdf)
![\[fig:efi\]Swimming efficiency $\eta$ as a function of the dimensionless amplitude $\epsilon$ for different number of waves $N$ in (a) loosely packed GM and (b) closely packed GM. The shaded regions represent the observed values of $\epsilon$ for lizards reported in the literature [@maladen2009undulatory; @ding2012mechanics].](efi.pdf)
For an undulatory finite filament in viscous fluids, several studies have characterized the swimming performance and optimal strategies. Spagnolie and Lauga reported that the local maxima in swimming efficiency occur for around half-integer number of waves ($N \approx 3/2, 5/2, ...,$) when the bending cost is small [@spagnolie2010optimal]. Later studies by Koehler *et al*. [@koehler2012pitching] and Berman *et al*. [@berman2013undulatory] also showed that, for a sinusoidal swimmer, local maxima in performance are achieved for close to half-integer number of waves where pitching is small.
We first verify that the swimming velocity (Fig. \[fig:UNGV\]) and efficiency (Fig. \[fig:efi\]) of a finite sinusoidal swimmer in GM both converge to that of an infinite sinusoidal swimmer as the number of waves $N$ increases. For a single wave sinusoid ($N=1$) in loosely packed GM, the optimal dimensionless amplitude that maximizes the efficiency is $\epsilon\approx 1.68$. As the number of waves increases, the optimal dimensionless amplitude approaches that of an infinite sinusoid ($\epsilon \approx 1.33$). Similar observations can be made for closely packed GM. We also observe that for a given dimensionless amplitude $\epsilon$, the difference in the swimming velocity (or efficiency) between a short swimmer ($N=1$) and an infinite swimmer can be associated with the pitching motion: the largest difference in swimming speed (or efficiency) between the $N=1$ and $N=\infty$ swimmers occurs in the region $\epsilon \approx 1$ in Figs. \[fig:UNGV\] and \[fig:efi\], which is also the region where pitching is the most significant (Fig. \[fig:tm\]).
![\[fig:N\] (a) Swimming speed as a function of the number of waves in GM. (b) Swimming efficiency as a function of the number of waves in GM. The dimensionless amplitude is fixed ($\epsilon=1$).](n.pdf)
For a given waveform, the amount of pitching can be altered by changing the number of waves $N$. We investigate in Fig. \[fig:N\] the dependence of the performance metrics on the number of waves for a finite sinusoidal swimmer, keeping the dimensionless amplitude fixed at $\epsilon=1$. Rather than approaching the swimming velocity (or efficiency) of the corresponding infinite sinusoid monotonically with increasing number of waves, the swimming speed and efficiency exhibit local maxima and minima. Similar to the Newtonian case, the local maxima in efficiency and swimming speed occur for the number of waves close to (but not equal to) half-integers. The volume fraction of the GM has no significant influence on the number of waves where local maxima in swimming performance occur. As shown in Fig. \[fig:N\], the first local maximum in swimming performance for the number of waves greater than one occurs around $N\approx1.4$. The maxima in swimming performance are associated with minimal pitching as shown in Fig. \[fig:tmp\]. Finally we note that although both of the first two local maxima have minimal pitching (Fig. \[fig:tmp\]), the swimmer with more waves ($N \approx 2.5$) still displays better swimming performance, which can be attributed to a smaller bobbing motion [@koehler2012pitching] (the relative motion of the swimmer's center of mass with respect to the net swimming direction).
![\[fig:tmp\] Maximum instantaneous pitching angle as a function of the number of waves in GM. The dimensionless amplitude is fixed ($\epsilon=1$).](tmp.pdf)
Finally, we relate our findings to biological observations; we show, in the shaded regions of Fig. \[fig:efi\], the observed dimensionless amplitude (amplitude-to-wavelength ratio) for lizards reported in the literature ($\epsilon = 1.20-1.38$) [@maladen2009undulatory; @ding2012mechanics]. We see, in the case of both loosely-packed and closely-packed granular media, that the biologically observed range of wave amplitudes samples high efficiencies not far from optimal ($\epsilon \approx 1.69$ for LP and $\epsilon\approx 1.95$ for CP for $N=1$). Since the efficiency peak is broad, a swimmer may adopt a close-to-optimal shape at the expense of only a modest drop in swimming efficiency to address other constraints (such as bending costs or internal dissipation).
Conclusion {#sec:discussion}
==========
In this paper, we have investigated locomotion of slender filaments in granular media using a resistive force theory proposed by Maladen *et al*. [@maladen2009undulatory]. While previous work focused on infinite swimmers (or 1-D swimming), in reality a swimmer has a finite size, which leads to more complex swimming motion. By taking into account full force and torque balances, a finite swimmer is no longer confined to swim in a straight trajectory. The orientation of the swimmer can be controlled by adjusting the features of the waveform such as the amplitude, phase, and number of wavelengths, allowing a swimmer to move from an initial position to a final destination via a more complex, designated trajectory. These degrees of freedom enable the control of swimmers without the use of any external fields to actively steer the swimmer. Our studies characterize this complex swimming motion in granular media, which may be useful for the development of programmable and efficient autonomous locomotive systems in such environments, but also suggest that swimmers in nature are themselves closely tuned for optimality.
We also find that undulatory locomotion of filaments in granular media is strikingly similar to that in viscous fluids. We compared a number of observations made for swimming in viscous fluids with RFT, both for finite and infinite swimmers, and found qualitatively similar behavior using granular resistive force theory despite the nonlinearity of the force law. The reason largely comes down to two key similarities. The first is that both laws are local and thus ignore interactions of distinct parts of the body through the medium in which they swim. Ultimately this leads to the finding that a sawtooth profile optimizes locomotion in both viscous fluids and granular media. The second is that both force laws display the symmetry that $\bu\to-\bu$ results in $\bf\to-\bf$. This leads to a kinematic reversibility in both cases, where a reversal of the wave speed leads to a reversal of the translational and rotational motion of the swimmer, and hence a myriad of qualitatively similar behaviors that we have explored and quantified in the paper.
Funding (to GJE) from the Natural Science and Engineering Research Council of Canada (NSERC) is gratefully acknowledged.
Numerical implementation {#sec:appendix.}
========================
In this appendix, we present the numerical methods implemented in the optimization of infinite filaments and the solution to the equations of motion of finite length filaments.
Optimization {#subsec:APPENDIX_opt}
------------
The numerical optimization (see Sec. \[subsec:opt\]) is performed using MATLAB’s built-in *fminsearch* function, which implements the Nelder-Mead simplex algorithm. We truncate the Fourier series by taking $n^*=100$ to ensure sufficient spectral accuracy and use $m=1000$ points for the Gauss-Legendre integration scheme. Further increases in spectral and spatial resolution have a negligible effect on the optimization. The optimization search routine iterates until the algorithm converges to a local solution within a relative error tolerance of $10^{-14}$. For each iteration, the swimming speed $U$ is obtained by solving the force balance in the swimming direction using MATLAB’s *fzero* function, which runs until a relative error of $10^{-16}$ is reached. A variety of shapes are provided as initial guesses to start the optimization. The optimization calculation is iterated by taking the converged shape of the previous calculation as the initial guess until the shapes acquired in two successive calculations are consistent. The optimal shape obtained does not vary with the initial guess.
We validate our approach by solving for the optimal shape in the Newtonian case. For Newtonian swimming, the swimming speed $U$ can be obtained by a simple matrix inversion due to linearity. The optimal shape obtained from our numerical approach agrees with the analytical solution of Lighthill [@lighthill1975mathematica].
Numerical solution to the equations of motion for finite swimmers {#subsec:appendix_num}
-----------------------------------------------------------------
The study of swimming characteristics requires solving the force and torque balance of the finite swimmer as formulated in Sec. \[sec:form\]. The force- and torque-free conditions posed in Eq. (\[eq:Fbalance\]) and (\[eq:Tbalance\]) provide a system of non-linear ordinary differential equations (ODEs) for the swimmer’s linear and angular velocities in terms of its instantaneous location and orientation. The instantaneous velocities, once obtained, can in turn be integrated over time to determine the trajectory, location and orientation of the swimmer.
Having assumed that the centerline of the waveform is initially aligned with the $x-$axis of the lab frame, we solve the swimming problem numerically. Starting at $t=0$ with a time step $\Delta t$, we denote $t_i = i\Delta t$. With this notation, we employ a second order multi-step finite difference method to discretize the ODEs such that $$\begin{aligned}
\bx_{i+1} =\frac{4}{3}\bx_{i}-\frac{1}{3}\bx_{i-1}+\frac{2\Delta t}{3}(2\dot{\bx}_i-\dot{\bx}_{i-1}).\end{aligned}$$ We do similarly for $\theta_{i+1}$, and then $\Theta_{i+1}$ can be computed. To initialize this numerical scheme, we need both $[\bx_0, \theta_0]$ and $[\bx_1, \theta_1]$. At the first time step, $[\bx_1, \theta_1]$ is computed using the fourth-order Runge-Kutta method. At each time step ($i \geq 1$), we obtain $[\dot{\bx}_i, \dot{\theta}_i]$ by solving the integral equations for force and torque balance using Gauss-Legendre quadrature integration coupled with MATLAB’s *fsolve* routine. We use $m$ points along the filament for the Gauss-Legendre integration method. The *fsolve* routine in the MATLAB Optimization Toolbox attempts to solve a system of equations by minimizing the sum of squares of all the components. We set the termination tolerance on both the function value and the independent variables to $10^{-14}$. We generally use $m=1000$ points along the filament and $T_{m}=500$ time steps for one period $T$ of the motion. The number of Fourier modes is taken as $n^{*}=100$. Further increases in the number of spatial points or time steps have no significant influence on the accuracy of the results. All the numerical simulations are performed using MATLAB.
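The explicit two-step formula above (a BDF2-type scheme with an Adams-Bashforth-style extrapolation of the derivative) can be tested on a scalar model problem. The sketch below, in pure Python standing in for the MATLAB implementation, bootstraps the first step with fourth-order Runge-Kutta as described above and confirms second-order convergence on $\dot{x} = -x$.

```python
import math

def rk4_step(f, x, t, h):
    # Classical fourth-order Runge-Kutta step, used only to bootstrap x_1.
    k1 = f(t, x)
    k2 = f(t + 0.5 * h, x + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, x + 0.5 * h * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, t_end, h):
    # x_{i+1} = (4/3) x_i - (1/3) x_{i-1} + (2h/3)(2 x'_i - x'_{i-1}):
    # the explicit second-order multistep formula quoted above.
    xs = [x0, rk4_step(f, x0, 0.0, h)]
    n = round(t_end / h)
    for i in range(1, n):
        fi = f(i * h, xs[i])
        fim1 = f((i - 1) * h, xs[i - 1])
        xs.append(4.0 / 3.0 * xs[i] - 1.0 / 3.0 * xs[i - 1]
                  + 2.0 * h / 3.0 * (2.0 * fi - fim1))
    return xs[-1]

# Model problem x' = -x, x(0) = 1, exact solution exp(-t).
f = lambda t, x: -x
err1 = abs(integrate(f, 1.0, 1.0, 0.01) - math.exp(-1.0))
err2 = abs(integrate(f, 1.0, 1.0, 0.005) - math.exp(-1.0))
# Halving h should reduce the error by roughly a factor of 4 (second order).
```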
[^1]: Electronic mail: opak@scu.edu
[^2]: Electronic mail: gelfring@mech.ubc.ca
---
abstract: 'In this paper, we study the inverse boundary value problem for the wave equation with a view towards an explicit reconstruction procedure. We consider both the anisotropic problem where the unknown is a general Riemannian metric smoothly varying in a domain, and the isotropic problem where the metric is conformal to the Euclidean metric. Our objective in both cases is to construct the metric, using either the Neumann-to-Dirichlet (N-to-D) map or Dirichlet-to-Neumann (D-to-N) map as the data. In the anisotropic case we construct the metric in the boundary normal (or semi-geodesic) coordinates via reconstruction of the wave field in the interior of the domain. In the isotropic case we can go further and construct the wave speed in the Euclidean coordinates via reconstruction of the coordinate transformation from the boundary normal coordinates to the Euclidean coordinates. Both cases utilize a variant of the Boundary Control method, and work by probing the interior using special boundary sources. We provide a computational experiment to demonstrate our procedure in the isotropic case with N-to-D data.'
author:
- 'Maarten V. de Hoop[^1]'
- 'Paul Kepley [^2]'
- 'Lauri Oksanen [^3]'
bibliography:
- 'Bibliography.bib'
title: 'Recovery of a Smooth Metric via Wave Field and Coordinate Transformation Reconstruction [^4]'
---
inverse problem, wave equation, boundary control, Riemannian metric
35R30, 35L05
Introduction
============
We study the inverse boundary value problem for the wave equation from a computational point of view. Specifically, let $M \subset \R^n$ be a compact connected domain with smooth boundary $\p M$, and let $c(x)$ be an unknown smooth strictly positive function on $M$. Let $u = u^f$ denote the solution to the wave equation on $M$, with Neumann source $f$, $$\label{wave_eq_iso}
\begin{array}{rcl}
\p_t^2 u - c^2(x)\Delta u &=& 0, \quad \textnormal{in $(0,\infty) \times M$}, \\
\p_{\vec n} u|_{x \in \p M} &=& f, \\
u|_{t=0} = \p_t u|_{t=0} &=& 0.
\end{array}$$ Here $\vec n$ is the inward pointing (Euclidean) unit normal vector on $\p M$. Let $T > 0$ and let $\Rec \subset \p M$ be open. We suppose that the restriction of the Neumann-to-Dirichlet (N-to-D) map on $(0,2T) \times \Rec$ is known, and denote this map by $\Lambda_{\Rec}^{2T}$. It is defined by $$\Lambda_{\Rec}^{2T} : f \mapsto u^f|_{(0,2T) \times
\Rec}, \quad f \in C_0^\infty((0,2T) \times \Rec).$$ The goal of the inverse boundary value problem is to use the data $\Lambda_{\Rec}^{2T}$ to determine the wave speed $c$ in a subset $\Omega \subset M$ modelling the region of interest.
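To make the data concrete, the map $\Lambda_{\Rec}^{2T}$ can be simulated in one space dimension. The sketch below (a minimal pure-Python leapfrog scheme, not the solver used in our experiments, and only a 1-D analogue of the $n$-dimensional setting above) takes $M = [0,1]$ with constant wave speed $c$, applies a Neumann pulse at $x = 0$, and records the Dirichlet trace at a receiver at $x = 1$; all numerical parameters are illustrative.

```python
import math

def neumann_trace(f, c=1.0, L=1.0, T=1.5, nx=200, cfl=0.5):
    # Leapfrog finite differences for u_tt = c^2 u_xx on [0, L] with
    # Neumann data du/dn = f(t) at x = 0 and du/dn = 0 at x = L;
    # returns the Dirichlet trace u(L, t) at every time step.
    dx = L / nx
    dt = cfl * dx / c
    r2 = (c * dt / dx) ** 2
    u_old = [0.0] * (nx + 1)
    u = [0.0] * (nx + 1)
    trace = []
    t = 0.0
    for _ in range(int(round(T / dt))):
        u_new = [0.0] * (nx + 1)
        for j in range(1, nx):
            u_new[j] = 2 * u[j] - u_old[j] + r2 * (u[j + 1] - 2 * u[j] + u[j - 1])
        # Ghost-point Neumann conditions: u[-1] = u[1] - 2 dx f(t), u[nx+1] = u[nx-1].
        u_new[0] = 2 * u[0] - u_old[0] + r2 * (2 * u[1] - 2 * u[0] - 2 * dx * f(t))
        u_new[nx] = 2 * u[nx] - u_old[nx] + r2 * (2 * u[nx - 1] - 2 * u[nx])
        u_old, u = u, u_new
        t += dt
        trace.append((t, u[nx]))
    return trace

# Smooth Neumann pulse supported in t in (0, 0.1); the recorded trace is
# one column of a discrete Neumann-to-Dirichlet map.
pulse = lambda t: math.sin(math.pi * t / 0.1) ** 2 if 0.0 < t < 0.1 else 0.0
trace = neumann_trace(pulse)
```

The trace is causal (quiet before the travel time $L/c$ from source to receiver), which is the finite-speed-of-propagation structure the Boundary Control method exploits.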
Our approach to solve this inverse boundary value problem is based on the Boundary Control method that originates from [@Belishev1987]. There exists a large number of variants of the Boundary Control method in the theoretical literature, see e.g. the review [@Belishev2007], the monograph [@Katchalov2001], and the recent theoretical uniqueness [@Eskin2015; @Kurylev2015] and stability results [@Bosi2017]. We face an even wider array of possibilities when designing computational implementations of the method. Previous computational studies of the method include [@Belishev1999; @Pestov2010] and the recent work [@Belishev2016].
Motivated by applications to seismic imaging, we are particularly interested in the problem with partial data, that is, the case $\Rec \ne \p M$. All known variants of the Boundary Control method that work with partial data require solving ill-posed control problems, and this appears to form the bottleneck of the resolution of the method. In this paper we consider this issue from two perspectives: we show that the steps of the method, apart from solving the control problems, are stable; and present a computational implementation of the method with a regularization for the control problems.
In addition to the above isotropic problem with the scalar speed of sound $c$, we consider an anisotropic problem and a variation where the data is given by the Dirichlet-to-Neumann map rather than the Neumann-to-Dirichlet map, see the definitions (\[wave\_eq\_aniso\]) and (\[DtoN\]) below. We propose a computational method to reduce the anisotropic inverse boundary value problem to a problem with data in the interior of $M$. Analogously to elliptic inverse problems with internal data [@Bal2013], this hyperbolic internal data problem may be of independent interest, and we show a Lipschitz stability result for the problem under a geometric assumption. We show the correctness of our method without additional geometric assumptions (Proposition \[prop:computingG\]), but for the stability of the internal data problem in the anisotropic case we require an additional convexity condition to be satisfied (Theorem \[th\_main\]).
Our computational approach in the isotropic case combines two techniques that have been successfully used in the previous literature. To solve the ill-posed control problems, we use the regularized optimization approach that originates from [@Bingham2008]. This is combined with the use of the eikonal equation as in the previous computational studies [@Belishev1999; @Belishev2016]. The main difference between [@Belishev1999; @Belishev2016] and the present work is that in [@Belishev1999; @Belishev2016] the ill-posed control problems, and the subsequent reconstruction of internal information (see Section \[sec:interior\] below), are implemented using the so-called wave bases rather than regularized optimization. Another distinction is that we do not rely upon the amplitude formula from geometric optics to extract internal information. Instead, we use the boundary data to construct sources that allow us to extract localized averages of waves and harmonic functions in the interior.
Our motivation to study the Boundary Control method comes from potential applications in seismic imaging. The prospect is that the method could provide a good initial guess for the local optimization methods currently in use in seismic imaging. These methods suffer from the fact that they may converge to a local minimum of the cost function and thus fail to give the true solution to the imaging problem [@Symes2009]. On the other hand, the Boundary Control method is theoretically guaranteed to converge to the true solution, however, in practice, we need to give up resolution in order to stabilize the method. The numerical examples in this paper show that, when regularized suitably, the method can stably reconstruct smooth variations in the wave speed.
We reconstruct the wave speed only in a region near the measurement surface $\Rec$, since at least in theory, it is possible to iterate this procedure in a layer stripping fashion. The layer stripping alternates between the local reconstruction step as discussed in this paper and the so-called redatuming step that propagates the measurement data through the region where the wave speed is already known. We have developed the redatuming step computationally in [@Hoop2016a].
We will not attempt to give an overview of computational methods for coefficient determination problems for the wave equation that are not based on the Boundary Control method. However, we mention the interesting recent computational work [@Baudouin2016] that is based on the so-called Bukhgeim-Klibanov method [@Bukhgeuim1981]. We note that the Bukhgeim-Klibanov method uses different data from the Boundary Control method, requiring only a single instance of boundary values, but that it also requires that the initial data are non-vanishing. We mention also another reconstruction method that uses a single measurement [@Beilina2008; @Beilina2012]. This method is based on a reduction to a non-linear integro-differential equation, and there are several papers on how to solve this equation (or an approximate version of it), see [@Klibanov2015; @Klibanov2017] for recent results including computational implementations. Finally, we mention [@Kabanikhin2005] for a thorough comparison of several methods in the $1+1$-dimensional case.
Notation and techniques from the Boundary Control method
========================================================
The Boundary Control (BC) method is based on the geometrical aspects of wave propagation. These are best described using the language of Riemannian geometry, and in that spirit we define the isotropic Riemannian metric $g = c(x)^{-2} dx^2$ associated to the wave speed $c(x)$ on $M$. Put differently, in the Cartesian coordinates of $M$, the metric tensor $g$ is represented by $c(x)^{-2}$ times the identity matrix. Now the distance function of the Riemannian manifold $(M,g)$ encodes the travel times of waves between points in $M$, and singular wave fronts propagate along the geodesics of $(M,g)$.
We will also discuss the case of an anisotropic wave speed and the Dirichlet-to-Neumann (D-to-N) map. This means that $g$ is allowed to be an arbitrary smooth Riemannian metric on $M$, and we consider the wave equation $$\label{wave_eq_aniso}
\begin{array}{rcl}
\p_t^2 u - \Delta_g u &=& 0, \quad \textnormal{in $(0,\infty) \times M$}, \\
u|_{x \in \p M} &=& f, \\
u|_{t=0} = \p_t u|_{t=0} &=& 0
\end{array}$$ together with the map $$\label{DtoN}
\Lambda_{\Rec}^{2T} : f \mapsto -\partial_\nu u^f|_{(0,2T) \times
\Rec}, \quad f \in C_0^\infty((0,2T) \times \Rec).$$ Here $\Delta_g$ is the Laplace-Beltrami operator on the Riemannian manifold $(M,g)$, and $\nu$ is the inward pointing unit normal vector to $\p M$ with respect to the metric $g$. All the techniques in this section are the same for both the isotropic and anisotropic cases and for both the choices of data N-to-D and D-to-N. The negative sign is chosen in (\[DtoN\]) to unify formula (\[Blago\]) below between the two choices of data. We leave it to the reader to adapt the formulations for the isotropic case with D-to-N and the anisotropic case with N-to-D.
The BC method is based upon approximately solving control problems of the form, $$\label{abstract_cp}
\textnormal{find $f$ for which $u^f(T,\cdot) = \phi$}$$ where the target function $\phi \in L^2(M)$ belongs to an appropriate class of functions so that the problem can be solved without knowing the wave speed. One could call this problem a [*blind control problem*]{}. The earliest formulations of the BC method solved such control problems by applying a Gram-Schmidt orthogonalization procedure to the data. However, as noted in [@Bingham2008], this procedure may itself be ill-conditioned. As a result, regularization techniques were introduced to the BC method [@Bingham2008]. One issue that arises with this particular regularized approach to the BC method is that there is no explicit way to choose the target function $\phi$. Thus in [@Oksanen2011] a variation of the regularized approach was introduced, where the target functions $\phi$ were restricted to the set of characteristic functions of domains of influence. This technique uses global boundary data (i.e. $\Rec = \p M$) to construct boundary distance functions. In [@Hoop2016] we introduced a modification of [@Oksanen2011] that allowed us to localize the problem and work with partial boundary data (i.e. $\Rec \neq \p M$). There we also studied the method computationally up to the reconstruction of boundary distance functions.
It is well-known [@Kurylev1997] that the boundary distance functions can be used to determine the geometry (i.e. to determine the metric $g$ up to boundary fixing isometries). While several methods to recover the geometry from the boundary distance functions have been proposed [@deHoop2014; @Katchalov2001; @Katsuda2007; @Pestov2015], these have not been implemented computationally to our knowledge. It appears to us that, at least in the isotropic case, it is better to recover the wave speed directly without first recovering the boundary distance functions. In the next two sections, we will describe techniques that allow us to do so in both the isotropic and anisotropic cases. These will be based on the control problem setup from [@Hoop2016] and we will recall the setup in this section.
The difference between [@Hoop2016] and the present paper is that we do not use the sources $f$ solving the control problems of the form (\[abstract\_cp\]) to construct boundary distance functions, instead we will use them to recover information in the interior of $M$. In the anisotropic case, this information is the internal data operator that gives wavefields solving (\[wave\_eq\_aniso\]) in semi-geodesic coordinates.
Semi-geodesic coordinates and wave caps
---------------------------------------
We consider an open subset $\Gamma \subset \p M$ satisfying $$\{x \in \p M : d(x,\Gamma) \leq T\} \subset \Rec,$$ where $d$ denotes the Riemannian distance associated with $g$. We may replace $T$ by a smaller time to guarantee that there exists a non-empty $\Gamma$ satisfying this. In what follows we will only use the following further restriction of the N-to-D or D-to-N map $$\Lambda_{\Gamma,\Rec}^{2T} f = \Lambda_{\Rec}^{2T} f|_{(0,2T) \times
\Rec}, \quad f \in C_0^\infty((0,2T) \times \Gamma).$$
We now recall the definition of semi-geodesic coordinates associated to $\Gamma$. For $y \in \Gamma$, we define $\sigma_{\Gamma}(y)$ to be the maximal arc length for which the normal geodesic beginning at $y$ minimizes the distance to $\Gamma$. That is, letting $\gamma(s;y,v)$ denote the point at arc length $s$ along the geodesic beginning at $y$ with initial velocity $v$, and $\nu$ the inward pointing unit normal field on $\Gamma$, we define $$\sigma_{\Gamma}(y) := \max \{ s \in (0, \tau_M(y, \nu)]:\ d(\gamma(s; y,
\nu), \Gamma) = s\}.$$ We recall that $\sigma_{\Gamma}(y) > 0$ for $y \in \overline{\Gamma}$ (see e.g. [@Katchalov2001 p. 50]). Defining, $$\textnormal{$x(y,s) := \gamma(s;y,\nu)$ \quad for $y \in \Gamma$ and $0 \leq
s < \sigma_{\Gamma}(y)$,}$$ the mapping $$\Phi_g : \{(y,s) : y \in \Gamma \textnormal{ and } s \in [0, \sigma_\Gamma(y))\} \to M,$$ given by $\Phi_g(y,s) := x(y,s)$ is a diffeomorphism onto its image in $(M,g)$, and we refer to the pair $(y,s)$ as the semi-geodesic coordinates of the point $x(y,s)$. We note that the semi-geodesic “coordinates” that we have defined here are not strictly coordinates in the usual sense of the term, since they associate points in $M$ with points in $\R \times \Gamma$ instead of points in $\R^n$. To obtain coordinates in the usual sense, one must specify local coordinate charts on $\Gamma$. Denoting the local coordinates on $\Gamma$ associated with these charts by $(y^1,\ldots,y^{n-1})$, one can then define local semi-geodesic coordinates by $(y^1,\ldots,y^{n-1},s)$. We will continue to make this distinction, using the term “local” only when we need coordinates in the usual sense.
In both the scalar and anisotropic cases, our approach to recover interior information relies on computing localized averages of functions inside of $M$. One of the main components used to compute these averages is a family of sources that solve blind control problems with target functions of the form $\phi = 1_B$, where $B$ is a set known as a *wave cap*. The construction of these sources will be recalled below in Lemma \[lemma:approxConstControl\], but first we recall how wave caps are defined:
Let $y \in \Gamma$, $s,h > 0$ with $s + h <
\sigma_{\Gamma}(y)$. The *wave cap,* $\wavecap_\Gamma(y,s,h)$, is defined as: $$\wavecap_\Gamma(y,s,h) := \{x \in M : d(x,y) \leq s + h \textnormal{ and } d(x,\Gamma) \geq s\}$$ See Figure \[fig:waveCapGeom\] for an illustration.
We recall that, for all $h > 0$, the point $x(y,s)$ belongs to the set $\wavecap_\Gamma(y,s,h)$ and $\diam(\wavecap_\Gamma(y,s,h))
\rightarrow 0$ as $h \rightarrow 0$, (see e.g. [@Hoop2016]). So, when $h$ is small and $\phi$ is smooth, averaging $\phi$ over $\wavecap_\Gamma(y,s,h)$ yields an approximation to $\phi(x(y,s))$. These observations play a central role in our reconstruction procedures.
![ \[fig:waveCapGeom\] Geometry of a wave cap in the Euclidean case. In this case, Pythagoras’ theorem suffices to show that $\diam(\wavecap_{\Gamma}(y,s,h)) = \mathcal{O}(h^{1/2})$, but this is also true in general.](./draft_images/wave_cap_diameter_bound.eps){width="3.50in"}
Elements of the BC method
-------------------------
As mentioned above, the BC method involves finding sources $f$ for which $u^f(T,\cdot) \approx \phi$ for appropriate functions $\phi \in
L^2(M)$. To that end, we recall the *control map*, $$W : f \mapsto u^f(T,\cdot), \quad \textnormal{for $f \in L^2([0,T]\times
\Gamma)$},$$ and note that $W$ is a bounded linear operator $W : L^2([0,T] \times
\Gamma) \rightarrow L^2(M)$, see e.g. [@Katchalov2001]. We remark that the output of $W$ is a wave in the interior of $M$ and hence cannot be observed directly from boundary measurements alone. Using $W$, one defines the *connecting operator* $K := W^*W$. The adjoint here is defined with respect to the Riemannian volume measure in the anisotropic case, and with respect to the scaled Lebesgue measure $c^{-2}(x) dx$ in the isotropic case. We denote these measures by $\Vol$ in both cases. In particular, we recall that $K$ can be computed by processing the N-to-D or D-to-N map via the Blagovescenskii identity, see e.g. [@Liu2016]. That is, $$\label{Blago}
K = J \Lambda_{\Gamma}^{2T} \Theta - R \Lambda_{\Gamma}^{T} R J \Theta,$$ where $\Lambda_{\Gamma}^T f := (\Lambda_{\Gamma,\Rec}^T
f)|_{[0,T]\times\Gamma}$, $R f(t) := f(T -t)$ for $0 \leq t \leq T$, $J f(t) := \int_t^{2T-t} f(s)\,ds$, and $\Theta $ is the inclusion operator $\Theta : L^2([0,T] \times \Gamma) \hookrightarrow L^2([0,2T]
\times \Gamma)$ given by $\Theta f(t) = f(t)$ for $0 \leq t \leq T$ and $\Theta f(t) = 0$ otherwise. We remark that the Blagovescenskii identity shows that $K$ can be computed by operations that only involve manipulating the boundary data.
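For a single boundary node, sampling time uniformly turns (\[Blago\]) into a product of small matrices. The following numpy sketch (the discretization, including the left Riemann sum for $J$, is our own illustrative choice and not part of the method) assembles $K$ from a given discretized map:

```python
import numpy as np

def connecting_operator(Lam2T, T):
    """Assemble K = J*Lam^{2T}*Theta - R*Lam^T*R*J*Theta for a single
    boundary node, with 2N uniform time samples on [0, 2T] and sources
    supported on the first N samples, i.e. on [0, T].  Lam2T is the
    discretized N-to-D map as a (2N x 2N) matrix (hypothetical input)."""
    n2 = Lam2T.shape[0]
    N = n2 // 2
    dt = 2.0 * T / n2
    Theta = np.zeros((n2, N))              # extension by zero to [0, 2T]
    Theta[:N, :N] = np.eye(N)
    R = np.fliplr(np.eye(N))               # time reversal f(t) -> f(T - t)
    J = np.zeros((N, n2))                  # J f(t) = int_t^{2T-t} f(s) ds,
    for i in range(N):                     # left Riemann sum
        J[i, i:n2 - i] = dt
    LamT = Lam2T[:N, :N]                   # Lam^T: input and output cut to [0, T]
    return J @ Lam2T @ Theta - R @ LamT @ R @ J @ Theta
```

In practice $\Lambda^{2T}$ would come from simulated or measured boundary data; the sketch only illustrates that $K$ is obtained by manipulating that data alone.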
We recall some mapping properties of $W$ that follow from finite speed of propagation for the wave equations (\[wave\_eq\_iso\]) and (\[wave\_eq\_aniso\]). Let $\tau : \overline{\Gamma} \rightarrow
[0,T]$ be continuous, and define $S_\tau := \{(t,y) \in [0,T] \times \Gamma : T - \tau(y) \leq t \leq
T\}$. Then, finite speed of propagation implies that if $f$ is a boundary source supported in $S_\tau$, the wavefield $u^f(T,\cdot)$ will be supported in the domain of influence $M(\tau)$, defined by $$M(\tau) := \{ x \in M : d(x,y) < \tau(y) \textnormal{ for some
$y \in \Gamma$}\}.$$ In turn, this implies that $W$ satisfies $W : L^2(S_\tau) \rightarrow
L^2(M(\tau))$. So, if we define $P_\tau : L^2([0,T]\times\Gamma)
\rightarrow L^2(S_\tau)$ to be the restriction of sources to $S_\tau$, then we can define a restricted control map $W_\tau := W P_\tau$, which satisfies $W_\tau : L^2(S_\tau)
\rightarrow L^2(M(\tau))$. The point here is that, although we do not have access to the output of $W_\tau$, we know that the waves will be supported in the domain of influence $M(\tau)$. We also define the restricted connecting operator $K_\tau := (W_\tau)^*W_\tau = P_\tau K
P_\tau$, and note that $K_\tau$ can be computed by first computing $K$ via (\[Blago\]) and then applying the operator $P_\tau$.
To construct sources that produce approximately constant wavefields on wave caps, we use a procedure from [@Hoop2016]. This procedure uses the fact that a wave cap can be written as the difference of two domains of influence, and requires that distances between boundary points are known. Specifically, we will suppose that for any pair $x,y
\in \Gamma$ the distance $d(x,y)$ is known. As noted in [@Hoop2016], this is not a major restriction, since these distances can be constructed from the data $\Lambda_{\Gamma}^{2T}$. Then, using this collection of distances, we define a family of functions $\tau_y^R : \overline{\Gamma} \rightarrow
\R_+$ by: $$\textnormal{for $y \in \overline{\Gamma}$ and $R > 0$, define
$\tau_y^R(x) := (R - d(x,y)) \vee 0$.}$$ Here we use the notation $\phi \vee \psi$ to denote the point-wise maximum between $\phi$ and $\psi$, and we will continue to use this notation below. Finally, one can show that $\wavecap_\Gamma(y,s,h) =
\overline{M(\tau_y^{s+h} \vee s1_\Gamma) \setminus M(s1_\Gamma)}$. We also note that, since $\p M(\tau)$ has measure zero provided that $\tau$ is continuous on $\overline{\Gamma}$ [@Oksanen2011], one has that $1_{\wavecap_\Gamma(y,s,h)} = 1_{M(\tau_y^{s+h} \vee s1_\Gamma)} -
1_{M(s1_\Gamma)}$ a.e.
The following lemma is an amalgamation of results from [@Hoop2016], and shows that there is a family of sources $\psi_{h,\alpha}$ which produce approximately constant wavefields $u^{\psi_{h,\alpha}}(T,\cdot)$ on wave caps, and that these sources can be constructed from the boundary data $\Lambda_{\Gamma,\Rec}^{2T}$.
\[lemma:approxConstControl\] Let $y \in \Gamma$, $s,h > 0$ with $s + h < \sigma_{\Gamma}(y)$. Let $\tau_1 = s 1_\Gamma$ and $\tau_2 = \tau_{y}^{s+h} \vee
s1_\Gamma$. Define $b(t,y) := T - t$, and let $\widetilde{b} = b$ in the Neumann case, and $\widetilde{b} = (\Lambda_{\Gamma,\Rec}^T)^*b$ in the Dirichlet case. Then, for each $\alpha > 0$, let $f_{\alpha,i} \in
L^2(S_{\tau_i})$ be the unique solution to $$\label{eqn:ControlProblem}
\left(K_{\tau_i} + \alpha\right) f = P_{\tau_i} \widetilde{b}.$$ Define, $$\label{eqn:definePsi}
\psi_{h,\alpha} = f_{\alpha,2} - f_{\alpha,1}.$$ Using the notation $B_h = \wavecap_\Gamma(y,s,h)$, it holds that $$\label{eqn:limitingExpression}
\lim_{\alpha \rightarrow 0} u^{\psi_{h,\alpha}}(T,\cdot) = 1_{B_h}
\textnormal{\quad and \quad}
\lim_{\alpha \rightarrow 0} \langle \psi_{h,\alpha}, P_{\tau_2} b\rangle_{L^2(S_{\tau_2})} = \Vol(B_h).$$
We briefly sketch the proof of Lemma \[lemma:approxConstControl\]. The main idea is to approximately solve the blind control problem (\[abstract\_cp\]) with $\phi \equiv
1$ over the spaces $L^2(S_{\tau_i})$ for $i=1,2$. To accomplish this, for $i=1,2$, one can consider a Tikhonov regularized version of (\[abstract\_cp\]) depending upon a small parameter $\alpha >
0$. Then, letting $f_{\alpha,i}$ denote the minimizer of the associated Tikhonov functional for $\alpha > 0$, one can obtain $f_{\alpha,i}$ by solving this functional's normal equation, given by (\[eqn:ControlProblem\]). Note that all of the terms defining $f_{\alpha,i}$ in (\[eqn:ControlProblem\]) can be computed in terms of the boundary data, so $f_{\alpha,i}$ can be obtained without knowing the wave speed or metric. Appealing to properties of Tikhonov minimizers, one can then show that $Wf_{\alpha,i} \rightarrow
1_{M(\tau_i)}$ as $\alpha \rightarrow 0$, and hence $W\psi_{h,\alpha}
= Wf_{\alpha,2} - Wf_{\alpha,1} \rightarrow 1_{M(\tau_2)} -
1_{M(\tau_1)} = 1_{\wavecap_\Gamma(y,s,h)}$, where each limit and equality holds in the $L^2$ sense.
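In a discretization, solving (\[eqn:ControlProblem\]) for $i = 1,2$ and forming (\[eqn:definePsi\]) amounts to two regularized linear solves. A minimal numpy sketch, assuming the discretized $K$ and $\widetilde{b}$ and boolean masks encoding $S_{\tau_1}$ and $S_{\tau_2}$ are given (all names are ours):

```python
import numpy as np

def control_source(K, b, mask1, mask2, alpha):
    """Regularized blind control: solve (K_tau_i + alpha) f_i = P_tau_i b
    for i = 1, 2 and return psi = f_2 - f_1.  Here K and b are the
    discretized connecting operator and source, and mask_i are boolean
    arrays marking the time-boundary samples lying in S_tau_i (the
    discretization choices are ours)."""
    f = []
    for mask in (mask1, mask2):
        P = np.diag(mask.astype(float))    # projection P_tau onto S_tau
        K_tau = P @ K @ P
        # Outside the mask the system reduces to alpha*f = 0, so the
        # solution is automatically supported in S_tau.
        f.append(np.linalg.solve(K_tau + alpha * np.eye(len(b)), P @ b))
    return f[1] - f[0]
```

For small $\alpha$ these systems are ill-conditioned, which is the instability discussed in the introduction; $\alpha$ trades resolution for stability.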
Recovery of information in the interior {#sec:interior}
=======================================
Propositions \[prop:wvfldReconstr\] and \[prop:harmonicReconstr\] below can be viewed as variants of Corollaries 1 and 2 in [@Bingham2008], the difference being that we use the control problem setup discussed in the previous section. One advantage of this setup is that we do not need to make the auxiliary assumption that the limit (14) in [@Bingham2008] is non-zero.
Wave field reconstruction in the anisotropic case
-------------------------------------------------
We begin with reconstruction of wavefields sampled in semi-geodesic coordinates, as encoded by the following map.
Let $(y,s)\in\domain(\Phi_g)$ and $f \in L^2([0,T]\times\Gamma)$. The map $L_g : L^2([0,T]\times\Gamma) \rightarrow
L^2(\domain(\Phi_g))$ is defined pointwise by $$L_g f(y,s) := u^f(T,x(y,s)).$$
We now show that $L_g$ can be computed from the N-to-D map.
\[prop:wvfldReconstr\] Let $f \in C_0^\infty([0,T] \times \Gamma)$. Let $y
\in \Gamma$ and $s,h > 0$ with $s+h < \sigma_\Gamma(y)$ and $h$ sufficiently small. The family of sources $\{\psi_{h,\alpha}\}_{\alpha > 0}$ given in Lemma \[lemma:approxConstControl\] satisfies $$\label{eqn:accuracyOfApprox}
\lim_{\alpha \rightarrow 0} \frac{\langle \psi_{h,\alpha}, K f \rangle_{L^2([0,T]\times\Gamma)} }{ \langle \psi_{h,\alpha}, P_{\tau_2} b \rangle_{L^2([0,T]\times\Gamma)} } =
u^{f}(T,x(y,s)) + \mathcal{O}(h^{1/2}).
Applying Lemma \[lemma:approxConstControl\], we have that $$\lim_{\alpha \rightarrow 0} \frac{\langle \psi_{h,\alpha}, K f \rangle_{L^2(S_{\tau_2})}}{\langle \psi_{h,\alpha}, P_{\tau_2} b \rangle_{L^2(S_{\tau_2})}} =
\frac{\lim_{\alpha \rightarrow 0} \langle W \psi_{h,\alpha}, W f \rangle_{L^2(M)}}{ \lim_{\alpha \rightarrow 0} \langle \psi_{h,\alpha}, P_{\tau_2} b \rangle_{L^2(S_{\tau_2})}}
= \frac{\langle 1_{B_h}, u^f(T,\cdot) \rangle_{L^2(M)}}{\Vol(B_h)}.$$ Thus it suffices to show that $$\langle 1_{B_h}, u^f(T,\cdot) \rangle = \Vol(B_h) u^{f}(T,x(y,s)) + \Vol(B_h) \mathcal{O}(h^{1/2}).$$
Suppose that $h$ is sufficiently small that $B_h$ is contained in the image of a coordinate chart $(p, U)$ (that is, we use the convention that $p : U \subset \R^n \rightarrow p(U) \subset
M$). We denote the coordinates on this chart by $(x^1,\ldots,x^n)$, and also suppose that $x(y,s)$ corresponds to the origin in this coordinate chart. Since $f$ is $C_0^\infty$, it follows that $u^f$ is smooth. Thus we can Taylor expand $u^f(T,\cdot)$ in coordinates about $x(y,s) \in B_h$, giving $$\qquad u^f(T, x^1,\ldots,x^n) = u^f(T, 0,\ldots,0) + \partial_i u^f(T, 0,\ldots,0) x^i + \sum_{|\beta| = 2} R_\beta(x^1,\ldots,x^n) x^{\beta},$$ where $R_\beta$ is bounded by the $C^2$ norm of $u^f(T,x^1,\ldots,x^n)$ (i.e. of $u^f(T,\cdot)$ in coordinates), on any compact neighborhood $K$ satisfying $0 \in K\subset U$. In particular, we choose $K$ such that $B_h \subset p(K)$ for $h$ sufficiently small. Combining these expressions and using that $x(y,s)$ corresponds to $0$ in $U$, $$\begin{aligned}
\left|\langle 1_{B_h}, u^f(T,\cdot) \rangle_{L^2(M)} - \Vol(B_h) u^f(T, x(y,s))\right|& \\
\leq C \int_{p^{-1}(B_h)} |\partial_i u^f(T, 0,\ldots,0) x^i| &+ \sum_{|\beta| = 2} |R_\beta(x^1,\ldots,x^n) x^{\beta_1} x^{\beta_2}| \,dx^1 \cdots dx^n
\end{aligned}$$ Then for points $p(x) \in M$ with coordinates $x \in U$ sufficiently close to $0$, there exist constants $g_*, g^*$ such that $g_* |x|_e
\leq d(p(x),0) \leq g^* |x|_e$, where $|x|_e$ denotes the Euclidean length of the coordinate vector $x$ in $\R^n$. So, let $x =
(0,\ldots,x^i,\ldots,0)$, then note that $|x^i| = |x|_e \leq (1/g_*) d(0, p(x))
\leq (1/g_*) \diam(B_h)$. Thus, for $h$ sufficiently small, $$\begin{aligned}
|\langle 1_{B_h}, u^f(T,\cdot) \rangle_{L^2(M)} - \Vol(B_h) u^f(T, x(y,s))|& \\
\leq \|u^f\|_{C^1(K)} C\diam(B_h) \Vol(B_h) &+ \|u^f\|_{C^2(K)} (C\diam(B_h))^2 \Vol(B_h)
\end{aligned}$$ Finally, the discussion in [@Bingham2008] implies that $\diam(B_h) = \mathcal{O}(h^{1/2})$, which completes the proof.
\[corr:wvfldReconstr\] For each $f \in C_0^\infty([0,T]\times\Gamma)$, $L_g f$ can be determined pointwise by taking the limit as $h \rightarrow 0$ in (\[eqn:accuracyOfApprox\]). Since $C_0^\infty([0,T]\times\Gamma)$ is dense in $L^2([0,T]\times\Gamma)$ and $L_g$ is bounded on $L^2([0,T]\times\Gamma)$, we have that $L_g f$ is determined for all $f
\in L^2([0,T]\times\Gamma)$.
First, let $f \in C_0^\infty([0,T]\times\Gamma)$. Taking the limit as $h \rightarrow 0$ in the preceding lemma shows that $L_g f(y,s)$ can be computed for any pair $(y,s) \in \domain(\Phi_g)$, and thus $L_g f$ can be determined in semi-geodesic coordinates.
Now we show that $L_g f$ can be determined for any $f \in
L^2([0,T]\times\Gamma)$. First we recall that $L_g f = \Phi_g^*
Wf$. Since the pull-back operator $\Phi_g^*$ just composes a function with a diffeomorphism, and $\overline{\Gamma}$ is compact, we have that $\Phi_g^*$ is bounded as an operator $\Phi_g^* :
L^2(\range(\Phi_g)) \rightarrow L^2(\domain(\Phi_g))$. Thus $L_g$ is a composition of bounded operators, and hence $L_g : L^2([0,T]\times
\Gamma) \rightarrow L^2(\domain(\Phi_g))$ is bounded. Let $f \in
L^2([0,T]\times\Gamma)$ be arbitrary. Since $C_0^\infty([0,T]\times\Gamma)$ is dense in $L^2$ one can find a sequence $\{f_j\}_{j=1}^\infty \subset
C_0^\infty([0,T]\times\Gamma)$ such that $f_j \rightarrow f$. Then, since $L_g$ is bounded, $L_g f = \lim_{j \rightarrow \infty} L_g f_j$.
Coordinate transformation reconstruction in the isotropic case
--------------------------------------------------------------
The map $\Lambda_{\p M}^T$ is invariant under diffeomorphisms that fix the boundary of $M$, and therefore in the anisotropic case it is not possible to compute $g$ in the Cartesian coordinates. The same is true for the wavefields. In the isotropic case, on the other hand, it is possible to compute the map $\Phi_g(y,s)$, and in fact, the wave speed was determined in Belishev’s original paper [@Belishev1987] by first showing that the internal data $u^f(t,x)$ can be recovered in the Cartesian coordinates, and then using the identity $$\frac{\Delta u(t,x)}{\p_t^2 u(t,x)} = c^{-2}(x).$$ It was later observed that the wave speed can be recovered directly from the map $\Phi_g$ without using information on the wavefields in the interior, see e.g. [@Belishev1999; @Bingham2008]. In the present paper we will compute $\Phi_g(y,s)$ by applying the following lemma to the Cartesian coordinate functions.
\[prop:harmonicReconstr\] Suppose that $g$ is isotropic, that is, $g = c^{-2}(x) dx^2$. Let $\phi \in C^\infty(M)$ be harmonic, that is, $\Delta \phi = 0$. Let $y \in \Gamma$ and $s,h > 0$ with $s+h <
\sigma_\Gamma(y)$. Then, for $h$ small, the family of sources $\{\psi_{h,\alpha}\}_{\alpha > 0}$ given in Lemma \[lemma:approxConstControl\] satisfies $$\label{eqn:accuracyOfApproxHarm}
\lim_{\alpha \rightarrow 0} \frac{ B(\psi_{h,\alpha}, \phi) }{ B(\psi_{h,\alpha}, 1) } =
\phi(x(y,s)) + \mathcal{O}(h^{1/2}),$$ where $$\label{define_B}
B(f,\phi) = \langle f, b
\phi \rangle_{L^2([0,T]\times\Gamma; dy)} -
\langle\Lambda^T_{\Gamma,\Rec} f, b\p_\nu \phi
\rangle_{L^2([0,T]\times\Rec; dy)},$$ where $b(t) = T - t$.
The proof is analogous to that of Proposition \[prop:wvfldReconstr\] after observing that $$\lim_{\alpha \rightarrow 0} \frac{B(\psi_{h,\alpha}, \phi)}{ B(\psi_{h,\alpha}, 1)}
= \frac{\langle 1_{B_h}, \phi \rangle_{L^2(M;c^{-2} dx)}}{\langle 1_{B_h}, 1 \rangle_{L^2(M;c^{-2} dx)}}.$$ To see this, it suffices to show that for $\phi$ harmonic and $f \in
L^2([0,T]\times\Gamma)$, $$\label{eqn:Bintermed}
B(f,\phi) = \langle u^f(T),\phi\rangle_{L^2(M; c^{-2} dx)},$$ since then $$\lim_{\alpha \rightarrow 0} B(\psi_{h,\alpha},\phi) = \lim_{\alpha \rightarrow 0} \langle u^{\psi_{h,\alpha}}(T),\phi\rangle_{L^2(M; c^{-2} dx)} = \langle 1_{B_h}, \phi \rangle_{L^2(M;c^{-2} dx)}.$$ This expression holds, in particular, for the special case that $\phi
\equiv 1$, since constant functions are harmonic.
The demonstration of (\[eqn:Bintermed\]) is known, and is based upon the following computation, $$\begin{array}{rl}
\p_t^2 \langle u^f(t), \phi\rangle_{L^2(M; c^{-2} dx)} &= \langle \Delta u^f(t), \phi\rangle_{L^2(M; dx)} - \langle u^f(t), \Delta \phi\rangle_{L^2(M; dx)}\\
&= \langle f(t), \phi\rangle_{L^2(\p M; dy)} - \langle\Lambda f(t), \p_\nu \phi\rangle_{L^2(\p M; dy)},
\end{array}$$ where we have written $\Lambda f = u^f|_{\p M}$. Thus, the map $t
\mapsto \langle u^f(t), \phi \rangle$ satisfies an ordinary differential equation with vanishing initial conditions, since $u^f(0)
= \p_t u^f(0) = 0$. Solving this differential equation and evaluating the result at $t = T$, we get an explicit formula for $\langle u^f(T),
\phi\rangle$ depending upon $f$ and $\Lambda f$: $$\label{eqn:harmonicBlagoType}
\langle u^f(T), \phi \rangle_{L^2(M; c^{-2} dx)} = \langle f, b
\phi \rangle_{L^2([0,T]\times\Gamma; dy)} -
\langle\Lambda^T_{\Gamma,\Rec} f, b\p_\nu \phi
\rangle_{L^2([0,T]\times\Rec; dy)}.$$ This completes the demonstration of (\[eqn:Bintermed\]). Notice that we only require $\Lambda f|_\Rec$, since, for $t \in [0,T]$, $\Lambda f(t)$ vanishes outside of $\Rec$ by finite speed of propagation. An analogous derivation can be found in [@Liu2016] (with full boundary measurements and the D-to-N map instead of the N-to-D map).
As in Corollary \[corr:wvfldReconstr\], letting $h \to 0$, we see that the map $$H_c : \{\phi \in C^\infty(M);\ \Delta \phi = 0\} \to C^\infty(\domain(\Phi_g)), \quad
H_c \phi(y,s) = \phi(\Phi_g(y,s)),$$ can be computed from the N-to-D map, where $g = c^{-2}(x) dx^2$. To see this, first recall that $\Phi_g(y,s) := \gamma(s; y, \nu)$. Since $\gamma(\cdot;y,\nu)$ is a geodesic and $\nu$ is a unit-length vector with respect to the metric $g$, we have that $|\p_s \Phi_g(y,s)|_g = 1$. Then, recall that for $x \in M$ and $v \in T_x M$, the length $|v|_g$ is computed by $
|v|_g^2 = c(x)^{-2} |v|_e^2,
$ where $|v|_e$ is the Euclidean length of $v$. Writing $x^j$, $j=1,\dots,n$, for the Cartesian coordinate functions on $M$, it follows that $$\label{scheme_isotropic}
\hspace{-.5cm}\Phi_g(y,s) = (H_c x^1(y,s), \dots, H_c x^n(y,s)),
\quad c(\Phi_g(y,s))^2 = |\p_s \Phi_g(y,s)|_e^2.$$ Thus $c$ can be computed in the Cartesian coordinates by inverting the first function above and composing the inverse with the second function. We will show in Section \[sec\_stability\] that this simple inversion method is stable.
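The scheme (\[scheme\_isotropic\]) can be illustrated on a toy example: for a constant speed $c_0$ (our own choice) with $\Gamma$ on the line $x^2 = 0$, the normal $g$-geodesics are $\Phi_g(y,s) = (y, c_0 s)$, and differentiating the sampled map in $s$ returns the speed. A numpy sketch:

```python
import numpy as np

# Toy check of c(Phi_g(y,s))^2 = |d/ds Phi_g(y,s)|_e^2 for a constant
# speed c0 (our choice) with Gamma on the line x^2 = 0, where the inward
# normal g-geodesics are Phi_g(y, s) = (y, c0*s).
c0 = 2.0
y = np.linspace(-1.0, 1.0, 21)
s = np.linspace(0.0, 0.5, 51)
Y, S = np.meshgrid(y, s, indexing='ij')
Phi = np.stack([Y, c0 * S], axis=-1)        # sampled map, shape (ny, ns, 2)

dPhi_ds = np.gradient(Phi, s, axis=1)       # finite differences in s
c_rec = np.linalg.norm(dPhi_ds, axis=-1)    # recovered speed at Phi_g(y, s)
```

In a genuine reconstruction $\Phi_g$ would itself be the output of the first formula in (\[scheme\_isotropic\]), computed from the boundary data; only the differentiation step is sketched here.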
The recovery of the internal information encoded by $L_g$ and $H_c$ is the most unstable part of the Boundary Control method as used in this paper. The convergence with respect to $h$ is sublinear as characterized by (\[eqn:accuracyOfApprox\]) and (\[eqn:accuracyOfApproxHarm\]), and the convergence with respect to $\alpha$ is even worse. In general we expect it to be no better than logarithmic. The recent results [@Bosi2016; @Laurent2015] prove logarithmic stability for related control and unique continuation problems, and [@Hoop2016] describes how the instability shows up in numerical examples.
Recovery of the metric tensor
=============================
Due to the diffeomorphism invariance discussed above, we cannot recover $g$ in the Cartesian coordinates and it is natural to recover $g$ in the semi-geodesic coordinates. This is straightforward in theory when the internal information $L_g$ is known, and analogously to the elliptic inverse problems with internal data [@Bal2013], we expect that the problem has good stability properties when suitable sources $f$ are used. We will next describe a way to choose the sources by using an optimization technique. This technique is not stable in general, but as shown in the next section, stability holds under suitable convexity assumptions.
In any local coordinates $(x^1, \dots, x^n)$, $$\label{eqn:getMetricFromLaplacians}
g^{lk}(x) = \frac{1}{2}\left(\Delta_{g}(x^l x^k) - x^k \Delta_{g}x^l - x^l \Delta_{g}x^k\right).$$
[*Proof.*]{} Let $(x^1,\ldots,x^n)$ be local coordinates on $M$. Write $\alpha := \sqrt{\det g}$. Then $$\begin{aligned}
\Delta_{g}(x^l x^k) &= \frac{1}{\alpha} \p_i\left(\alpha g^{ij} \p_j (x^l x^k)\right) \\
&= \frac{1}{\alpha} \left(\p_i\left(\alpha g^{il} x^k\right) + \p_i\left(\alpha g^{ik} x^l\right) \right) \\
&= g^{kl} + x^k \frac{1}{\alpha} \p_i\left(\alpha g^{il}\right) + g^{lk} + x^l \frac{1}{\alpha} \p_i\left(\alpha g^{ik}\right) \\
&= 2 g^{lk} + x^k \Delta_{g}x^l + x^l \Delta_{g}x^k. \qquad \textnormal{\qed}\end{aligned}$$
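The identity (\[eqn:getMetricFromLaplacians\]) can also be sanity-checked symbolically; the sympy sketch below verifies it for an arbitrary diagonal test metric of our own choosing in dimension two:

```python
import sympy as sp

# Symbolic check of g^{lk} = (1/2)(Delta_g(x^l x^k) - x^k Delta_g x^l
#                                   - x^l Delta_g x^k)
# for a diagonal test metric (our choice) in dimension 2.
x1, x2 = sp.symbols('x1 x2')
xs = (x1, x2)
g = sp.diag(sp.exp(2 * x1), sp.exp(2 * x2))
ginv = g.inv()
s = sp.sqrt(g.det())

def laplace_beltrami(phi):
    # Delta_g phi = (1/sqrt(det g)) d_i (sqrt(det g) g^{ij} d_j phi)
    return sp.simplify(sum(sp.diff(s * ginv[i, j] * sp.diff(phi, xs[j]), xs[i])
                           for i in range(2) for j in range(2)) / s)

for l in range(2):
    for k in range(2):
        rhs = sp.Rational(1, 2) * (laplace_beltrami(xs[l] * xs[k])
              - xs[k] * laplace_beltrami(xs[l])
              - xs[l] * laplace_beltrami(xs[k]))
        assert sp.simplify(rhs - ginv[l, k]) == 0
```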
\[prop:computingG\] The metric $g$ can be constructed in local semi-geodesic coordinates using the operator $L_g$ as data.
Let $\Omega = \range(\Phi_g)$, and $\omega \subset
\Omega$ be a coordinate neighborhood for the semi-geodesic coordinates. Let $(x^1,\ldots,x^n)$ denote local semi-geodesic coordinates on $\omega$. Fix $1\leq j,k \leq n$ and for $\ell = 1,2,3$ choose $\phi^\ell \in
C_0^\infty(\Omega)$ such that for all $x \in
\omega$, $$\label{def_phi_ell}
\phi^1(x) = x^j x^k, \quad \phi^2(x) = x^j, \quad \phi^3(x) = x^k.$$ Consider the following Tikhonov regularized problem: for $\alpha > 0$ find $f \in L^2([0,T]\times
\Gamma)$ minimizing $$\|L_g f - \phi^\ell\|_{L^2(\Omega)}^2 + \alpha \|f\|_{L^2([0,T]\times\Gamma)}^2.$$ It is a well known consequence of [@Tataru1995], see e.g. [@Katchalov2001], that $L_g$ has dense range in $L^2(\Omega)$. Thus this problem has a minimizer $f_{\alpha,\ell}$ which can be obtained as the unique solution to the normal equation, see e.g. [@Kirsch2011 Th. 2.11], $$\label{scheme_anisotropic}
(L_g^*L_g + \alpha) f = L_g^* \phi^\ell.$$ It follows from [@Oksanen2013 Lemma 1] that the minimizers satisfy $$\lim_{\alpha \rightarrow 0} L_g f_{\alpha,\ell} = \phi^{\ell}.$$ As the wave equation (\[wave\_eq\_aniso\]) is translation invariant in time, we have $L_g \p_t^2 f =
\Delta_g u^f(T, \cdot)$, and therefore $$\begin{aligned}
\lim_{\alpha\rightarrow0}\|L_g \p_t^2 f_{\alpha,\ell} - \Delta_g \phi^\ell\|_{H^{-2}(\Omega)} &= \lim_{\alpha\rightarrow0}\|\Delta_g(u^{f_{\alpha,\ell}}(T,\cdot) - \phi^\ell)\|_{H^{-2}(\Omega)}\\ &\le C \lim_{\alpha\rightarrow0}\|u^{f_{\alpha,\ell}}(T,\cdot) - \phi^\ell\|_{L^2(\Omega)} = 0.
\end{aligned}$$ Thus for $\ell = 1,2,3$, $L_g \p_t^2 f_{\alpha,\ell} \rightarrow
\Delta_g\phi^\ell$ in the $H^{-2}(\Omega)$ sense. Using expression (\[eqn:getMetricFromLaplacians\]), and recalling the definitions of the target functions $\phi^{\ell}$, then in the local coordinates on $\omega$ we have $$\label{scheme_anisotropic_step2}
g^{jk} = \lim_{\alpha \rightarrow 0} \frac{1}{2}(L_g\p_t^2{f_{\alpha,1}} - x^k L_g\p_t^2 f_{\alpha,2} - x^j L_g\p_t^2 f_{\alpha,3}),$$ where the convergence is in $H^{-2}(\omega)$. Finally, since $\Omega$ can be covered with coordinate neighborhoods such as $\omega$, this argument can be repeated to determine $g^{lk}$ in any local semi-geodesic coordinate chart.
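The behaviour of the regularized scheme (\[scheme\_anisotropic\]) as $\alpha \to 0$ can be illustrated with a finite-dimensional surrogate: a matrix of full row rank stands in for the densely-ranged operator $L_g$. The matrix and data below are random stand-ins, not wave-equation quantities.

```python
# Finite-dimensional surrogate for (L_g* L_g + α) f = L_g* φ: for a
# surjective matrix A (dense range), the Tikhonov minimizers satisfy
# A f_α → φ as α → 0.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 12))     # 5×12: full row rank a.s., hence onto
phi = rng.standard_normal(5)

def tikhonov_minimizer(alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ phi)

residuals = [np.linalg.norm(A @ tikhonov_minimizer(al) - phi)
             for al in (1e-1, 1e-3, 1e-6)]
assert residuals[0] > residuals[1] > residuals[2]   # residual shrinks with α
assert residuals[-1] < 1e-4                         # A f_α → φ
```

In the limit the minimizer is also the minimum-norm solution, mirroring the pseudoinverse $W_g^\dagger$ appearing in the controllability discussion of the next section.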
On stability of the reconstruction from internal data {#sec_stability}
=====================================================
When discussing stability near the set $\Gamma$, we will restrict our attention to $\Omega \subset
M$ and a set $\mathcal G$ of smooth Riemannian metrics on $M$ for which $$\label{Omega_cond}
\overline{\Omega} \subset \Phi_{\tilde g}(\Gamma \times [0, r_0)), \quad \tilde g \in \mathcal G,$$ where $r_0 > 0$ is fixed.
We begin by showing the following consequence of the implicit function theorem.
\[lem\_Phi\_inv\] Let $U \subset \R^n$ be open and let $\Phi_0 : \overline U \to \R^n$ be continuously differentiable. Let $p_0 \in U$ and suppose that the derivative $D\Phi_0$ is invertible at $p_0$. Then there are neighbourhoods $W \subset \R^n$ of $\Phi_0(p_0)$ and $\mathcal U \subset C^1(\overline U)$ of $\Phi_0$ such that $${\left\|\Phi^{-1} - \Phi_0^{-1} \right\|}_{C^0(\overline W)}
\le C {\left\|\Phi - \Phi_0 \right\|}_{C^1(\overline U)},
\quad \Phi \in \mathcal U.$$
Define the map $$F : C^1(\overline U) \times \R^n \times \R^n \to \R^n, \quad
F(\Phi, q, p) = \Phi(p) - q.$$ Then $F$ is continuously differentiable, and $
D_p F(\Phi_0, q, p_0) = D \Phi_0(p_0).
$ Thus the implicit function theorem, see e.g. [@Lang1983 Th 6.2.1], implies that there are neighbourhoods $V, W' \subset \R^n$ of $p_0, \Phi_0(p_0)$ and $\mathcal U' \subset C^1(\overline U)$ of $\Phi_0$, and a continuously differentiable map $H : \mathcal U' \times W' \to V$ such that $F(\Phi, q, H(\Phi, q)) = 0$. But this means that $H(\Phi, \cdot) = \Phi^{-1}$ in $W'$. Choose a neighbourhood $W$ of $\Phi_0(p_0)$ such that $\overline W \subset W'$ and that $\overline W$ is compact. As $H$ is continuously differentiable, there is a neighbourhood $\mathcal U \subset \mathcal U'$ of $\Phi_0$ such that $$|H(\Phi, q) - H(\Phi_0, q)| \le 2 \max_{q \in \overline W}{\left\|D_\Phi H(\Phi_0, q) \right\|}_{C^1(\overline U) \to \R^n}
{\left\|\Phi - \Phi_0 \right\|}_{C^1(\overline U)}, \quad \Phi \in \mathcal U.$$
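A one-dimensional numerical illustration of the lemma, with $\Phi_0(x) = x + x^3$ (so $D\Phi_0 \ge 1$ everywhere) and a hypothetical $C^1$-small perturbation; since $\Phi_0^{-1}$ is $1$-Lipschitz here, the constant in the estimate is of order one.

```python
# Check ‖Φ⁻¹ − Φ₀⁻¹‖_{C⁰(W)} ≤ C ‖Φ − Φ₀‖_{C¹} for a toy 1D perturbation.
import numpy as np
from scipy.optimize import brentq

phi0 = lambda x: x + x**3           # Φ₀, with DΦ₀ = 1 + 3x² ≥ 1 everywhere
eps = 1e-2
pert = lambda x: eps * np.sin(5.0 * x)
phi = lambda x: phi0(x) + pert(x)   # perturbed map, still monotone for small eps

c1_norm = 5.0 * eps                 # ‖pert‖_{C¹([-1,1])} = eps·max(1, 5) (max convention)

qs = np.linspace(-0.5, 0.5, 101)    # window W inside both images of [-1, 1]
inv_diff = max(abs(brentq(lambda x, q=q: phi(x) - q, -1.0, 1.0)
                   - brentq(lambda x, q=q: phi0(x) - q, -1.0, 1.0))
               for q in qs)

assert inv_diff <= 2.0 * c1_norm    # here C ≈ 1 suffices; 2 leaves slack
```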
We have the following stability result in the isotropic case.
\[th\_main\_iso\] Consider a family $\mathcal G$ of smooth isotropic metrics $\tilde g = \tilde c^{-2} dx^2$ satisfying (\[Omega\_cond\]). Let $c^{-2} dx^2 \in \mathcal G$ and suppose that $$\label{smallness_C1_iso}
{\left\|\tilde c - c \right\|}_{C^2(M)} \le \epsilon, \quad \tilde c^{-2} dx^2 \in \mathcal G.$$ Then for small enough $\epsilon > 0$, there is $C > 0$ such that $${\left\|\tilde c^2 - c^2 \right\|}_{C(\Omega)} \le C {\left\|H_{\tilde c} - H_c \right\|}_{C^1(M) \to C^1(\Gamma \times [0,r_0))}.$$
We write $\Sigma = \Gamma \times (0,r_0)$, $\tilde g = \tilde c^{-2} dx^2$ and $g = c^{-2} dx^2$. Then (\[scheme\_isotropic\]) implies that $$\begin{aligned}
{\left\|\Phi_{\tilde g} - \Phi_g \right\|}_{C^1(\Sigma)}
&\le& C {\left\|H_{\tilde c} - H_c \right\|}_{C^1(M) \to C^1(\Sigma)}.\end{aligned}$$ Moreover, again by (\[scheme\_isotropic\]), $$\begin{aligned}
{\left\|\tilde c^2 \circ \Phi_{\tilde g} - c^2 \circ \Phi_g \right\|}_{C^0(\Sigma)} &\le&
C {\left\|H_{\tilde c} - H_c \right\|}_{C^1(M) \to C^1(\Sigma)}.\end{aligned}$$ This together with $$\begin{aligned}
{\left\|\tilde c^2 - c^2 \right\|}_{C^0(\Omega)} &\le&
{\left\|\tilde c^2 \circ \Phi_{\tilde g}\circ \Phi_{\tilde g}^{-1} - c^2 \circ \Phi_g\circ \Phi_{\tilde g}^{-1} \right\|}_{C^0(\Omega)}
\\&&\quad+
{\left\|c^2 \circ \Phi_g\circ \Phi_{\tilde g}^{-1} - c^2 \circ \Phi_g\circ \Phi_g^{-1} \right\|}_{C^0(\Omega)}\end{aligned}$$ implies that it is enough to study ${\left\|\Phi_{\tilde g}^{-1}- \Phi_g^{-1} \right\|}_{C^0(\Omega)}$.
Note that $(\tilde g, y, s) \mapsto \Phi_{\tilde g}(y,s)$ is continuously differentiable since it is obtained by solving the ordinary differential equation that gives the geodesics with respect to $\tilde g$. Indeed, this follows from [@Lang1983 Th. 6.5.2] by considering the vector field $F$ that generates the geodesic flow. In any local coordinates, $F(x,\xi, h) = (\xi, f(x, \xi, h), 0)$ where $f = (f^1, \dots, f^n)$, $
f^j(x, \xi, h) = -\Gamma_{k\ell}^j(x,h)\xi^k \xi^\ell,
$ and $\Gamma_{k\ell}^j(x,h)$ are the Christoffel symbols of a metric tensor $h$ at $x$, that is, $$\Gamma_{k\ell}^j(x,h) =
\frac{1}{2}h^{jm} \left(\frac{\partial h_{mk}}{\partial x^\ell} + \frac{\partial h_{m\ell}}{\partial x^k} - \frac{\partial h_{k\ell}}{\partial x^m} \right).$$ In particular, if $\omega$ is a neighbourhood of $p_0 \in \Sigma$ and $\overline\omega \subset \Sigma$, then the map $\tilde c \mapsto \Phi_{\tilde g}$ is continuous from $C^2(M)$ to $C^1(\overline \omega)$. Thus, for small enough $\epsilon > 0$ in (\[smallness\_C1\_iso\]), we may apply Lemma \[lem\_Phi\_inv\] to obtain $${\left\|\Phi_{\tilde g}^{-1}- \Phi_g^{-1} \right\|}_{C^0(W)}
\le C {\left\|\Phi_{\tilde g} - \Phi_g \right\|}_{C^1(\Sigma)},$$ where $W$ is a neighbourhood of $\Phi_g(p_0)$. As $\overline \Omega$ is compact, it can be covered by a finite number of sets like the above set $W$. Thus $${\left\|\Phi_{\tilde g}^{-1}- \Phi_g^{-1} \right\|}_{C^0(\Omega)}
\le C {\left\|\Phi_{\tilde g} - \Phi_g \right\|}_{C^1(\Sigma)}
\le C {\left\|H_{\tilde c} - H_c \right\|}_{C^1(M) \to C^1(\Sigma)}.$$
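The geodesic flow used in the proof can also be integrated numerically to produce the semi-geodesic coordinate map $\Phi_g$. The sketch below assumes the isotropic case $g_{ij} = c^{-2}\delta_{ij}$ on the lower half-plane (the geometry of the computational experiment in the next section), for which the Christoffel symbols reduce to $\Gamma^j_{kl} = -(\delta^j_k \p_l c + \delta^j_l \p_k c - \delta_{kl}\p_j c)/c$:

```python
# Sketch: Φ_g(y, s) by integrating the geodesic ODE for g = c^{-2} dx^2.
import numpy as np
from scipy.integrate import solve_ivp

def make_phi(c, grad_c):
    # For g_ij = c^{-2} δ_ij the geodesic equation reads
    # x'' = (2 x' (∇c·x') - |x'|² ∇c) / c.
    def rhs(_, z):
        x, v = z[:2], z[2:]
        gc, cv = grad_c(x), c(x)
        acc = (2.0 * v * np.dot(gc, v) - np.dot(v, v) * gc) / cv
        return np.concatenate([v, acc])

    def phi(y, s):
        # start on Γ = {x₂ = 0} with g-unit speed along the inward normal (0, -1)
        z0 = np.array([y, 0.0, 0.0, -c(np.array([y, 0.0]))])
        sol = solve_ivp(rhs, (0.0, s), z0, rtol=1e-9, atol=1e-12)
        return sol.y[:2, -1]

    return phi

# With constant c ≡ 1 the geodesics are straight lines: Φ(y, s) = (y, -s).
phi_flat = make_phi(lambda x: 1.0, lambda x: np.zeros(2))
assert np.allclose(phi_flat(0.3, 0.7), [0.3, -0.7], atol=1e-6)
```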
We now consider the anisotropic case, and describe a geometric condition on $(M,g)$ that will yield stable recovery of $g$ in the semi-geodesic coordinates of $\Gamma$ from $L_g$ in the set $\Omega$. Specifically, we will assume that the following problem, which is the dual problem to (\[wave\_eq\_aniso\]), $$\label{wave_eq_ad}
\begin{array}{rcl}
\p_t^2 w - \Delta_g w &=& 0, \quad \textnormal{in $(0,T) \times M$},\\
w|_{x \in \p M} &=& 0\\
w|_{t=T} = 0,\ \p_t w|_{t=T} &=& \phi.
\end{array}$$ is *stably observable* in the following sense.
\[def:stable\_obs\] Let $\mathcal G$ be a subset of smooth Riemannian metrics on $M$. Then, (\[wave\_eq\_ad\]) is *stably observable* for $\Omega$ and $\mathcal G$ from $\Gamma$ in time $T > 0$ if there is a constant $C > 0$ such that for all $g \in \mathcal G$ and for all $\phi \in L^2(\Omega)$ the solutions $w = w^{\phi} = w^{\phi,g}$ of (\[wave\_eq\_ad\]) uniformly satisfy $$\label{stable_obs}
{\left\|\phi \right\|}_{L^2(\Omega)} \le C {\left\|\p_\nu w^\phi \right\|}_{L^2((0,T) \times \Gamma)}.$$
A complete characterization of metrics exhibiting stable observability is not presently known; however, stable observability is known to hold under suitable convexity conditions. Indeed, if $(M,g)$ admits a strictly convex function $\ell$ without critical points, and satisfies $$\{x \in \p M;\ (\nabla \ell(x), \nu)_g \ge 0 \} \subset \Gamma,$$ then there is a neighbourhood $\mathcal G$ of $g$ and $T > 0$ such that (\[wave\_eq\_ad\]) is stably observable for $M$ and $\mathcal G$ from $\Gamma$ in time $T > 0$, see [@Liu2016]. Note that this result gives stable observability over the complete manifold $M$ but we will need it only over the set $\Omega$.
Stable observability in the case of the Neumann boundary condition is presently poorly understood. For instance, stable observability cannot be easily derived from an estimate like [@Oksanen2014 Th 3], the reason being that the $H^1$-norm of the Dirichlet trace of a solution to the wave equation is not bounded by the $L^2$-norm of the Neumann trace, while the opposite is true [@Oksanen2014 Th. 4]. See also [@Tataru1998] for a detailed discussion. For this reason we restrict our attention to the case of the Dirichlet boundary condition.
We use the notation $$W_g f = u^f(T,\cdot)|_{\Omega}, \quad f \in L^2([0,T] \times \Gamma).$$ The stable observability (\[stable\_obs\]) says that $W_g^*$ is injective, and by duality, it implies that $W_g : L^2([0,T]
\times \Gamma) \rightarrow L^2(\Omega)$ is surjective (see [@Bardos1996]). In this case (\[wave\_eq\_aniso\]) is said to be exactly controllable on $\Omega$, and in particular, for any $\phi \in L^2(\Omega)$ the control problem $W_g f = \phi$ has the minimum norm solution $f = W_g^\dag \phi$ given by the pseudoinverse of $W_g$.
\[th\_main\] Consider a family $\mathcal G$ of metrics $\tilde g$ satisfying (\[Omega\_cond\]) and suppose that (\[wave\_eq\_ad\]) is stably observable for $\Omega$ and $\mathcal G$ from $\Gamma$ in time $T > 0$. Let $g \in \mathcal G$ and suppose that $$\label{smallness_C1}
{\left\|\tilde g - g \right\|}_{C^2(M)} \le \epsilon, \quad \tilde g \in \mathcal G.$$ Then for small enough $\epsilon > 0$, there is $C > 0$ such that $${\left\|\Psi^* \tilde g - g \right\|}_{H^{-2}(\Omega)} \le C {\left\|L_{\tilde g} - L_g \right\|}_*,
\quad \tilde g \in \mathcal G,$$ where $\Psi^* = (\Phi_g^*)^{-1} \Phi_{\tilde g}^*$ and $${\left\|L_g \right\|}_* = {\left\|L_g \right\|}_{L^2((0,T) \times \Gamma) \to L^2(\Gamma\times(0,\epsilon))}
+ {\left\|L_g \circ \p_t^2 \right\|}_{L^2((0,T) \times \Gamma) \to H^{-2}(\Gamma\times(0,\epsilon))}.$$
We use again the notation $\Sigma = \Gamma \times (0,r_0)$ and write also $\Sigma_T = \Gamma \times (0,T)$. Let $p \in
\Sigma$, and denote by $(x^1, \dots, x^n)$ the coordinates on $\Sigma$ corresponding to local semi-geodesic coordinates $(y,r)$. Let $j,k = 1,\dots, n$ and let $\omega \subset
\Sigma$ be a neighbourhood of $p$. Choose $\phi_\ell \in
C_0^\infty(\Sigma)$, $\ell = 1,2,3$, as in (\[def\_phi\_ell\]). Note that solving (\[scheme\_anisotropic\]) and taking the limit $\alpha \to 0$ is equivalent to computing $L_g^\dagger \phi^\ell$, see e.g. [@Engl1996 Th. 5.2].
Analogously to (\[scheme\_anisotropic\_step2\]), writing the change to local coordinates explicitly, it holds that $$\begin{aligned}
(\Phi_g^* g)^{jk}(x) =
\frac 1 2 (L_g \p_t^2 h_1(x) - &x^k L_g \p_t^2 h_2(x) - x^j L_g \p_t^2 h_3(x)),\end{aligned}$$ where $h_\ell = L_g^\dagger \phi_\ell$, $\ell = 1, 2, 3$. It will be enough to bound $${\left\|L_{\tilde g} \p_t^2 L_{\tilde g}^\dagger \phi_\ell - L_g \p_t^2 L_g^\dagger \phi_\ell \right\|}_{H^{-2}(\omega)}, \quad \ell = 1, 2, 3,$$ in terms of the difference $L_{\tilde g} - L_g$. We have $$\begin{aligned}
&{\left\|L_{\tilde g} \p_t^2 L_{\tilde g}^\dagger \phi_\ell - L_g \p_t^2 L_g^\dagger \phi_\ell \right\|}_{H^{-2}(\omega)}
\\&\quad\le
{\left\|L_{\tilde g} \p_t^2 L_{\tilde g}^\dagger \phi_\ell - L_g \p_t^2 L_{\tilde g}^\dagger \phi_\ell \right\|}_{H^{-2}(\omega)}
+
{\left\|L_{g} \p_t^2 L_{\tilde g}^\dagger \phi_\ell - L_g \p_t^2 L_g^\dagger \phi_\ell \right\|}_{H^{-2}(\omega)}
\\&\quad\le
{\left\|(L_{\tilde g} - L_g) \circ \p_t^2 \right\|}_{L^2(\Sigma_T) \to H^{-2}(\Sigma)}
{\left\|L_{\tilde g}^\dagger \right\|}_{L^2(\Sigma) \to L^2(\Sigma_T)} {\left\|\phi_\ell \right\|}_{L^2(\Sigma)}
\\&\qquad\quad+ {\left\|L_g \circ \p_t^2 \right\|}_{L^2(\Sigma_T) \to H^{-2}(\Sigma)}
{\left\|L_{\tilde g}^\dagger - L_g^\dagger \right\|}_{L^2(\Sigma) \to L^2(\Sigma_T)}{\left\|\phi_\ell \right\|}_{L^2(\Sigma)}.\end{aligned}$$
We omit writing subscripts in operator norms below as their meaning should be clear from the context. Pseudoinversion is continuous in the sense that $${\left\|L_{\tilde g}^\dagger - L_g^\dagger \right\|}
\le 3 \max\left({\left\|L_{\tilde g}^\dagger \right\|}, {\left\|L_{g}^\dagger \right\|}\right)
{\left\|L_{\tilde g} - L_g \right\|},$$ see e.g. [@Izumino1983]. It remains to show that ${\left\|L_{\tilde g}^\dagger \right\|}$ is uniformly bounded for $\tilde g$ satisfying (\[smallness\_C1\]). Note that $L_{\tilde g} = \Phi_{\tilde g}^* W_{\tilde g}$ and recall that (\[stable\_obs\]) implies ${\left\|(W_{\tilde g}^*)^\dagger \right\|} \le C$, which again implies that ${\left\|W_{\tilde g}^\dagger \right\|} \le C$. Here the constant $C$ is uniform for $\tilde g
\in \mathcal G$. Moreover, Lemma \[lem\_Phi\_inv\] implies that, for small enough $\epsilon > 0$ in (\[smallness\_C1\]), we have ${\left\|(\Phi_{\tilde g}^*)^{-1} \right\|} \le C$. To summarize, there is a uniform constant $C$ for $\tilde g$ satisfying (\[smallness\_C1\]) such that $${\left\|\Phi_{\tilde g}^* \tilde g - \Phi_g^* g \right\|}_{H^{-2}(\omega)}
\le C {\left\|L_{\tilde g} - L_g \right\|}_* {\left\|\phi_\ell \right\|}_{L^2(\Sigma)}.$$ The claim follows by using a partition of unity. Note that the functions $\phi_\ell$ can be chosen so that they are uniformly bounded in $L^2$ when $\omega$ is varied.
Computational experiment
========================
In this section, we provide a computational experiment to demonstrate our approach to recovering an isotropic wave speed from the N-to-D map. We conduct our computational experiment in the case where $M$ is a domain in $\R^2$; however, we stress that our approach generalizes to any $n \geq 2$.
Forward modelling and control solutions
---------------------------------------
For our computational experiment, we consider waves propagating in the lower half-space $M = \R \times (-\infty,0]$ with respect to the following wave speed: $$ c(x_1,x_2) = 1 + \frac{1}{2}x_2-\frac{1}{2}\exp\left(-4\left(x_1^2 + (x_2-0.375)^2\right)\right).$$ See Figure \[fig:lens\_model\]. Waves are simulated and recorded at the boundary for time $2T$, where $T = 1.0$. Sources are placed inside the accessible set $\Gamma = [-\ell_s,\ell_s] \times \{0\}$, where $\ell_s = 3.0$, and receiver measurements are made in the set $\Rec =
[-\ell_r,\ell_r] \times \{0\}$, where $\ell_r = 4.5$.
![ \[fig:lens\_model\] (a) True wave speed $c$. (b) Semi-geodesic coordinate grid associated with $c$. (c) Some example ray paths with non-orthogonal intersection to $\p M$.](./draft_images/TRIP_model.png "fig:"){width="\linewidth"}
![](./draft_images/TRIP_BNC.png "fig:"){width="\linewidth"}
![](./draft_images/TRIP_Replica.png "fig:"){width="\linewidth"}
For sources, we use a collection of Gaussian functions spanning a subspace of $L^2([0,T] \times \Gamma)$. Specifically, we consider sources of the form $$\varphi_{i,j}(t,x) = C \exp\left(-a ((t-t_{s,i})^2 + (x-x_{s,j})^2)\right).$$ Here, the pairs $(t_{s,i}, x_{s,j})$ are chosen to form a uniformly spaced grid in $[0.025,0.975] \times [-\ell_s,\ell_s]$ with spacing $\Delta t_s = \Delta x_s = 0.025$. In total, we consider $N_{t,s} =
39$ source times $t_{s,i}$ and $N_{x,s} = 241$ source locations $x_{s,j}$. The constant $a$, controlling the width of the basis functions in space and time, is taken as $a = 1381.6$, and the constant $C$ is chosen to normalize the functions $\varphi_{i,j}$ in $L^2([0,T]\times\Gamma)$.
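This source construction can be sketched as follows; the quadrature grid is our own choice, and the normalization constant $C$ is computed numerically rather than in closed form. Note that with $a = 1381.6$ the $1/e$ half-width of each source is $1/\sqrt{a} \approx 0.027$, comparable to the grid spacing $0.025$.

```python
# Building one normalized Gaussian source φ_{i,j} with the parameters above.
import numpy as np

T, ls, a = 1.0, 3.0, 1381.6
t = np.linspace(0.0, T, 401)              # quadrature grid (our choice)
x = np.linspace(-ls, ls, 1201)
dt, dx = t[1] - t[0], x[1] - x[0]
tt, xx = np.meshgrid(t, x, indexing='ij')

def source(ts, xs):
    phi = np.exp(-a * ((tt - ts)**2 + (xx - xs)**2))
    C = 1.0 / np.sqrt(np.sum(phi**2) * dt * dx)   # enforce ||φ||_{L²} = 1
    return C * phi

phi = source(0.5, 0.0)        # one basis function, centred at (t, x) = (0.5, 0)
assert abs(np.sum(phi**2) * dt * dx - 1.0) < 1e-10
```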
Wave propagation is simulated using a continuous Galerkin finite element method with Newmark time-stepping. Waves are simulated for $t
\in [-t_0, 2T]$, where $t_0 = 0.1$, although N-to-D measurements are only recorded in $[0,2T]$. The short buffer interval, $[-t_0, 0.0]$, is added to the simulation interval in order to avoid numerical dispersion from non-vanishing sources at $t = 0$; the sources are extended to this interval accordingly. Receiver measurements are simulated by recording the Dirichlet trace $\Lambda^{2T}_{\Gamma,\Rec} \varphi_{i,j}$ at uniformly spaced points $x_{s,r} \in [-\ell_r,\ell_r]$ with spatial separation $\Delta x_r = 0.0125$ at uniformly spaced times $t_{s,r}
\in [0,2T]$ with temporal spacing $\Delta t_r = 0.0025$. Note that our receiver measurements are sampled more densely in both space and time than our source applications. In particular, $\Delta x_r = 0.5 \Delta
x_s$ and $\Delta t_r = 0.1 \Delta t_s$. In total, we take $N_{t,r} =
801$ receiver measurements at each of the $N_{x,r} = 721$ receiver positions.
We briefly comment on the physical scales associated with the computational experiment. In the units above, the wave speed is approximately $1$ at the surface. If we take this to represent a wave speed of approximately $2000$m/s and suppose that the receiver spacing corresponds to $\Delta x_r = 12.5$m, then in the same units $\Delta
t_r = .00125$s. In addition, we have that $\ell_r = 4.5$km and $T =
1.0$s, which implies that receivers are placed within a $9.0$km region and traces are recorded for a total of $2.0$s. In Fig. \[fig:spectrum\] we plot the power spectrum for one of the sources at a fixed source location, to give a sense of the frequencies involved. Note that the source mostly consists of frequencies below $15$Hz.
![ \[fig:spectrum\] Power spectrum of $\varphi_{ij}(\cdot,x_{s,i})$, measured in Hz. We have rescaled the power spectrum so that it has a maximum value of $1$.](./draft_images/source_spectrum/phi_t_power_spectrum.png){width=".5\linewidth"}
In this computational experiment, we have used sources that have a significant frequency component at $0$ Hz. Such low-frequency contributions are not representative of physical source wavelets, so it may be of interest to note that the data we have used can be synthesized from sources which lack $0$ Hz components. In particular, the data used can be synthesized by post-processing data from sources that are products of Gaussians in space and Ricker wavelets in time. We note that Ricker wavelets are the second derivatives of Gaussian functions, and that they are zero-mean sources (hence they vanish at $0$ Hz) that are frequently used as sources when simulating synthetic seismic data. To demonstrate the claim, we first show that $u^{If} = I(u^f)$, where $I$ denotes the integral $Ih(t,\cdot) :=
\int_{0}^t h(s,\cdot)\,ds$. To see this, we first observe that, $$\begin{aligned}
\p_t^2(Iu^f) &= \p_t^2 \left( \int_0^t u^f(s,\cdot)\,ds\right) = \p_t u^f(t,\cdot) = \int_0^t \p_t^2 u^f(s,\cdot)\,ds + \p_t u^f(0, \cdot)\\
&= \int_0^t c^2(x) \Delta u^f(s,\cdot) \, ds = c^2(x) \Delta (Iu^f).\end{aligned}$$ Here, we have used the fact that $\p_t u^f(0,\cdot) = 0$ and $(\p_t^2
- c^2(x)\Delta) u^f = 0$ since $u^f$ solves (\[wave\_eq\_iso\]). Likewise, because $u^f$ solves (\[wave\_eq\_iso\]), it follows that $\p_n (Iu^f) = I(\p_n u^f) = If$ and that $\p_t Iu^f(0,\cdot) = u^f(0,\cdot) = 0$. Since $Iu^f(0,\cdot) = \int_0^0 u^f(s,\cdot) \,ds = 0$, we see that $Iu^f$ satisfies: $$\begin{array}{rcl}
\p_t^2 w - c^2(x)\Delta w &=& 0, \quad \textnormal{in $(0,\infty) \times M$}, \\
\p_{\vec n} w|_{x \in \p M} &=& If, \\
w|_{t=0} = \p_t w|_{t=0} &=& 0,
\end{array}$$ thus $Iu^f$ solves (\[wave\_eq\_iso\]) with Neumann source $If$. Since solutions to (\[wave\_eq\_iso\]) are unique, we see that $Iu^f =
u^{If}$, as claimed. An immediate consequence is that, $$I\Lambda_{\Gamma,\Rec}^{2T}f = Iu^f|_{\Rec} = u^{If}|_\Rec = \Lambda_{\Gamma,\Rec}^{2T} I f.$$ Thus, $\Lambda_{\Gamma,\Rec}^{2T}I^{j} f = I^{j} \Lambda_{\Gamma,\Rec}^{2T} f$ for $j \in
\N$. Next, we let $\psi_{i,j} = \p_t^2 \varphi_{i,j}$, and note that $\psi_{i,j}$ is a product of a Ricker wavelet in time (since it is the second time derivative of a Gaussian function) and a Gaussian in space. We then observe that, by Taylor's theorem with integral remainder, $$\varphi_{i,j}(t,x) = \varphi_{i,j}(0,x) + t \p_t \varphi_{i,j}(0,x) + I^2
\psi_{i,j}(t,x).$$ Under the parameter choices for $\varphi_{i,j}$, the first two terms are considerably smaller than the third for $i \geq 4$, since $0$ belongs to the tail of the Gaussian $\varphi_{ij}$. For $i \leq 3$, the same comment holds if we replace $t = 0$ by the buffer interval start-time, $t = -t_0$ (likewise, we would need to replace $t = 0$ by $t = -t_0$ when applying $I$). In either event, $\varphi_{i,j} \approx
I^2 \psi_{i,j}$, and $\Lambda_{\Gamma,\Rec}^{2T}\varphi_{i,j} \approx
I^2 (\Lambda_{\Gamma,\Rec}^{2T} \psi_{i,j})$. For our particular set-up, the N-to-D data agreed to within an error of about $1$ part in $10^4$. To recapitulate, the data that we have used could be approximately synthesized by first using the (more) realistic sources $\psi_{i,j}$ to simulate the data $\Lambda_{\Gamma,\Rec}^{2T}\psi_{i,j}$, and then post-processing these data by integrating them twice in time.
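The approximation $\varphi_{i,j} \approx I^2 \psi_{i,j}$ can be checked numerically in the time variable alone, since the spatial Gaussian factors out; the sketch below discretizes $I$ with a cumulative trapezoidal rule.

```python
# Check that a Gaussian is (approximately) the second time-antiderivative of
# the corresponding Ricker wavelet: φ ≈ I²ψ with ψ = ∂_t²φ, provided t = 0
# lies deep in the tail of the Gaussian.
import numpy as np

a, t0 = 1381.6, 0.5                      # width parameter and pulse centre
t = np.linspace(0.0, 1.0, 40001)
dt = t[1] - t[0]
phi = np.exp(-a * (t - t0)**2)                       # Gaussian in time
psi = (4*a**2*(t - t0)**2 - 2*a) * phi               # ψ = φ'': a Ricker wavelet

def I(h):                                            # Ih(t) = ∫₀ᵗ h(s) ds
    return np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1])) * dt])

err = np.max(np.abs(I(I(psi)) - phi))                # max|φ| = 1 here
assert err < 1e-3
```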
We introduce some notation, which we will use when discussing our discretization of the connecting operator and control problems. First, let $f \in L^2([0,T]\times\Gamma)$. We use the notation $[f]$ to denote the vector of inner-products with entries $[f]_i = \langle f,
\varphi_i \rangle_{L^2([0,T]\times\Gamma)}$. In addition, we let $\hat{f}$ denote the coefficient-vector for the projection of $f$ onto $\linearSpan\{\varphi_i\}$. Let $A$ be an operator on $L^2([0,T]\times\Gamma)$. We will use the notation $[A]$ to denote the matrix of inner-products $[A]_{ij} = \langle A \varphi_i,
\varphi_j\rangle_{L^2([0,T]\times\Gamma)}$. We approximate all such integrals by successively applying the trapezoidal rule in each dimension.
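A toy version of this quadrature (the grids and test functions are our own choices): trapezoidal weights in each dimension combine into a tensor-product weight matrix, from which inner products and the Gram matrix $G$ are assembled.

```python
# Iterated trapezoidal rule for ⟨·,·⟩ on [0,T]×Γ and the resulting Gram matrix.
import numpy as np

t = np.linspace(0.0, 1.0, 201)
x = np.linspace(-3.0, 3.0, 301)
wt = np.full(t.size, t[1] - t[0]); wt[[0, -1]] *= 0.5   # trapezoid weights in t
wx = np.full(x.size, x[1] - x[0]); wx[[0, -1]] *= 0.5   # trapezoid weights in x
W = np.outer(wt, wx)                                    # tensor-product rule

def inner(u, v):
    return float(np.sum(W * u * v))    # ⟨u, v⟩_{L²([0,T]×Γ)}

tt, xx = np.meshgrid(t, x, indexing='ij')
basis = [np.exp(-50.0 * ((tt - ts)**2 + xx**2)) for ts in (0.3, 0.5, 0.7)]
G = np.array([[inner(p, q) for q in basis] for p in basis])   # Gram matrix

assert np.allclose(G, G.T)
assert np.min(np.linalg.eigvalsh(G)) > 0.0   # basis functions are independent
```

For the middle Gaussian the diagonal entry agrees with the closed-form plane integral $\int e^{-100|z|^2}\,dz = \pi/100$ to quadrature precision.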
After the N-to-D data has been generated, we use the data $\Lambda_{\Gamma}^{2T} \varphi_{i,j}$ to discretize the connecting operator. We accomplish this using a minor modification of the procedure outlined in [@Hoop2016]. In particular, we discretize the connecting operator by computing a discrete approximation to (\[Blago\]): $$[K] = [J\Lambda_\Gamma^{2T}] - [R\Lambda_\Gamma^T]G^{-1}[RJ].$$ Here, $G^{-1}$ denotes the inverse of the Gram matrix $G_{ij}
=\langle\varphi_i,\varphi_j\rangle_{L^2([0,T]\times\Gamma)}$.
Next, we describe our implementation of Lemma \[lemma:approxConstControl\]. Let $y \in \Gamma$, $s \in [0,T]$ and $h \in [0,T-s]$. To obtain the control $\psi_{\alpha,h}$ associated with $\wavecap_{\Gamma}(y,s,h)$, we solve two discrete versions of the boundary control problem (\[eqn:ControlProblem\]). Specifically, for $\tau_1 = s 1_\Gamma$ and $\tau_2 = \tau_{y}^{s+h} \vee
s1_\Gamma$, we solve the discretized control problems: $$\label{eqn:discreteControl}
([K_{\tau_k}] + \alpha) \hat{f} = [b_{\tau_k}].$$ This yields coefficient vectors $\hat{f}_{\alpha,k},$ for $k = 1,2$ associated with the approximate control solutions. Here, we use the notation $[K_{\tau_k}]$ to denote a matrix that deviates slightly from the definition given above. In particular, we obtain $[K_{\tau_k}]$ from $[K]$ by masking rows and columns corresponding to basis functions $\varphi_{i,j}$ localized near $(t_{s,i}, x_{s,j}) \not\in
S_{\tau_k}$. This gives an approximation to the matrix for $K_{\tau_k}
= P_{\tau_k} K P_{\tau_k}$, which we have observed performs well for our particular basis. The right-hand side vector $[b_{\tau_k}]$ is a discrete approximation to $P_{\tau_k}b$ and we obtain it by first computing the vector of inner-products $[b]_l = \langle b,
\varphi_l\rangle_{L^2([0,T]\times\Gamma)}$, and then masking the entries of $[b]$ using the same strategy that we use to compute $[K_{\tau_k}]$. We solve the control problems (\[eqn:discreteControl\]) using Matlab’s back-slash function. After computing the solutions $f_{\alpha,k}$, we then compute $\psi_{h,\alpha} = f_{\alpha,2} - f_{\alpha,1}$.
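The masking-and-solve step can be sketched as follows; $[K]$ and $[b]$ below are random symmetric stand-ins rather than actual connecting-operator data, and NumPy's `solve` plays the role of Matlab's back-slash.

```python
# Masked, regularized control solve ([K_τ] + α) f̂ = [b_τ]: rows and columns
# of [K] outside the support mask are zeroed before solving.
import numpy as np

rng = np.random.default_rng(1)
n = 40
M = rng.standard_normal((n, n))
K = M @ M.T                          # symmetric PSD stand-in for [K]
b = rng.standard_normal(n)
alpha = 1e-3

mask = np.zeros(n, dtype=bool)
mask[:25] = True                     # basis functions with (t_s, x_s) in S_τ

K_tau = np.where(np.outer(mask, mask), K, 0.0)   # mask rows and columns
b_tau = np.where(mask, b, 0.0)
f_hat = np.linalg.solve(K_tau + alpha * np.eye(n), b_tau)

assert np.allclose(f_hat[~mask], 0.0)            # masked coefficients stay zero
```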
In the inversion step, we use the boundary data to approximate harmonic functions in semi-geodesic coordinates in the interior of $M$. To describe this step, let $\phi$ be a harmonic function in $M$. Fix $y,
s,$ and $h$, and let $\psi_{\alpha,h}$ denote the control constructed as in the previous paragraph. We define: $$\label{define_Hch}
H_{c,h}\phi(y,s) := \frac{B(\psi_{h,\alpha} , \phi)}{ B(\psi_{h,\alpha} , 1)},$$ and we calculate the right hand side directly using (\[define\_B\]). Note that this expression coincides with an approximation to the leading term in the right-hand side of (\[eqn:accuracyOfApprox\]), so for small $h$ and $\alpha$, $H_{c,h}\phi(y,s)$ will approximate $H_c\phi(y,s)$. However, we recall that (\[eqn:accuracyOfApprox\]) is only accurate to $\mathcal{O}(h^{1/2})$, and in practice we found that (\[define\_Hch\]) tends to be closer to $H_c\phi(y,s+h/2) =
\phi(x(y,s+h/2))$. This is not unexpected, since (\[define\_Hch\]) approximates $H_c\phi(y,s)$ by approximating the average of $\phi$ over $B_h = \wavecap_\Gamma(y,s,h)$, and the point $x(y,s)$ belongs to the topological boundary of $B_h$, whereas $x(y,s+h/2)$ belongs to the interior of $B_h$. Consequently, we will compare $H_{c,h}\phi(y,s)$ to $H_c\phi(y,s+h/2)$ below.
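The reason for comparing with $H_c\phi(y, s+h/2)$ can be seen already in one dimension: the average of a smooth function over $[s, s+h]$ matches its midpoint value to $\mathcal{O}(h^2)$, but its endpoint value only to $\mathcal{O}(h)$.

```python
# Midpoint vs endpoint accuracy of an interval average, for a smooth function.
import numpy as np

f = np.cos
s, h = 0.4, 0.05
u = np.linspace(s, s + h, 10001)
avg = np.mean(0.5 * (f(u[1:]) + f(u[:-1])))   # ≈ (1/h) ∫ f over [s, s+h]

err_mid = abs(avg - f(s + h/2))               # O(h²)
err_end = abs(avg - f(s))                     # O(h)
assert err_mid < err_end
assert err_mid < h**2
```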
Inverting for the wave speed
----------------------------
Our approach to reconstruct the wave speed $c$ consists of two steps. In the first step, we implement Proposition \[prop:harmonicReconstr\] to compute an approximation to the coordinate transform $\Phi_c$ on a grid of points $(y_i,s_j) \in
\Gamma \times [0,T]$. The second step is to differentiate the approximate coordinate transform in the $s$-direction and to apply (\[scheme\_isotropic\]) to compute the wave speed at the estimated points.
To approximate the coordinate transform $\Phi_c$, we first fix a small wave cap height $h > 0$, which we use at every grid point. The wave cap height controls the spatial extent of the waves $u^{\psi_{h,\alpha}}(T,\cdot)$ in the interior of $M$. Because the vertical resolution of our basis is controlled by the separation between sources in time, we choose $h$ to be an integral multiple of $\Delta t_s$, and in particular, we take $h = 2\Delta t_s$. Likewise, we choose the grid-points $(y_i,s_j)$ to coincide with the source centers for a subset of our basis functions. Specifically, we take $y_i = x_{s,i}$ and $s_j = t_{s,j}$ for the source locations $x_{s,i}
\in [-1.5,1.5]$ and times $t_{s,j} \in [0.05,0.65]$. In total, the reconstruction grid contains $N_{x,g} = 121$ horizontal positions, and $N_{t,g} = 27$ vertical positions. Then, for each grid point $(y_i,s_j)$ we solve (\[eqn:discreteControl\]) for $k = 1,2$, and obtain the source $\psi_{i,j} = \psi_{\alpha,h}$ for the point $(y_i,s_j)$. Since the Cartesian coordinate functions $x^1$ and $x^2$ are both harmonic, we then apply (\[define\_Hch\]) to both functions at each grid point, and define $$\label{def_phi_ch}
\Phi_{c,h}(y_i, s_j) := \left(H_{c,h} x^1 (y_i, s_j) , H_{c,h} x^2 (y_i, s_j)\right).$$ This yields the desired approximate coordinate transform. We plot the estimated coordinates in Figure \[fig:est\_coords\] and compare the estimated transform $\Phi_{c,h}(y_i,s_j)$ to the points $\Phi_c(y_i, s_j
+ h/2)$ in Figure \[fig:coord\_compare\].
![ (a) The estimated coordinate transform. We have only plotted points for half of the $y_i$ and $s_j$. (b) Estimated points $\Phi_{c,h}(y_i,s_j)$ (purple dots) compared to the semi-geodesic coordinate grid $\Phi_c(y_i,s_j+h/2)$ (black lines) and wave speed.](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/estimated_points.png "fig:"){width="\linewidth"}
![](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/actual_vs_true_coords.png "fig:"){width="\linewidth"}
![ \[fig:reconstruction\_comparison\] (a) True wave speed $c$. (b) Reconstructed wave speed, plotted at the estimated coordinates given by $\Phi_{c,h}$.](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/true_c.png "fig:"){width="\linewidth"}
![](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/estimated_c.png "fig:"){width="\linewidth"}
The last step is to approximate the wave speed. To accomplish this, we first recall that $c(\Phi_c(y,s))^2 = |\p_s \Phi_c(y,s)|_e^2$. Thus, for each base point $y_i$, we fit a smoothing spline to each of the reconstructed coordinates in the $s$-direction, that is, we fit a smoothing spline to the data sets $\{H_{c,h} x^k (y_i, s_j) : j
=1,\ldots, N_{t,g}\}$ for $k =1,2$ for each $i=1,\ldots,N_{x,g}$. We then differentiate the resulting splines at $s_j$, for $j=1,\ldots,
N_{t,g}$ to approximate the derivatives $\p_s H_{c,h}x^k(y_i,s_j)$, at each grid point. Finally, we estimate $c(\Phi_{c,h}(y_i,s_j))$ by computing $|(\p_s H_{c,h}x^1(y_i,s_j),\p_s
H_{c,h}x^2(y_i,s_j))|_e$. We plot the results of this process in Figure \[fig:reconstruction\_comparison\], along with the true wave speed for comparison. We also compare the reconstructed wave speed against the true wave speed in Figure \[fig:compare\_along\_slice\] along coordinate slices.
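The spline-and-differentiate step can be sketched on synthetic data for a known curve; the straight-line "geodesic", noise level, and smoothing parameter below are our own choices, and a straight line gives a constant true speed $|\p_s \Phi|_e$.

```python
# Recover the speed along a curve from noisy sampled positions: fit a
# smoothing spline to each coordinate of s ↦ Φ(y, s), differentiate, and
# take the Euclidean norm of the derivative.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
sg = np.linspace(0.05, 0.65, 27)                 # the s-grid (our choice)
x1 = 0.3 * sg + rng.normal(0.0, 1e-4, sg.size)   # sampled coordinates with
x2 = -0.9 * sg + rng.normal(0.0, 1e-4, sg.size)  # small measurement noise

sp1 = UnivariateSpline(sg, x1, k=3, s=sg.size * 1e-8)
sp2 = UnivariateSpline(sg, x2, k=3, s=sg.size * 1e-8)
speed = np.hypot(sp1.derivative()(sg), sp2.derivative()(sg))

true_speed = np.hypot(0.3, 0.9)                  # exact |∂_s Φ| for this line
assert np.max(np.abs(speed - true_speed)) < 0.05
```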
Inspecting the bottom row of Figure \[fig:compare\_along\_slice\], we see that the reconstruction is generally good at the estimated points. The reconstruction quality decreases as $s_j$ increases, which is expected, since the points $\Phi_{c,h}(y_i,s_j)$ with large $s_j$ correspond to the points which are furthest from the set $\Gamma$. Hence the N-to-D data contains a shorter window of signal returns from these points, and thus less information about the wave speed there.
![\[fig:compare\_along\_slice\] Top: Reconstructed wave speed with three approximated geodesics. Bottom row: true wave speed (blue curve) and reconstructed wave speed (red triangles) evaluated at the estimated coordinates for each of the indicated geodesics. The $x$-axis denotes the $x^2$-coordinate (depth) along the approximated geodesic.](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/rays.png "fig:"){width="\linewidth"}
![](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/compare_along_slice_indepth_21.png "fig:"){width="\textwidth"}
![](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/compare_along_slice_indepth_41.png "fig:"){width="\textwidth"}
![](./draft_images/reconstructions/images_low_velocity_lens_in_gradient/new_imgs/compare_along_slice_indepth_61.png "fig:"){width="\textwidth"}
[^1]: Simons Chair in Computational and Applied Mathematics and Earth Science, Rice University, Houston TX 77005, USA ().
[^2]: Department of Mathematics, Purdue University, West Lafayette, IN 47907 ().
[^3]: Department of Mathematics, University College London, Gower Street, London WC1E 6BT, UK ().
[^4]: Submitted to the editors October 6, 2017.
---
abstract: 'Two parties observing correlated data seek to exchange their data using interactive communication. How many bits must they communicate? We propose a new interactive protocol for data exchange which increases the communication size in steps until the task is done. We also derive a lower bound on the minimum number of bits that is based on relating the data exchange problem to the secret key agreement problem. Our single-shot analysis applies to all discrete random variables and yields upper and lower bounds of a similar form. In fact, the bounds are asymptotically tight and lead to a characterization of the optimal rate of communication needed for data exchange for a general source sequence such as a mixture of IID random variables as well as the optimal second-order asymptotic term in the length of communication needed for data exchange for IID random variables, when the probability of error is fixed. This gives a precise characterization of the asymptotic reduction in the length of optimal communication due to interaction; in particular, two-sided Slepian-Wolf compression is strictly suboptimal.'
author:
- 'Himanshu Tyagi, Pramod Viswanath, and Shun Watanabe[^1] [^2] [^3]'
bibliography:
- 'IEEEabrv.bib'
- 'references.bib'
title: Interactive Communication for Data Exchange
---
Introduction
============
Random correlated data $(X,Y)$ is distributed between two parties with the first observing $X$ and the second $Y$. What is the optimal communication protocol for the two parties to exchange their data? We allow (randomized) interactive communication protocols and a nonzero probability of error. This basic problem was introduced by El Gamal and Orlitsky in [@OrlEl84] where they presented bounds on the average number of bits of communication needed by deterministic protocols for data exchange without error[^4]. When interaction is not allowed, a simple solution is to apply Slepian-Wolf compression [@SleWol73] for each of the two one-sided data transfer problems. The resulting protocol was shown to be of optimal rate, even in comparison with interactive protocols, when the underlying observations are [*independent and identically distributed*]{} (IID) by Csiszár and Narayan in [@CsiNar04]. They considered a multiterminal version of this problem, namely the problem of attaining [*omniscience*]{}, and established a lower bound on the rate of communication to show that interaction does not help in improving the asymptotic rate of communication if the probability of error vanishes to $0$. However, interaction is known to be beneficial in one-sided data transfer ($cf.$ [@Orlitsky90; @YanH10; @YanHUY08; @Dra04]). Can interaction help to reduce the communication needed for data exchange, and if so, what is the minimum length of interactive communication needed for data exchange?
We address the data exchange problem, illustrated in Figure \[f:problem\_description\], and provide answers to the questions raised above. We provide a new approach for establishing [*converse bounds*]{} for problems with interactive communication that relates efficient communication to secret key agreement and uses the recently established conditional independence testing bound for the length of a secret key [@TyaWat14]. Furthermore, we propose an [*interactive protocol for data exchange*]{} which matches the performance of our lower bound in several asymptotic regimes. As a consequence of the resulting single-shot bounds, we obtain a characterization of the optimal rate of communication needed for data exchange for a general sequence $(X_n, Y_n)$ such as a mixture of IID random variables as well as the optimal second-order asymptotic term in the length of communication needed for data exchange for the IID random variables $(X^n, Y^n)$, the first instance of such a result in source coding with interactive communication[^5]. This in turn leads to a precise characterization of the gain in asymptotic length of communication due to interaction.
#### Related work {#related-work .unnumbered}
The role of interaction in multiparty data compression has been long recognized. For the data exchange problem, this was first studied in [@OrlEl84] where interaction was used to facilitate data exchange by communicating optimally few bits in a single-shot setup with zero error. In a different direction, [@Dra04; @YanH10; @YanHUY08] showed that interaction enables a universal variable-length coding for the Slepian-Wolf problem (see, also, [@FedS02] for a related work on universal encoding). Furthermore, it was shown in [@YanH10] that the redundancy in variable-length Slepian-Wolf coding with known distribution can be improved by interaction. In fact, the first part of our protocol is essentially the same as the one in [@YanH10] (see, also, [@BraRao11]) wherein the length of the communication is increased in steps until the second party can decode. In [@YanH10], the step size was chosen to be $\cO(\sqrt{n})$ for the universal scheme and roughly $\cO(n^{1/4})$ for the known distribution case. We recast this protocol in an information spectrum framework (in the spirit of [@HayTyaWat14ii]) and allow for a flexible choice of the step size. By choosing this step size appropriately, we obtain exact asymptotic results in various regimes. Specifically, the optimal choice of this step size $\Delta$ is given by the square root of the essential [*length of the spectrum*]{} of $\bPP{X|Y}$, $i.e.$, $\Delta = \sqrt{\lamax - \lamin}$ where $\lamax$ and $\lamin$ are large probability upper and lower bounds, respectively, for the random variable $h(X|Y) = -\log
\bP{X|Y}{X|Y}$. The $\cO(\sqrt{n})$ choice for the universal case of [@YanH10] follows as a special case since for the universal setup with IID source $h(X^n|Y^n)$ can vary over an interval of length $\cO(n)$. Similarly, for a given IID source, $h(X^n|Y^n)$ can essentially vary over an interval of length $\cO(\sqrt{n})$ for which the choice of $\Delta =
\cO(n^{1/4})$ in [@YanH10] is appropriate by our general principle. While the optimal choice of $\Delta$ (up to the order) was identified in [@YanH10] for special cases, the optimality of this choice was not shown there. Our main contribution is a converse which shows that our achieved length of communication is optimal in several asymptotic regimes. As a by-product, we obtain a precise characterization of the gain due to interaction, one of the few such instances available in the literature. Drawing on the techniques introduced in this paper, the much more involved problem of simulation of interactive protocols was addressed in [@TyagiVVW16; @TyagiVVW17].
#### Organization {#organization .unnumbered}
The remainder of this paper is organized as follows: We formally describe the data exchange problem in Section \[sec:problem\]. Our results are summarized in Section \[sec:main\_results\]. Section \[sec:achievability\] contains our single-shot achievability scheme, along with the necessary prerequisites to describe it, and Section \[sec:converse\] contains our single-shot converse bound. The optimal rate of communication for general sources and the strong converse with second-order asymptotics for the communication length are obtained as consequences of the single-shot bounds in Sections \[s:general\_sources\] and \[s:strong\_converse\], respectively. The final section contains a discussion of our results and extensions to the error exponent regime.
Problem formulation {#sec:problem}
===================
![The data exchange problem.[]{data-label="f:problem_description"}](problem_description)
Let the first and the second party, respectively, observe discrete random variables $X$ and $Y$ taking values in finite sets $\cX$ and $\cY$. The two parties wish to know each other’s observation using interactive communication over a noiseless (error-free) channel. The parties have access to local randomness (private coins) $U_\cX$ and $U_\cY$ and shared randomness (public coins) $U$ such that the random variables $U_\cX, U_\cY, U$ are finite-valued, mutually independent, and jointly independent of $(X, Y)$. For simplicity, we restrict to [*tree-protocols*]{} ($cf.$ [@KushilevitzNisan97]). A tree-protocol $\pi$ consists of a binary tree, termed the [*protocol-tree*]{}, with the vertices labeled by $1$ or $2$. The protocol starts at the root and proceeds towards the leaves. When the protocol is at vertex $v$ with label $i_v\in\{1,2\}$, party $i_v$ communicates a bit $b_v$ based on its local observations $(X, U_\cX, U)$ for $i_v=1$ or $(Y, U_\cY, U)$ for $i_v =2$. The protocol proceeds to the left- or right-child of $v$, respectively, if $b_v$ is $0$ or $1$. The protocol terminates when it reaches a leaf, at which point each party produces an output based on its local observations and the bits communicated during the protocol, namely the transcript $\Pi=\pi(X,Y, U_\cX,U_\cY, U)$. Figure \[f:tree\_protocols\] shows an example of a protocol tree.
![A two-party protocol tree.[]{data-label="f:tree_protocols"}](protocol-tree2.pdf)
The [*length of a protocol*]{} $\pi$, denoted $|\pi|$, is the maximum accumulated number of bits transmitted in any realization of the protocol, namely the depth of the protocol tree.
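As a concrete illustration, the tree-protocol model above can be simulated directly. The following sketch is our own (the class names and the toy bit functions are illustrative, not from the paper): it walks a small protocol tree and records the transcript, whose length is bounded by the depth of the tree.

```python
# Minimal sketch of a tree-protocol: each internal node is labeled by the
# party that speaks; the bit it sends selects the left (0) or right (1)
# child, and the protocol ends at a leaf.

class Node:
    def __init__(self, speaker=None, bit_fn=None, left=None, right=None):
        self.speaker = speaker    # 1 or 2, or None for a leaf
        self.bit_fn = bit_fn      # maps the speaker's local view to a bit
        self.left, self.right = left, right

def run(root, x, y):
    """Walk the protocol tree; return the transcript (list of bits)."""
    transcript, node = [], root
    while node.speaker is not None:
        view = x if node.speaker == 1 else y
        b = node.bit_fn(view)
        transcript.append(b)
        node = node.right if b else node.left
    return transcript

# Depth-2 example: party 1 announces the parity of x, then party 2 that of y.
leaf = Node()
lvl2 = Node(speaker=2, bit_fn=lambda y: y % 2, left=leaf, right=leaf)
root = Node(speaker=1, bit_fn=lambda x: x % 2, left=lvl2, right=lvl2)
print(run(root, x=3, y=4))  # [1, 0]
```

Here $|\pi| = 2$: every realization of the inputs produces a two-bit transcript.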
For $0 \le \ep < 1$, a protocol $\pi$ attains $\ep$-[*data exchange*]{} ($\ep$-DE) if there exist functions $\hat{Y}$ and $\hat{X}$ of $(X,\Pi, U_\cX, U)$ and $(Y,\Pi, U_\cY, U)$, respectively, such that $$\begin{aligned}
\bPP{}(\hat{X} = X,~\hat{Y} = Y) \ge 1 - \varepsilon.
\label{e:omn_error_def}\end{aligned}$$ The [*minimum communication for $\ep$-DE*]{} $L_\ep(X,Y)$ is the infimum of lengths of protocols[^6] that attain $\ep$-DE, $i.e.$, $L_\ep(X,Y)$ is the minimum number of bits that must be communicated by the two parties in order to exchange their observed data with probability of error less than $\ep$.
Protocols with $2$ rounds of communication $\Pi_1$ and $\Pi_2$ which are functions of only $X$ and $Y$, respectively, are termed [ *simple protocols*]{}. We denote by $L_\ep^{\mathrm{s}}(X,Y)$ the minimum communication for $\ep$-DE by a simple protocol.
Summary of results {#sec:main_results}
==================
To describe our results, denote by $h(X) = -\log \bP{X}{X}$ and $h(X|Y) = -\log \bP{X|Y}{X|Y}$, respectively, the [*entropy density*]{} of $X$ and the [ *conditional entropy density*]{} of $X$ given $Y$. Also, pivotal in our results is a quantity we call the [*sum conditional entropy density*]{} of $X$ and $Y$ defined as $$\romn XY := h(X|Y) + h(Y|X).$$
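For a concrete feel for the sum conditional entropy density, the following snippet evaluates $h(X \triangle Y)$ for a small hypothetical joint pmf (the numbers are our own illustrative choice):

```python
import math

# Hypothetical joint pmf on a 2x2 alphabet (numbers are illustrative only).
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
Px = {x: sum(p for (a, _), p in P.items() if a == x) for x in (0, 1)}
Py = {y: sum(p for (_, b), p in P.items() if b == y) for y in (0, 1)}

def sum_cond_entropy_density(x, y):
    """h(x ∆ y) = h(x|y) + h(y|x) = -log P(x|y) - log P(y|x), in bits."""
    h_x_given_y = -math.log2(P[(x, y)] / Py[y])
    h_y_given_x = -math.log2(P[(x, y)] / Px[x])
    return h_x_given_y + h_y_given_x

print(sum_cond_entropy_density(0, 0))   # small for likely ("compatible") pairs
print(sum_cond_entropy_density(0, 1))   # large for surprising pairs
```

The density is small on realizations the other party can almost guess, and large on surprising ones; the protocol below exploits exactly this variability.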
[**An interactive data exchange protocol.**]{} Our data exchange protocol is based on an interactive version of the Slepian-Wolf protocol where the length of the communication is increased in steps until the second party decodes the data of the first. Similar protocols have been proposed earlier for distributed data compression in [@FedS02; @YanH10], for protocol simulation in [@BraRao11], and for secret key agreement in [@HayTyaWat14i; @HayTyaWat14ii].
In order to send $X$ to an observer of $Y$, a single-shot version of the Slepian-Wolf protocol was proposed in [@MiyKan95] (see, also, [@Han03 Lemma 7.2.1]). Roughly speaking, this protocol simply hashes $X$ to as many bits as the rightmost point in the spectrum[^7] of $\bPP{X|Y}$. The main shortcoming of this protocol for our purpose is that it sends the same number of bits for every realization of $(X,Y)$. However, we would like to use as few bits as possible for sending $X$ to Party 2 so that the remaining bits can be used for sending $Y$ to Party 1. Note that once $X$ is recovered by Party 2 correctly, it can send $Y$ to Party 1 without error using, say, Shannon-Fano-Elias coding (e.g., see [@CovTho06 Section 5]); the length of this second communication is $\lceil h(Y|X) \rceil$ bits. Our protocol accomplishes the first part above using roughly $h(X|Y)$ bits of communication.
Specifically, in order to send $X$ to $Y$ we use a [*spectrum slicing technique*]{} introduced in [@Han03] (see, also, [@HayTyaWat14i; @HayTyaWat14ii]). We divide the support $[\lamin,
\lamax]$ of the spectrum of $\bPP{X|Y}$ into $N$ slices of size $\Delta$ each; see Figure \[f:spectrum\_slicing\] for an illustration.
![Spectrum slicing in Protocol \[p:slepian\_wolf\_interactive\].[]{data-label="f:spectrum_slicing"}](spectrum_slicing)
The protocol begins with the leftmost slice: Party 1 sends $\lamin+\Delta$ hash bits to Party 2. If Party 2 can find a unique $x$ that is compatible with the received hash bits and $h(x|Y)$ is within the current slice of the conditional information spectrum, it sends back an ${\rm ACK}$ and the protocol stops. Else, Party 2 sends back a ${\rm NACK}$ and the protocol moves to the next round, in which Party 1 sends an additional $\Delta$ hash bits. The parties keep moving to the next slice until either Party 2 sends an ${\rm ACK}$ or all slices are covered. We will show that this protocol is reliable and uses no more than $h(X|Y) + \Delta + N$ bits of communication for each realization of $(X,Y)$. As mentioned above, once Party 2 gets $X$, it sends back $Y$ using $h(Y|X) + 1$ bits, thereby resulting in an overall communication of $\romn XY+ \Delta +N+1$ bits. In our applications, we shall choose $N$ and $\Delta$ to be of negligible order in comparison with the tail bounds for $\romn XY$. Thus, we have the following upper bound on $L_\ep(X,Y)$. (The statement here is rough; see Theorem \[t:interactive\_data\_exchange\] below for a precise version.)
For every $0< \ep <1$, $$L_\ep(X,Y) \,\lesssim\, \inf\{\gamma: \bPr{\romn XY >\gamma} \leq
\ep\}.$$
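To make the round structure concrete, here is a toy simulation of the step-wise hashing loop. The pmf, the slice parameters, and the one-bit-per-round hashing are our own illustrative simplifications (the actual protocol sends $\Delta$ hash bits per round from a $2$-universal family):

```python
import math
import random

# Toy joint pmf (illustrative numbers, not from the paper).
P = {(0, 0): .30, (0, 1): .05, (1, 0): .05, (1, 1): .30,
     (2, 0): .10, (2, 1): .05, (3, 0): .05, (3, 1): .10}
Py = {y: sum(p for (_, b), p in P.items() if b == y) for y in (0, 1)}
X_ALPHABET = range(4)

def h_cond(x, y):
    """Conditional entropy density h(x|y) = -log P(x|y), in bits."""
    return -math.log2(P[(x, y)] / Py[y])

def exchange_x(x, y, lam_min=1.0, delta=1.0, n_slices=3, seed=7):
    """Step-wise hashing with ACK/NACK feedback; returns (x_hat, bits used)."""
    rng = random.Random(seed)                        # shared public coins U
    hashes = [{xp: rng.getrandbits(1) for xp in X_ALPHABET}
              for _ in range(n_slices)]              # fresh 1-bit hash per round
    received, bits_sent = [], 0
    for i in range(n_slices):
        received.append(hashes[i][x])                # Party 1 sends hash bits
        bits_sent += 1
        lam_i = lam_min + i * delta                  # slice [lam_i, lam_i + delta)
        cands = [xp for xp in X_ALPHABET
                 if all(hashes[k][xp] == received[k] for k in range(i + 1))
                 and lam_i <= h_cond(xp, y) < lam_i + delta]
        bits_sent += 1                               # 1-bit ACK/NACK feedback
        if len(cands) == 1:
            return cands[0], bits_sent               # ACK: Party 2 decoded
    return None, bits_sent                           # error: slices exhausted

print(exchange_x(x=2, y=0))  # (2, 4)
```

Because $h(2|0) \approx 2.32$ lies in the second slice, the decoder stays silent in the first round and decodes in the second, using fewer bits than a worst-case one-shot hash would.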
[**A converse bound.**]{} Our next result, which is perhaps the main contribution of this paper, is a lower bound on $L_\ep(X,Y)$. This bound is derived by connecting the data exchange problem to the two-party secret key agreement problem. For an illustration of our approach in the case of IID random variables $X^n$ and $Y^n$, note that the optimal rate of a secret key that can be generated is given by $I(X\wedge Y)$, the mutual information between $X$ and $Y$ [@Mau93; @AhlCsi93]. Also, using a privacy amplification argument ($cf.$ [@BenBraCreMau95; @Ren05]), it can be shown that a data exchange protocol using $nR$ bits can yield roughly $n(H(XY) - R)$ bits of secret key. Therefore, $I(X\wedge Y)$ exceeds $H(XY) - R$, which further gives $$R \geq H(X| Y) + H(Y| X).$$ This connection between secret key agreement and data exchange was noted first in [@CsiNar04] where it was used for designing an optimal rate secret key agreement protocol. Our converse proof is, in effect, a single-shot version of this argument.
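The IID heuristic above is easy to check numerically. In this sketch we use a doubly symmetric binary source (our illustrative choice), for which $H(XY) - I(X \wedge Y)$ indeed equals $H(X|Y) + H(Y|X)$:

```python
import math

def h2(q):
    """Binary entropy in bits."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

# Doubly symmetric binary source: X ~ Bern(1/2), Y = X xor Bern(p).
p = 0.1
H_XY = 1.0 + h2(p)             # H(XY) = H(X) + H(Y|X)
I_XY = 1.0 - h2(p)             # I(X ∧ Y) = H(Y) - H(Y|X)

# Privacy-amplification argument: a rate-R exchange protocol leaves roughly
# H(XY) - R bits of secret key, so H(XY) - R <= I(X ∧ Y), i.e.
# R >= H(X|Y) + H(Y|X) = 2 h2(p).
rate_lower_bound = H_XY - I_XY
print(rate_lower_bound)        # = 2 * h2(0.1) ≈ 0.938 bits/symbol
```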
Specifically, the “excess” randomness generated when the parties observing $X$ and $Y$ share a communication $\Pi$ can be extracted as a secret key independent of $\Pi$ using the [*leftover hash lemma*]{} [@ImpLevLub89; @RenWol05]. Thus, denoting by $S_\ep(X,Y)$ the maximum length of secret key and by $H$ the length of the common randomness ($cf.$ [@AhlCsi93]) generated by the two parties during the protocol, we get $$H - L_\ep(X,Y) \leq S_\ep(X,Y).$$
Next, we apply the recently established [*conditional independence testing*]{} upper bound for $S_\ep(X,Y)$ [@TyaWat14; @TyaWat14ii], which follows by reducing a binary hypothesis testing problem to secret key agreement. However, the resulting lower bound on $L_\ep(X,Y)$ is good only when the spectrum of $\bPP{XY}$ is concentrated. Heuristically, this slack in the lower bound arises since we are lower bounding the worst-case communication complexity of the protocol for data exchange – the resulting lower bound need not apply for every $(X,Y)$ but only for a few realizations of $(X,Y)$ with probability greater than $\ep$. To remedy this shortcoming, we once again take recourse to spectrum slicing and show that there exists a slice of the spectrum of $\bPP{XY}$ where the protocol requires sufficiently large number of bits; Figure \[f:spectrum\_slicing2\] illustrates this approach. The resulting lower bound on $L_\ep(X,Y)$ is stated below roughly, and a precise statement is given in Theorem \[t:converse\_general1\].
![Bounds on secret key length leading to the converse. Here $L_\ep$ abbreviates $L_\ep(X,Y)$ and $H_\ep$ denotes the $\ep$-tail of $\romn
XY$.[]{data-label="f:spectrum_slicing2"}](spectrum_slicing2)
\[res:lower\_bound\] For every $0< \ep <1$, $$L_\ep(X,Y) \,\gtrsim\, \inf\{\gamma: \bPr{\romn XY >\gamma} \leq
\ep\}.$$
Note that the upper and the lower bounds for $L_\ep(X,Y)$ in the two results above appear to be of the same form (upon ignoring a few error terms). In fact, the displayed term dominates asymptotically and leads to tight bounds in several asymptotic regimes. Thus, the imprecise forms above capture the spirit of our bounds.
[**Asymptotic optimality.**]{} The single-shot bounds stated above are asymptotically tight up to the first order term for any sequence of random variables $(X_n, Y_n)$, and up to the second order term for a sequence of IID random variables $(X^n, Y^n)$. Specifically, consider a general source sequence $(\bX, \bY) = \{(X_n,
Y_n)\}_{n=1}^\infty$. We are interested in characterizing the minimum asymptotic rate of communication for asymptotically error-free data exchange, and seek its comparison with the minimum rate possible using simple protocols.
\[d:R\_star\] The minimum rate of communication for data exchange $R^*$ is defined as $$R^*(\bX,\bY) = \inf_{\ep_n}\limsup_{n\rightarrow \infty} \frac 1n
L_{\ep_n}(X_n, Y_n),$$ where the infimum is over all $\ep_n\rightarrow 0$ as $n \rightarrow
\infty$. The corresponding minimum rate for simple protocols is denoted by $R^*_s$.
Denote by $\Romn$, $\overline{H}(\bX| \bY)$, and $\overline{H}(\bY|\bX)$, respectively, the $\limsup$ in probability of the random variables $\romn{X_n}{Y_n}$, $h(X_n |Y_n)$, and $h(Y_n|X_n)$. The quantity $\overline{H}(\bX| \bY)$ is standard in the information spectrum method [@HanVer93; @Han03] and corresponds to the asymptotically minimum rate of communication needed to send $X_n$ to an observer of $Y_n$ [@MiyKan95] (see, also, [@Han03 Lemma 7.2.1]). Thus, a simple communication protocol of rate $\overline{H}(\bX| \bY) + \overline{H}(\bY|\bX)$ can be used to accomplish data exchange. In fact, a standard converse argument can be used to show the optimality of this rate for simple communication. Therefore, when we restrict ourselves to simple protocols, the asymptotically minimum rate of communication needed is $$\begin{aligned}
R^*_s(\bX,\bY) = \overline{H}(\bX| \bY) + \overline{H}(\bY|\bX).\end{aligned}$$ As an illustration, consider the case when $(X_n, Y_n)$ are generated by a mixture of two $n$-fold IID distributions $\mathrm{P}_{X^nY^n}^{(1)}$ and $\mathrm{P}_{X^nY^n}^{(2)}$. For this case, the right-side above equals ($cf.$ [@Han03]) $$\begin{aligned}
& \max\{H(X^{(1)}\mid Y^{(1)}),H(X^{(2)}\mid Y^{(2)})\} \\
&~~~+ \max\{H(Y^{(1)}\mid X^{(1)}),H(Y^{(2)}\mid X^{(2)})\}.\end{aligned}$$ Can we improve this rate by using interactive communication? Using our single-shot bounds for $L_\ep(X,Y)$, we answer this question in the affirmative.
For a sequence of sources $(\bX,\bY) = \{(X_n, Y_n)\}_{n=1}^\infty$, $$R^*(\bX,\bY) = \Romn.$$
For the mixture of IID example above, $$\begin{aligned}
\Romn &= \max\{H(X^{(1)}\mid Y^{(1)}) + H(Y^{(1)}\mid X^{(1)}), \\
&~~~H(X^{(2)}\mid Y^{(2)}) +H(Y^{(2)}\mid X^{(2)})\},\end{aligned}$$ and therefore, simple protocols are strictly suboptimal in general. Note that while the standard information spectrum techniques suffice to prove the converse when we restrict to simple protocols, their extension to interactive protocols is unclear and our single-shot converse above is needed.
We now turn to the case of IID random variables, $i.e.$, when $X_n =
X^n = (X_1,..., X_n)$ and $Y_n = Y^n = (Y_1, ..., Y_n)$ are $n$ IID repetitions of random variables $(X, Y)$. For brevity, denote by $R^*(X,Y)$ the corresponding minimum rate of communication for data exchange, and by $H(X\triangle Y)$ and $V$, respectively, the mean and the variance of $\romn XY$. Earlier, Csiszár and Narayan [@CsiNar04] showed that $R^*(X,Y) = H(X\triangle Y)$. We are interested in a finer asymptotic analysis than this first-order characterization.
In particular, we are interested in characterizing the asymptotic behavior of $L_\ep(X^n, Y^n)$ up to the second-order term, for every fixed $\ep$ in (0,1). We need the following notation: $$R^*_\ep(X,Y) = \lim_{n \rightarrow \infty}\frac 1n L_\ep(X^n, Y^n),
\quad 0< \ep < 1.$$ Note that $R^*(X,Y) = \sup_{\ep\in (0,1)}R^*_\ep(X,Y)$. Our next result shows that $R^*_\ep(X,Y)$ does not depend on $\ep$ and constitutes a [*strong converse*]{} for the result in [@CsiNar04].
\[result:strong-converse\] For every $0< \ep <1$, $$R^*_\ep(X,Y) = H(X\triangle Y).$$
In fact, this result follows from a general result characterizing the second-order asymptotic term[^8].
For every $0< \ep < 1 $, $$\begin{aligned}
L_{\ep}\left(X^n, Y^n\right) = nH(X\triangle Y) + \sqrt{n V}
Q^{-1}(\ep) + o(\sqrt{n}),\end{aligned}$$ where $Q(a)$ is the tail probability of the standard Gaussian distribution and $V$ is the variance of the sum conditional entropy density $h(X \triangle
Y)$.
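As a quick numeric sketch of this expansion (the values of $H(X \triangle Y)$ and $V$ below are illustrative placeholders, not computed from a specific source):

```python
import math
from statistics import NormalDist

def L_second_order(n, H, V, eps):
    """n*H(X∆Y) + sqrt(n*V)*Q^{-1}(eps), ignoring the o(sqrt(n)) term."""
    q_inv = NormalDist().inv_cdf(1 - eps)    # Q^{-1}(eps)
    return n * H + math.sqrt(n * V) * q_inv

# Illustrative: H = 0.938 bits/symbol, V = 0.5 bits^2, n = 10^4 symbols.
print(L_second_order(10_000, H=0.938, V=0.5, eps=0.1))
print(L_second_order(10_000, H=0.938, V=0.5, eps=0.5))  # Q^{-1}(0.5) = 0
```

For $\ep < 1/2$ the dispersion term adds an $O(\sqrt{n})$ surcharge over the first-order length $nH(X\triangle Y)$; for $\ep = 1/2$ it vanishes.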
While simple protocols are optimal for the first-order term for IID observations, Example \[ex:second\_order\_suboptimal\] in Section \[s:strong\_converse\] exhibits the strict suboptimality of simple protocols for the second-order term.
A single-shot data exchange protocol {#sec:achievability}
====================================
We present a single-shot scheme for two parties to exchange random observations $X$ and $Y$. As a preparation for our protocol, we consider the restricted problem where only the second party observing $Y$ seeks to know the observation $X$ of the first party. This basic problem was introduced in the seminal work of Slepian and Wolf [@SleWol73], where a scheme with optimal rate was given for the case of IID data. A single-shot version of the Slepian-Wolf scheme, which we describe below, was given in [@MiyKan95] (see, also, [@Han03 Lemma 7.2.1]).
Using the standard “random binning” and “typical set” decoding argument, it follows that there exists an $l$-bit communication $\Pi_1 =
\Pi_1(X)$ and a function $\hat X$ of $(\Pi_1, Y)$ such that $$\begin{aligned}
\bPr{X \neq \hat{X}} \le \bPr{h(X| Y) > l-\eta} + 2^{-\eta}.
\label{e:SW_error_bound}\end{aligned}$$
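This bound can be probed by direct simulation. The following Monte Carlo sketch (toy pmf and parameters of our own choosing) implements random binning with a threshold decoder and compares the empirical error against $\Pr[h(X|Y) > l - \eta] + 2^{-\eta}$:

```python
import math
import random

random.seed(1)

# Toy joint pmf (illustrative numbers only).
P = {(0, 0): .45, (0, 1): .05, (1, 0): .05, (1, 1): .45}
Py = {0: .5, 1: .5}

def h_cond(x, y):
    """Conditional entropy density h(x|y) in bits."""
    return -math.log2(P[(x, y)] / Py[y])

def trial(l=3, eta=1):
    pairs, weights = zip(*P.items())
    x, y = random.choices(pairs, weights)[0]          # draw (X, Y) ~ P
    bins = {xp: random.getrandbits(l) for xp in (0, 1)}  # random l-bit binning
    # Decoder: the unique candidate in X's bin with h(x'|y) <= l - eta.
    cands = [xp for xp in (0, 1)
             if bins[xp] == bins[x] and h_cond(xp, y) <= l - eta]
    return cands == [x]

n = 10_000
err = sum(not trial() for _ in range(n)) / n
bound = sum(P[xy] for xy in P if h_cond(*xy) > 2) + 2 ** -1
print(err, bound)   # empirical error well below the bound of 0.6
```

Here the bound is loose (the $2^{-\eta}$ collision term dominates) while the realized error is essentially $\Pr[h(X|Y) > l - \eta]$, the atypicality probability.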
In essence, the result of [@MiyKan95] shows that we can send $X$ to Party 2 with a probability of error less than $\ep$ using roughly as many bits as the $\ep$-tail of $h(X|Y)$. However, the proposed scheme uses the same number of bits for every realization of $(X,Y)$. In contrast, we present an interactive scheme that achieves the same goal and uses roughly $h(X|Y)$ bits when the underlying observations are $(X,Y)$.
While the bound above can be used to establish the asymptotic rate optimality of the Slepian-Wolf scheme even for general sources, the number of bits communicated can be reduced for specific realizations of $X,Y$. This improvement is achieved using an interactive protocol with an ${\rm ACK-NACK}$ feedback which halts as soon as the second party decodes the first party’s observation; this protocol is described in the next subsection. A similar scheme was introduced by Feder and Shulman in [@FedS02], a variant of which was shown to be of least [*average-case complexity*]{} for stationary sources by Yang and He in [@YanH10], requiring $H(X\mid Y)$ bits on average. Another variant of this scheme has been used recently in [@HayTyaWat14ii] to generate secret keys of optimal asymptotic length up to the second-order term.
Interactive Slepian-Wolf Compression Protocol
---------------------------------------------
We begin with an interactive scheme for sending $X$ to an observer of $Y$, which hashes (bins) $X$ into a few values as in the scheme of [@MiyKan95], but unlike that scheme, increases the hash-size gradually, starting with $\lambda_1 = \lambda_{\min}$ and increasing the size by $\Delta$ bits at a time until either $X$ is recovered or $\lambda_{\max}$ bits have been sent. After each transmission, Party 2 sends an $\mathrm{ACK}$ or a $\mathrm{NACK}$ feedback signal; the protocol stops when an $\mathrm{ACK}$ symbol is received.
As mentioned in the introduction, we rely on spectrum slicing. Our protocol focuses on the “essential spectrum” of $h(X|Y)$, $i.e.$, those values of $(X,Y)$ for which $h(X|Y) \in (\lamin, \lamax)$. For $\lambda_{\min}, \lambda_{\max}, \Delta > 0$ with $\lambda_{\max} >
\lambda_{\min}$, let $$\begin{aligned}
\label{eq:number-of-round}
N = \frac{\lambda_{\max} - \lambda_{\min}}{\Delta},\end{aligned}$$ and $$\begin{aligned}
\lambda_i = \lambda_{\min} + (i-1) \Delta,~~1 \le i \le N.
\label{e:lambda_i}\end{aligned}$$ Further, let $$\begin{aligned}
\cT_0 = \Big\{ (x,y) : h_{\bPP{X|Y}}(x|y) \ge \lambda_{\max} \mbox{ or
} h_{\bPP{X|Y}}(x|y) < \lambda_{\min} \Big\},
\label{e:typical}\end{aligned}$$ and for $1 \le i \le N$, let $\cT_i$ denote the $i$th slice of the spectrum given by $$\begin{aligned}
\cT_i = \Big\{ (x,y) : \lambda_i \le h_{\bPP{X|Y}}(x|y) < \lambda_i +
\Delta \Big\}.
\label{e:slice_i}\end{aligned}$$ Note that $\cT_0$ corresponds to the complement of the “typical set.” Finally, let $\cH_l(\cX)$ denote the set of all mappings $h:\cX \to \{0,1\}^l$.
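The slicing defined above amounts to a small index computation; in this sketch the parameter values are illustrative:

```python
# Essential spectrum [lam_min, lam_max) split into N slices of width delta;
# the parameter values are illustrative only.
lam_min, lam_max, delta = 1.0, 4.0, 1.0
N = int((lam_max - lam_min) / delta)          # number of slices, as above

def slice_index(h):
    """Return i in {1,...,N} with lam_i <= h < lam_i + delta, or 0 for T_0."""
    if h < lam_min or h >= lam_max:
        return 0                              # outside the essential spectrum
    return 1 + int((h - lam_min) // delta)    # lam_i = lam_min + (i-1)*delta

print([slice_index(h) for h in (0.5, 1.0, 2.5, 3.99, 4.0)])  # [0, 1, 2, 3, 0]
```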
Our protocol for transmitting $X$ to an observer of $Y$ is described in Protocol \[p:slepian\_wolf\_interactive\]. The lemma below bounds the probability of error for Protocol \[p:slepian\_wolf\_interactive\] when $(x,y)\in \cT_i$, $1\le i \le N$.
- Both parties use $U$ to select $h_1$ uniformly from $\hash_{l}(\cX)$.
- Party 1 sends $\prot_1 = h_1(X)$.
\[t:slepian\_wolf\_interactive\] Protocol \[p:slepian\_wolf\_interactive\] with $l = \lamin+\Delta +
\eta$ sends at most $(\edxgy + \Delta + N+\eta)$ bits when the observations are $(X,Y) \notin \cT_0$ and has probability of error less than $$\bPr{\hat X \neq X} \leq \bP{XY}{\cT_0} + N2^{-\eta}.$$
Note that when $\cT_0$ is chosen to be of small probability, Protocol \[p:slepian\_wolf\_interactive\] sends essentially the same number of bits in the worst case as the Slepian-Wolf protocol.
Interactive protocol for data exchange
--------------------------------------
Returning to the data exchange problem, our protocol for data exchange builds upon Protocol \[p:slepian\_wolf\_interactive\] and uses it to first transmit $X$ to the second party (observing $Y$). Once Party 2 has recovered $X$ correctly, it sends $Y$ to Party 1 without error using, say, Shannon-Fano-Elias coding (e.g., see [@CovTho06 Section 5]); the length of this second communication is $\lceil
h(Y|X) \rceil$ bits. When the accumulated number of bits communicated in the protocol exceeds a prescribed length $l_{\max}$, the parties abort the protocol and declare an error.[^9] Using Theorem \[t:slepian\_wolf\_interactive\], the probability of error of the combined protocol is bounded above as follows.
\[t:interactive\_data\_exchange\] Given $\lambda_{\min},\lambda_{\max}, \Delta,\eta > 0$ and for $N$ in , there exists a protocol for data exchange of length $l_{\max}$ such that $$\begin{aligned}
\lefteqn{ \bPr{X \neq \hat{X} \mbox{ or } Y \neq \hat{Y}} } \\
&\le \bPr{\romn XY +
\Delta + N +\eta+ 1 > l_{\max}} \\
&~~~ + \bP{XY}{\cT_0} + N 2^{-\eta}.\end{aligned}$$
Thus, we attain $\ep$-DE using a protocol of length $$l_{\max} = \lambda_\ep+ \Delta+N + \eta+1,$$ where $\lambda_{\ep}$ is the $\ep$-tail of $\romn XY$. Note that using the noninteractive Slepian-Wolf protocol on both sides will require roughly as many bits of communication as the sum of $\ep$-tails of $h(X|Y)$ and $h(Y|X)$, which, in general, is more than the $\ep$-tail of $h(X|Y) + h(Y|X)$.
Proof of Theorem \[t:slepian\_wolf\_interactive\]
-------------------------------------------------
The theorem follows as a corollary of the following observation.
\[l:slepian\_wolf\_interactive\] For $(x,y)\in \cT_i$, $1\le i \le N$, denoting by $\hat X = \hat
X(x,y)$ the estimate of $x$ at Party 2 at the end of the protocol (with the convention that $\hat X = \emptyset$ if an error is declared), Protocol \[p:slepian\_wolf\_interactive\] sends at most $(l+(i-1)\Delta + i)$ bits and has probability of error bounded above as follows: $$\bPr{\hat X \neq x \mid X=x, Y=y} \leq i2^{\la_{\min}+\Delta - l}.$$
[*Proof.*]{} Since $(x,y)\in \cT_i$, an error occurs if there exists a $\hat{x}\neq x$ such that $(\hat x,y)\in \cT_j$ and $\prot_{2k-1} =
h_{2k-1}(\hat x)$ for $1 \leq k \leq j$ for some $j\leq i$. Therefore, the probability of error is bounded above as $$\begin{aligned}
& \bPr{\hat X \neq x \mid X=x, Y=y} \\ &\leq \sum_{j=1}^{i}\sum_{\hat
x\neq x} \bPr{h_{2k-1}(x) = h_{2k-1}(\hat x),\, \forall\, 1\leq k
\leq j} \\
&~~~\times \mathbbm{1}\big((\hat x, y)\in \cT_j\big) \\
&\leq
\sum_{j=1}^{i}\sum_{\hat x\neq x}
\frac{1}{2^{l+(j-1)\Delta}}\mathbbm{1}\big((\hat x, y)\in \cT_j\big) \\
&= \sum_{j=1}^{i}\frac{1}{2^{l+(j-1)\Delta}}
|\{\hat x \mid (\hat x, y)\in \cT_j\}| \\&\leq i2^{\lamin - l
+\Delta},\end{aligned}$$ where we have used the fact that $\log |\{\hat x \mid (\hat x, y)\in
\cT_j\}| \leq \la_j+\Delta$. Note that the protocol sends $l$ bits in the first transmission, and $\Delta$ bits and $1$-bit feedback in every subsequent transmission. Therefore, no more than $(l+(i-1)\Delta + i)$ bits are sent.
Converse bound {#sec:converse}
==============
Our converse bound, while heuristically simple, is technically involved. We first state the formal result and provide the high-level ideas underlying the proof; the formal proof will be provided later. Our converse proof, too, relies on spectrum slicing to find the part of the spectrum of $\bPP{XY}$ where the protocol communicates a large number of bits. As in the achievability part, we shall focus on the “essential spectrum” of $h(XY)$.
Given $\lamax$, $\lamin$, and $\Delta>0$, let $N$ and the set $\cT_0$ be as defined in Section \[sec:achievability\], with $h_{\bPP{X|Y}}(x|y)$ replaced by $h_{\bPP{XY}}(xy)$ in those definitions.
\[t:converse\_general1\] For $0\le \ep <1$, $0< \eta < 1-\ep$, and parameters $\Delta, N$ as above, the following lower bound on $L_\ep(X,Y)$ holds for every $\gamma>0$: $$\begin{aligned}
L_\ep(X,Y) &\ge \gamma + 3\log\left(\bPP \gamma- \ep - \bP{XY}{\cT_0} -
\frac 1N\right)_+ \\
&~~~+\log(1-2\eta)-\Delta -6\log N - 4\log\frac 1
{\eta}-1,\end{aligned}$$ where $\bPP \gamma := \bP{XY}{\romn{X}{Y} > \gamma}$.
Thus, a protocol attaining $\ep$-DE must communicate roughly as many bits as $\ep$-tail of $\romn XY$.
The main idea is to relate data exchange to secret key agreement, which is done in the following two steps:
1. Given a protocol $\pi$ for $\ep$-DE of length $l$, use the leftover hash lemma to extract an $\ep$-secret key of length roughly $\lamin - l$.
2. The length of the secret key that has been generated is bounded above by $S_\ep(X,Y)$, the maximum possible length of an $\ep$-secret key. Use the conditional independence testing bound in [@TyaWat14; @TyaWat14ii] to further upper bound $S_\ep(X,Y)$, thereby obtaining a lower bound for $l$.
This approach leads to a loss of $\lamax- \lamin$, the length of the spectrum of $\bPP{XY}$. However, since we are lower bounding the worst-case communication complexity, we can divide the spectrum into small slices of length $\Delta$, and show that there is a slice where the communication is high enough by applying the steps above to the conditional distribution given that $(X,Y)$ lie in a given slice. This reduces the loss from $\lamax-\lamin$ to $\Delta$.
Review of two party secret key agreement {#s:secret_keys}
----------------------------------------
Consider two parties with the first and the second party, respectively, observing the random variables $X$ and $Y$. Using an interactive protocol $\Pi$ and their local observations, the parties agree on a secret key. A random variable $K$ constitutes a secret key if the two parties form estimates that agree with $K$ with probability close to $1$ and $K$ is concealed, in effect, from an eavesdropper with access to communication $\Pi$. Formally, let $K_x$ and $K_y$, respectively, be randomized functions of $(X, \Pi)$ and $(Y,
\Pi)$. Such random variables $K_x$ and $K_y$ with common range $\cK$ constitute an [*$\ep$-secret key*]{} ($\ep$-SK) if the following condition is satisfied: $$\begin{aligned}
\frac{1}{2}\left\| \bPP{K_xK_y\Pi} -
\mathrm{P}_{\mathtt{unif}}^{(2)}\times \bPP{\Pi}\right\| &\leq \ep,
\nonumber\end{aligned}$$ where $$\begin{aligned}
\mathrm{P}_{\mathtt{unif}}^{(2)}\left(k_x, k_y\right) =
\frac{\mathbbm{1}(k_x= k_y)}{|\cK|},\end{aligned}$$ and $\| \cdot \|$ is the variational distance. The condition above ensures both reliable [*recovery*]{}, requiring $\bPr{K_x \neq K_y}$ to be small, and information theoretic [*secrecy*]{}, requiring the distribution of $K_x$ (or $K_y$) to be almost independent of the communication $\Pi$ and to be almost uniform. See [@TyaWat14] for a discussion on connections between the combined condition above and the usual separate conditions for recovery and secrecy.
Given $0\le \ep < 1$, the supremum over lengths $\log|\cK|$ of an $\ep$-SK is denoted by $S_\ep(X, Y)$.
A key tool for generating secret keys is the [*leftover hash lemma*]{} [@ImpLevLub89; @RenWol05] which, given a random variable $X$ and an $l$-bit eavesdropper’s observation $Z$, allows us to extract roughly $H_{\min}(\bPP X) - l$ uniformly random bits that are independent of $Z$. Here $H_{\min}$ denotes the [*min-entropy*]{} and is given by $$H_{\min}\left(\bPP X\right) = \inf_x \log \frac{1}{\bP X x}.$$ Formally, let $\cF$ be a [*$2$-universal family*]{} of mappings $f:
\cX\rightarrow \cK$, $i.e.$, for each $x'\neq x$, the family $\cF$ satisfies $$\frac{1}{|\cF|} \sum_{f\in \cF} \mathbbm{1}(f(x) = f(x')) \leq
\frac{1}{|\cK|}.$$
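As a concrete instance of a $2$-universal family, the classical construction $f_{a,b}(x) = ((ax+b) \bmod p) \bmod m$, with $p$ prime and $a \neq 0$, satisfies the collision bound above; the sketch below verifies this exhaustively for toy parameters ($p=7$, $m=3$).

```python
from itertools import product

p, m = 7, 3          # prime modulus and range size (toy parameters)

# Carter-Wegman family: f_{a,b}(x) = ((a*x + b) mod p) mod m, with a != 0.
family = [(a, b) for a, b in product(range(1, p), range(p))]

def f(a, b, x):
    return ((a * x + b) % p) % m

# Verify 2-universality: for every x != x', the fraction of functions in
# the family on which the two points collide is at most 1/m.
for x, x2 in product(range(p), repeat=2):
    if x != x2:
        collisions = sum(f(a, b, x) == f(a, b, x2) for a, b in family)
        assert collisions / len(family) <= 1 / m
```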
\[l:leftover\_hash\] Consider random variables $X$ and $Z$ taking values in a countable set $\cX$ and a finite set $\cZ$, respectively. Let $S$ be a random seed such that $f_S$ is uniformly distributed over a $2$-universal family $\cF$. Then, for $K = f_S(X)$ $$\begin{aligned}
\ttlvrn{\bPP{KZS}}{\bPP{\mathtt{unif}}\bPP{ZS}} \leq
\sqrt{|\cK||\cZ|2^{- H_{\min} \left(\bPP{X}\right)}},\end{aligned}$$ where $\bPP{\mathtt{unif}}$ is the uniform distribution on $\cK$.
The version above is a straightforward modification of the leftover hash lemma in, for instance, [@Ren05] and can be derived in a similar manner (see Appendix B of [@HayTyaWat14ii]).
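For a small alphabet the distance in the lemma can be computed exactly; the following sketch (illustrative pmf; the Carter-Wegman family and its parameters are toy choices, and the eavesdropper observation is trivial, $|\cZ| = 1$) computes the variational distance of $(K, S)$ from uniform-times-seed and checks it against $\sqrt{|\cK|\,2^{-H_{\min}}}$.

```python
import math
from itertools import product

p, m = 7, 2
pmf = {0: 0.3, 1: 0.2, 2: 0.15, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.05}
family = [(a, b) for a, b in product(range(1, p), range(p))]  # 2-universal

def f(a, b, x):
    return ((a * x + b) % p) % m

h_min = -math.log2(max(pmf.values()))   # min-entropy of X, in bits

# Exact joint distribution of (K, S) for K = f_S(X), S uniform on the family.
p_ks = {}
for s in family:
    for x, px in pmf.items():
        k = f(*s, x)
        p_ks[(k, s)] = p_ks.get((k, s), 0.0) + px / len(family)

# Variational distance to (uniform on K) x P_S.
tv = 0.5 * sum(abs(p_ks.get((k, s), 0.0) - (1 / m) / len(family))
               for k in range(m) for s in family)

assert tv <= math.sqrt(m * 2 ** (-h_min))   # leftover hash bound, |Z| = 1
```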
Next, we recall the [*conditional independence testing*]{} upper bound on $S_{\ep}(X, Y)$, which was established in [@TyaWat14; @TyaWat14ii]. In fact, the general upper bound in [@TyaWat14; @TyaWat14ii] is a single-shot upper bound on the secret key length for a multiparty secret key agreement problem with side information at the eavesdropper. Below, we recall a specialization of the general result for the two party case with no side information at the eavesdropper. In order to state the result, we need the following concept from binary hypothesis testing.
Consider a binary hypothesis testing problem with null hypothesis $\mathrm{P}$ and alternative hypothesis $\mathrm{Q}$, where $\mathrm{P}$ and $\mathrm{Q}$ are distributions on the same alphabet ${\cal V}$. Upon observing a value $v\in \cV$, the observer needs to decide if the value was generated by the distribution $\mathrm{P}$ or the distribution $\mathrm{Q}$. To this end, the observer applies a stochastic test $\mathrm{T}$, which is a conditional distribution on $\{0,1\}$ given an observation $v\in \cV$. When $v\in \cV$ is observed, the test $\mathrm{T}$ chooses the null hypothesis with probability $\mathrm{T}(0|v)$ and the alternative hypothesis with probability $\mathrm{T}(1|v) = 1 - \mathrm{T}(0|v)$. For $0\leq \ep<1$, denote by $\beta_\ep(\mathrm{P},\mathrm{Q})$ the infimum of the probability of error of type II given that the probability of error of type I is less than $\ep$, $i.e.$, $$\begin{aligned}
\beta_\ep(\mathrm{P},\mathrm{Q}) := \inf_{\mathrm{T}\, :\,
\mathrm{P}[\mathrm{T}] \ge 1 - \ep} \mathrm{Q}[\mathrm{T}],
\label{e:beta-epsilon}\end{aligned}$$ where $$\begin{aligned}
\mathrm{P}[\mathrm{T}] &= \sum_v \mathrm{P}(v) \mathrm{T}(0|v),
\\ \mathrm{Q}[\mathrm{T}] &= \sum_v \mathrm{Q}(v) \mathrm{T}(0|v).\end{aligned}$$
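By the Neyman-Pearson lemma, the optimal test accepts the null on points of largest likelihood ratio $\mathrm{P}(v)/\mathrm{Q}(v)$ and randomizes on the boundary point, so $\beta_\ep(\mathrm{P},\mathrm{Q})$ can be computed exactly for small alphabets; a minimal sketch with toy distributions:

```python
# Toy distributions on a four-letter alphabet (illustrative values)
P = [0.4, 0.3, 0.2, 0.1]
Q = [0.1, 0.2, 0.3, 0.4]

def beta(eps, P, Q):
    """Optimal type-II error: accept the null on points of largest P/Q
    first (Neyman-Pearson), randomizing on the last point so that the
    type-I constraint P[T] >= 1 - eps is met with equality."""
    order = sorted(range(len(P)), key=lambda v: P[v] / Q[v], reverse=True)
    need, b = 1.0 - eps, 0.0
    for v in order:
        t = min(1.0, need / P[v]) if P[v] > 0 else 0.0
        b += t * Q[v]
        need -= t * P[v]
        if need <= 1e-12:
            break
    return b
```

For these toy distributions, $\beta_{0.2}(\mathrm{P},\mathrm{Q}) = 0.45$: the test accepts the null fully on the first two points and with probability $1/2$ on the third.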
The following upper bound for $S_\ep(X,Y)$ was established in [@TyaWat14; @TyaWat14ii].
\[theorem:one-shot-converse-source-model\] Given $0\leq \ep <1$, $0<\eta<1-\ep$, the following bound holds: $$\begin{aligned}
S_{\ep}\left(X, Y\right) \le -\log
\beta_{\ep+\eta}\big(\bPP{XY},\mathrm{Q}_{X}\mathrm{Q}_{Y}\big) + 2
\log(1/\eta), \nonumber\end{aligned}$$ for all distributions $\bQQ X$ and $\bQQ Y$ on $\cX$ and $\cY$, respectively.
We close by noting a further upper bound for $\beta_\ep(\bPP{},
\bQQ{})$, which is easy to derive ($cf.$ [@Pol10]).
\[l:bound\_beta\_epsilon\] For every $0\leq \ep \leq 1$ and $\lambda$, $$-\log \beta_\ep(\bPP{}, \bQQ{}) \leq \lambda -
\log\left(\mathrm{P}\left(\log\frac{ \bP {} X}{\bQ{} X} <
\lambda\right) - \ep\right)_+,$$ where $(x)_+ = \max\{0,x\}$.
Converse bound for almost uniform distribution
----------------------------------------------
First, we consider a converse bound under the almost uniformity assumption. Suppose that there exist $\lamin$ and $\lamax$ such that $$\begin{aligned}
&\lamin \le -\log \bPP {XY}(x,y) \le \lamax,
\nonumber
\\
&\hspace{4cm} \forall (x,y) \in
\mathrm{supp}(\bPP{XY}),
\label{e:uniformity_assumption}\end{aligned}$$ where $\mathrm{supp}(\bPP{XY})$ denotes the support of $\bPP{XY}$. We call such a distribution $\bPP{XY}$ an almost uniform distribution with margin $\Delta=(\lamax- \lamin)$.
\[t:converse\_AU\] Let $\bPP{XY}$ be almost uniform with margin $\Delta$. Given $0\le \ep <1$, for every $0< \eta< 1-\ep$, and all distributions $\bQQ X$ and $\bQQ Y$, it holds that $$\begin{aligned}
\lefteqn{ L_\ep(X,Y) } \\
&\ge \gamma + \log\left(\bPr{-\log \frac{\bP
{XY}{X,Y}^2}{\bQ{X}{X}\bQ{Y}{Y}} \geq \gamma}- \ep -
2\eta\right)_+ \\
&~~~ -\Delta - 4\log\frac 1 {\eta}-1.\end{aligned}$$
If $\Delta\approx 0$ (the almost uniform case), the bound above yields Result \[res:lower\_bound\] upon choosing $\bQQ X = \bPP X$ and $\bQQ
Y = \bPP Y$.
[*Proof.*]{} Given a protocol $\pi$ of length $l$ that attains $\ep$-DE, using Lemma \[l:leftover\_hash\] we can generate an $(\ep+\eta)$-SK that is almost independent of $\Pi$ and takes values in $\cK$ with $$\log |\cK| \geq \lamin - l - 2\log(1/\eta)-1.$$ Also, by Theorem \[theorem:one-shot-converse-source-model\] $$\log|\cK| \leq -\log \beta_{\ep+2\eta}(\bPP {XY}, \bQQ{X}\bQQ{Y})
+2\log(1/\eta),$$ which along with the inequality above and Lemma \[l:bound\_beta\_epsilon\] yields $$\begin{aligned}
l &\geq \lamin + \log\left(\bPr{\log\frac{ \bP {XY}{X, Y}}{\bQ{X}
{X}\bQ{Y}{Y}} < \lambda} - \ep-2\eta\right)_+ \\
&~~~ - \lambda
-4\log(1/\eta)-1.\end{aligned}$$ The claimed bound follows upon choosing $\lambda = \lamax-\gamma$ and using the almost uniformity assumption.
Converse bound for all distributions
------------------------------------
The shortcoming of Theorem \[t:converse\_AU\] is the $\Delta$-loss, which is negligible only if $\lamax\approx \lamin$. To circumvent this loss, we divide the spectrum of $\bPP {XY}$ into slices such that, conditioned on any slice, the distribution is almost uniform with a small margin $\Delta$. To lower bound the worst-case communication complexity of a given protocol, we identify a particular slice where appropriately many bits are communicated; the required slice is selected using Lemma \[l:good\_index\] below.
Given $\lamax$, $\lamin$, and $\Delta>0$, let $N$, $\cT_0$, $\lambda_i$, and $\cT_i$ be as defined earlier, with $h_{\bPP{X|Y}}(x|y)$ replaced by $h_{\bPP{XY}}(xy)$ in those definitions. Let random variable $J$ take the value $j$ when $\{(X,Y)
\in \cT_j\}$. For a protocol $\Pi$ attaining $\ep$-DE, denote $$\begin{aligned}
\cE_{\mathtt{correct}} &:= \{X= \hat X, Y = \hat Y\},
\nonumber
\\
\cE_\gamma
&:= \{\romn XY\ge \gamma\},
\label{e:E_gamma_def}
\\
\cE_j &:= \cE_{\mathtt{correct}} \cap
\cT_0^c\cap \cE_\gamma \cap \{J=j\},\quad 1\leq j \leq N,
\nonumber
\\ \bPP
\gamma &:= \bP {XY}{\cE_\gamma}.
\nonumber\end{aligned}$$
\[l:good\_index\] There exists an index $1\leq j\leq N$ such that $\bP J j> 1/N^2$ and $$\bP {XY\mid J}{\cE_j\mid j}\ge \left(\bPP \gamma -\ep - \bP{XY}{\cT_0}
- \frac 1N\right).$$
[*Proof.*]{} Let $\cJ_1$ be the set of indices $1\leq j \leq N$ such that $\bP J j >1/N^2$, and let $\cJ_2 = \{1, ..., N\}\setminus
\cJ_1$. Note that $\bP J{\cJ_2} \leq 1/N$. Therefore, $$\begin{aligned}
\bPP \gamma - \ep - \bP{XY}{\cT_0} &\le \bPr
{\cE_{\mathtt{correct}}\cap \cT_0^c\cap \cE_\gamma} \\ &\le
\sum_{j \in \cJ_1}\bP J j\bP{XY|J}{\cE_j\mid j} + \bP J {\cJ_2}
\\ &\le \max_{j \in \cJ_1}\bP {XY|J}{\cE_j \mid j} + \frac 1N.\end{aligned}$$ Thus, the maximizing $j\in \cJ_1$ on the right satisfies the claimed properties.
We now state our main converse bound.
\[t:converse\_general\] For $0\le \ep <1$, $0< \eta < 1-\ep$, and parameters $\Delta, N$ as above, the following lower bound on $L_\ep(X,Y)$ holds: $$\begin{aligned}
L_\ep(X,Y) &\ge \gamma + 3\log\left(\bPP \gamma- \ep - \bP{XY}{\cT_0} -
\frac 1N\right)_+ \\
&~~~ +\log(1-2\eta)-\Delta -6\log N - 4\log\frac 1
{\eta}-1.\end{aligned}$$
[*Proof.*]{} Let $j$ satisfy the properties stated in Lemma \[l:good\_index\]. The basic idea is to apply Theorem \[t:converse\_AU\] to $\bPP{XY|\cE_j}$, where $\bPP{XY\mid \cE_j}$ denotes the conditional distributions on $X,Y$ given the event $\cE_j$.
First, we have $$\begin{aligned}
\label{eq:lower-PXY}
\bP{XY|\cE_j}{x,y} \ge \bP{XY}{x,y}.\end{aligned}$$ Furthermore, denoting $\alpha = \bPP \gamma - \ep -\bP{XY}{\cT_0} -
1/N$ and noting $\bP{J}{j} > 1/N^2$, we have for all $(x,y) \in \cE_j$ that $$\begin{aligned}
\bP{XY|\cE_j}{x,y} &\le \frac {1}{\alpha}\bP {XY|J=j}{x,y} \\ &\le
\frac{N^2}{\alpha} \bP{XY}{x,y},
\label{eq:upper-PXY} \end{aligned}$$ where $\bPP{XY\mid J=j}$ denotes the conditional distributions on $X,Y$ given $\{J=j\}$. Thus, the lower and upper bounds above together imply, for all $(x,y)\in \cE_j$, $$\lambda_j + \log\alpha - 2 \log N \le -\log \bP{XY|\cE_j}{x,y} \le
\lambda_j+\Delta,$$ $i.e.$, $\bPP{XY|\cE_j}$ is almost uniform with margin $\Delta-\log \alpha + 2 \log N$. Also, note that the lower bound above implies $$\begin{aligned}
&\bP {XY|\cE_j}{-\log\frac{\bPP{XY|\cE_j}(X,Y)^2}{\bP X X\bP Y Y} \ge
\gamma + 2\log \alpha -4\log N}
\\
&\geq \bP {XY|\cE_j}{-\log\frac{\bP{XY}{X,Y}^2}{\bP X X\bP Y Y} \ge
\gamma}
\\
&= \bP {XY\mid
\cE_j}{\cE_\gamma}
\\
&= 1,\end{aligned}$$ where the final equality holds by the definition of $\cE_\gamma$ in . Moreover, $$\bP{XY|\cE_j}{X=\hat X, Y=\hat Y} = 1.$$ Thus, the proof is completed by applying Theorem \[t:converse\_AU\] to $\bPP{XY|\cE_j}$ with $\bQQ{X} = \bPP X$ and $\bQQ Y = \bPP Y$, and $\Delta-\log \alpha + 2 \log N$ in place of $\Delta$.
Converse bound for simple communication protocol {#subsec:converse-simple-protocol}
------------------------------------------------
We close by noting a lower bound for the length of communication when we restrict to simple communication. For simplicity, assume that the joint distribution $\bPP{XY}$ is indecomposable, $i.e.$, the [*maximum common function*]{} of $X$ and $Y$ is a constant (see [@GacKor73]) and the parties cannot agree on even a single bit without communicating ($cf.$ [@Wit75]). The following bound holds by a standard converse argument using the information spectrum method ($cf.$ [@Han03 Lemma 7.2.2]).
\[proposition:simple\] For $0 \le \ep < 1$, we have $$\begin{aligned}
&L_\ep^{\mathrm{s}}(X,Y) \\
&\ge \inf\bigg\{ l_1 + l_2 : \forall \delta >
0, \\
&~\mathbb{P}\Big( h(X|Y) > l_1 +\delta \mbox{ or } h(Y|X) > l_2 +
\delta \Big) \le \ep + 2\cdot 2^{-\delta} \bigg\}.\end{aligned}$$
Since randomization (local or shared) does not help in improving the length of communication ($cf.$ [@KushilevitzNisan97 Chapter 3]), we can restrict to deterministic protocols. Then, since $\bPP{XY}$ is indecomposable, both parties have to predetermine the lengths of the messages they send; let $l_1$ and $l_2$, respectively, be the lengths of the messages sent by the first and the second party. For $\delta > 0$, let $$\begin{aligned}
\cT_1 &:= \Big\{ (x,y): -\log \bP{X|Y}{x|y} \le l_1 + \delta \Big\},
\\ \cT_2 &:= \Big\{ (x,y): - \log \bP{Y|X}{y|x} \le l_2 + \delta
\Big\},\end{aligned}$$ and $\cT := \cT_1 \cap \cT_2$. Let $\cA_1$ and $\cA_2$ be the set of all $(x,y)$ such that party 2 and party 1 correctly recover $x$ and $y$, respectively, and let $\cA := \cA_1 \cap \cA_2$. Then, for any simple communication protocol that attains $\ep$-DE, we have $$\begin{aligned}
\bP{XY}{\cT^c} &= \bP{XY}{\cT^c \cap \cA^c} + \bP{XY}{\cT^c \cap \cA}
\\ &\le \bP{XY}{\cA^c} + \bP{XY}{\cT_1^c \cap \cA} + \bP{XY}{\cT_2^c
\cap \cA} \\ &\le \ep + \bP{XY}{\cT_1^c \cap \cA_1} +
\bP{XY}{\cT_2^c \cap \cA_2} \\ &\le \ep + 2 \cdot 2^{-\delta},\end{aligned}$$ where the last inequality follows by a standard argument ($cf.$ [@Han03 Lemma 7.2.2]) as follows: $$\begin{aligned}
\bP{XY}{\cT_1^c \cap \cA_1} &\leq \sum_{y}\bP Y
y\bP{X|Y}{\cT_1^c\cap \cA_1| y}
\\
&\leq \sum_{y}\bP Y y |\{x: (x,y)\in \cA_1\}| 2^{-l_1 - \delta}
\\
&\leq \sum_{y}\bP Y y\, 2^{l_1}\, 2^{-l_1 - \delta}
\\
&\leq \sum_{y}\bP Y y 2^{ - \delta}
\\
&= 2^{-\delta},\end{aligned}$$ and similarly for $\bP{XY}{\cT_2^c \cap \cA_2}$; the desired bound follows.
General sources {#s:general_sources}
===============
While the best rate of communication required for two parties to exchange their data is known [@CsiNar04] and can be attained by simple (noninteractive) Slepian-Wolf compression on both sides, the problem remains unexplored for general sources. In fact, the answer is completely different in general, and simple Slepian-Wolf compression is suboptimal.
Formally, let $(X_n, Y_n)$ with joint distribution[^10] $\bPP{X_nY_n}$ be a sequence of sources. We need the following concepts from the information spectrum method; see [@Han03] for a detailed account. For random variables $(\bX,\bY) =
\{(X_n, Y_n)\}_{n=1}^\infty$, the [*inf entropy rate*]{} $\underline{H}(\bX \bY)$ and the [*sup entropy rate*]{} $\overline{H}(\bX \bY)$ are defined as follows: $$\begin{aligned}
\underline{H}(\bX \bY) &= \sup\left\{\alpha \mid \lim_{n\rightarrow
\infty} \bPr{\frac{1}{n}h(X_nY_n) < \alpha} = 0\right\},
\\ \overline{H}(\bX \bY) &= \inf\left\{\alpha \mid \lim_{n\rightarrow
\infty} \bPr{\frac{1}{n}h(X_nY_n) > \alpha} = 0\right\};\end{aligned}$$ the [*sup-conditional entropy rate*]{} $\overline{H}(\bX| \bY)$ is defined analogously by replacing $h(X_nY_n)$ with $h(X_n| Y_n)$. To state our result, we also need another quantity defined by a limit-superior in probability, namely the [*sup sum conditional entropy rate*]{}, given by $$\begin{aligned}
&\Romn \\ &= \inf\left\{\alpha \mid \lim_{n\rightarrow \infty}
\bPr{\frac{1}{n} h(X_n \triangle Y_n)> \alpha} = 0\right\}.\end{aligned}$$
The result below characterizes $R^*(\bX, \bY)$ (see Definition \[d:R\_star\]).
\[t:communication\_omniscience\_general\] For a sequence of sources $(\bX,\bY) = \{(X_n, Y_n)\}_{n=1}^\infty$, $$R^*(\bX,\bY) = \Romn.$$
[*Proof.*]{} The claim follows from Theorems \[t:interactive\_data\_exchange\] and \[t:converse\_general\] on choosing the spectrum slicing parameters $\lamin, \lamax$, and $\Delta$ appropriately.
Specifically, using Theorem \[t:interactive\_data\_exchange\] with $$\begin{aligned}
\lamin &= n(\underline{H}(\bX, \bY) -\delta), \\ \lamax &=
n(\overline{H}(\bX, \bY) +\delta), \\ \Delta &= \sqrt{\lamax- \lamin} = N, \\ \eta &= \Delta, \\ l_{\max} &= n (\Romn+\delta) + 3\Delta +1
\\ & = n(\Romn+\delta) + O(\sqrt{n}),\end{aligned}$$ where $\delta>0$ is arbitrary, we get a communication protocol of rate $\Romn +\delta + O(n^{-1/2})$ attaining $\ep_n$-DE with $ \ep_n
\rightarrow 0$. Since $\delta > 0$ is arbitrary, $R^*(\bX,\bY) \leq
\Romn$.
For the other direction, consider a sequence of protocols attaining $\ep_n$-DE with $\ep_n \rightarrow 0$, and let $$\begin{aligned}
\lamin &= n(\underline{H}(\bX, \bY) -\Delta), \\ \lamax &=
n(\overline{H}(\bX, \bY) +\Delta),\end{aligned}$$ and so, $N = O(n)$. Using Theorem \[t:converse\_general\] with $$\gamma = n(\Romn - \delta)$$ for arbitrarily fixed $\delta > 0$, we get for $n$ sufficiently large that $$\begin{aligned}
L_{\ep_n}(X_n, Y_n) &\geq n(\Romn - \delta) + o(n).\end{aligned}$$ Since $\delta > 0$ is arbitrary, the proof is complete.
Strong converse and second-order asymptotics {#s:strong_converse}
============================================
We now turn to the case of IID observations $(X^n, Y^n)$ and establish the second-order asymptotic term in $L_{\ep}(X^n, Y^n)$.
\[t:second\_order\] For every $0< \ep < 1 $, $$\begin{aligned}
L_{\ep}\left(X^n, Y^n\right) = n H(X \triangle Y) + \sqrt{n V}
Q^{-1}(\ep) + o(\sqrt{n}).\end{aligned}$$
[*Proof.*]{} As before, we only need to choose appropriate parameters in Theorems \[t:interactive\_data\_exchange\] and \[t:converse\_general\]. Let $T$ denote the third central moment of the random variable $\romn XY$.
For the achievability part, note that for IID random variables $(X^n,Y^n)$ the spectrum of $P_{X^nY^n}$ has width $O(\sqrt{n})$. Therefore, the parameters $\Delta$ and $N$ can be $O(n^{1/4})$. Specifically, by standard measure concentration bounds (for bounded random variables), for every $\delta>0$ there exists a constant $c$ such that with $\lamax = nH(XY) + c\sqrt n$ and $\lamin =
nH(XY) -c\sqrt n$, $$\bPr{(X^n, Y^n)\in \cT_0} \leq \delta.$$ For $$\begin{aligned}
\lambda_n &= n H(X \triangle Y) + \sqrt{n V} Q^{-1}\left(\ep - 2\delta
- \frac{T^3}{2V^{3/2}\sqrt{n}}\right),\end{aligned}$$ choosing $\Delta = N = \eta = \sqrt{2c}n^{1/4}$, and $l_{\max} =
\lambda_n + 3\Delta + 1$ in Theorem \[t:interactive\_data\_exchange\], we get a protocol of length $l_{\max}$ satisfying $$\begin{aligned}
\bPr{X \neq \hat X, \text{ or } Y \neq \hat Y} \leq \bPr{\sum_{i=1}^n
h(X_i \triangle Y_i) > \lambda_n} + 2\delta,\end{aligned}$$ for $n$ sufficiently large. Thus, the Berry-Esséen theorem ($cf.$ [@Fel71]) and the observation above give a protocol of length $l_{\max}$ attaining $\ep$-DE. Therefore, using the Taylor approximation of $Q(\cdot)$ yields the achievability of the claimed protocol length; we skip the details of this by-now-standard argument (see, for instance, [@PolPooVer10]).
Similarly, the converse follows by Theorem \[t:converse\_general\] and the Berry-Esséen theorem upon choosing $\lamax$, $\lamin$, and $N$ as in the proof of converse part of Theorem \[t:communication\_omniscience\_general\] when $\lambda_n$ is chosen to be $$\begin{aligned}
\lambda_n &= n H(X \triangle Y) + \sqrt{n V} Q^{-1}\left(\ep -
2\frac1N - \frac{T^3}{2V^{3/2}\sqrt{n}}\right) \\ &= n H(X \triangle
Y) + \sqrt{n V} Q^{-1}\left(\ep\right) + O(\log n),\end{aligned}$$ where the final equality is by the Taylor approximation of $Q(\cdot)$.
In the previous section, we saw that interaction is necessary to attain the optimal first order asymptotic term in $L_\ep(X_n, Y_n)$ for a mixture of IID random variables. In fact, even for IID random variables interaction is needed to attain the correct second order asymptotic term in $L_\ep(X^n, Y^n)$, as shown by the following example.
\[ex:second\_order\_suboptimal\] Consider random variables $X$ and $Y$ with an indecomposable joint distribution $\bPP{XY}$ such that the matrix $$\begin{aligned}
\mathbf{V} = \mathrm{Cov}([-\log \bP{X|Y}{X|Y}, -\log \bP{Y|X}{Y|X}])\end{aligned}$$ is nonsingular. For IID random variables $(X^n,Y^n)$ with common distribution $\bPP{XY}$, using Proposition \[proposition:simple\] and a multidimensional Berry-Esséen theorem ($cf.$ [@TanKos14]), we get that the second-order asymptotic term for the minimum length of simple communication for $\ep$-DE is given by[^11] $$\begin{aligned}
L_\ep^{\mathrm{s}}(X^n,Y^n) = n H(X \triangle Y)+ \sqrt{n} D_\ep +
o(\sqrt{n}),\end{aligned}$$ where $$\begin{aligned}
D_\ep := \inf\Big\{ r_1 + r_2 : \bPr{ Z_1\leq r_1, Z_2\leq r_2} \ge 1 - \ep \Big\},\end{aligned}$$ for Gaussian vector $\bZ = [Z_1,Z_2]$ with mean $[0,0]$ and covariance matrix ${\bf V}$. Since $\mathbf{V}$ is nonsingular,[^12] we have $$\begin{aligned}
\sqrt{V} Q^{-1}(\ep) &= \inf\Big\{ r: \mathbb{P}\Big(Z_1 + Z_2 \le r
\Big) \ge 1 - \ep \Big\} \\ &< D_\ep.\end{aligned}$$ Therefore, $L_\ep(X^n , Y^n)$ has strictly smaller second order term than $L_\ep^s(X^n, Y^n)$, and interaction is necessary for attaining the optimal second order term in $L_\ep(X^n, Y^n)$.
Discussion
==========
We have presented an interactive data exchange protocol and a converse bound which shows that, in a single-shot setup, the parties can exchange data using roughly $h(X \triangle Y)$ bits when they observe $X$ and $Y$. Our analysis is based on the information spectrum approach. In particular, we extend this approach to enable handling of interactive communication. A key step is the [*spectrum slicing*]{} technique which allows us to split a nonuniform distribution into almost uniform “spectrum slices”. Another distinguishing feature of this work is our converse technique which is based on extracting a secret key from the exchanged data and using an upper bound for the rate of this secret key. In effect, this falls under the broader umbrella of the [*common randomness decomposition*]{} methodology presented in [@TyaThesis] that studies a distributed computing problem by dividing the resulting common randomness into different independent components with operational significance. As a consequence, we obtain both the optimal rate for data exchange for general sources as well as the precise second-order asymptotic term for IID observations (which in turn implies a strong converse). Interestingly, none of these optimal results can be obtained by simple communication and interaction is necessary, in general. Note that our proposed scheme uses $O(n^{1/4})$ rounds of interaction; it remains open if fewer rounds of interaction will suffice.
Another asymptotic regime, which was not considered in this paper, is the error exponent regime where we seek to characterize the largest possible rate of exponential decay of error probability with blocklength for IID observations. Specifically, denoting by $\bP{\mathtt{err}}{l|X,Y}$ the least probability of error $\ep$ that can be attained for data exchange by communicating less than $l$ bits, $i.e.$, $$\begin{aligned}
\bP{\mathtt{err}}{l|X,Y} := \inf\{ \ep : L_\ep(X,Y) \le l \},\end{aligned}$$ we seek to characterize the limit of $$- \frac{1}{n} \log \bP{\mathtt{err}}{2^{nR}|X^n,Y^n}.$$ The following result is obtained using a slight modification of our single-shot protocol for data exchange where the slices of the spectrum $\cT_i$ are replaced with type classes and the decoder is replaced by a special case of the $\alpha$-decoder introduced in [@Csi82]. For a fixed rate $R \geq 0$, our modified protocol enables data exchange, with large probability, for every $(\bx, \by)$ with joint type $\bPP{\overline{X}\,\overline{Y}}$ such that (roughly) $$R> H(\overline X\triangle \overline Y).$$ The converse part follows from the strong converse of Result \[result:strong-converse\], together with a standard measure change argument ($cf.$ [@CsiKor11]). The formal proof is given in Appendix A.
\[result:exponent\] For a given rate $R > H(X \triangle Y)$, define $$E_{\mathtt{r}}(R) := \min_{\bQQ{\overline{X}\,\overline{Y}}} \left[
D(\bQQ{\overline{X}\,\overline{Y}} \| \bPP{XY}) + | R -
H(\overline{X} \triangle \overline{Y})|^+ \right]$$ and $$E_{\mathtt{sp}}(R) := \inf_{\bQQ{\overline{X}\,\overline{Y}} \in
\cQ(R)} D(\bQQ{\overline{X}\,\overline{Y}} \| \bPP{XY}),$$ where $|a|^+ = \max\{a,0\}$ and $$\begin{aligned}
\cQ(R) := \left\{ \bQQ{\overline{X}\,\overline{Y}} : R <
H(\overline{X} \triangle \overline{Y}) \right\}.\end{aligned}$$ Then, it holds that $$\begin{aligned}
\liminf_{n\to\infty} - \frac{1}{n} \log
\bP{\mathtt{err}}{2^{nR}|X^n,Y^n} \ge E_{\mathtt{r}}(R) \nonumber\end{aligned}$$ and that $$\begin{aligned}
\limsup_{n\to\infty} - \frac{1}{n} \log
\bP{\mathtt{err}}{2^{nR}|X^n,Y^n} &\le E_{\mathtt{sp}}(R). \nonumber\end{aligned}$$
$E_{\mathtt{r}}(R)$ and $E_{\mathtt{sp}}(R)$, termed the [*random coding exponent*]{} and the [*sphere-packing exponent*]{}, may not match in general. However, when $R$ is sufficiently close to $H(X
\triangle Y)$, the two bounds can be shown to coincide. In fact, in Appendix B we exhibit an example where the optimal error exponent attained by interactive protocols is strictly larger than that attained by simple communication. Thus, in the error exponent regime, too, interaction is strictly necessary.
Achievability Proof of Result \[result:exponent\]
-------------------------------------------------
In this appendix, we consider the error exponent and prove Result \[result:exponent\]. We use the method of types. The type of a sequence $\mathbf{x}$ is denoted by $\bPP{\mathbf{x}}$. For a given type $\bPP{\overline{X}}$, the set of all sequences of type $\bPP{\overline{X}}$ is denoted by $\cT_{\overline{X}}^n$. The set of all types on alphabet $\cX$ is denoted by $\cP_n(\cX)$. We use similar notations for joint types and conditional types. For a pair $(\mathbf{x},\mathbf{y})$ with joint type $\bPP{\overline{X}\,\overline{Y}}$, we denote $H(\mathbf{x}
\triangle \mathbf{y}) = H(\overline{X} \triangle \overline{Y})$. We refer the reader to [@CsiKor11] for basic results on the method of types.
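In this notation, $\bPP{\mathbf{x}\mathbf{y}}$ is simply the empirical distribution of the pairs $(x_i, y_i)$; a minimal sketch (hypothetical sequences):

```python
from collections import Counter

def joint_type(xs, ys):
    """Empirical joint type P_{xy} of a pair of equal-length sequences."""
    assert len(xs) == len(ys)
    n = len(xs)
    return {xy: c / n for xy, c in Counter(zip(xs, ys)).items()}

t = joint_type("aabba", "01001")   # toy sequences of length n = 5
```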
Fix $R > 0$ as the rate of communication exchanged (by both parties), excluding the rate contributed by the ACK-NACK messages. We consider an $r$-round protocol, where $r = \lceil \frac{R}{\Delta} \rceil$ for a fixed $\Delta > 0$. Let $R_i = i \Delta$ for $i =1 ,\ldots,r$. The basic idea of the protocol is the same as that of our single-shot protocol, $i.e.$, we increment the hash size in steps. However, in the error exponent regime, to reduce the contribution of the “binning error” to the error exponent, we need a more carefully designed protocol.
For a given joint type $\bPP{\overline{X}\,\overline{Y}}$, the key modification we make is to delay the start of communication by Party 2 (which, in the single-shot protocol, started once $R_i >
H(\overline{X} | \overline{Y})$ was satisfied). Heuristically, once Party 2 can decode $\mathbf{x}$ correctly, he can send $\mathbf{y}$ to Party 1 without error by using roughly[^13] $n H(\overline{Y}|\overline{X})$ bits, where $\bPP{\overline{X}\,\overline{Y}} = \bPP{\mathbf{x}\mathbf{y}}$. Thus, the budget Party 1 can use is $R - H(\overline{Y}|\overline{X})$, which is larger than $H(\overline{X}|\overline{Y})$ when $R >
H(\overline{X} \triangle \overline{Y})$. Therefore, allowing Party 1 to communicate more before Party 2 starts may reduce the binning error probability.
Motivated by this reason, we assign the timing of decoding to each joint type as follows: $$\begin{aligned}
\phi(\bPP{\overline{X}\,\overline{Y}})
&:= \min\left\{ i : 1 \le i \le r, R_i \ge R - H(\overline{Y}|\overline{X}) - \Delta \right\} \\
&= \max\left\{ i : 1 \le i \le r, R_i < R - H(\overline{Y}|\overline{X}) \right\}\end{aligned}$$ if $R - H(\overline{Y}|\overline{X}) - \Delta > 0$, and $\phi(\bPP{\overline{X}\,\overline{Y}}) = 0$ if $R - H(\overline{Y}|\overline{X}) - \Delta \le 0$.
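The timing assignment $\phi$ is straightforward to transcribe; the sketch below (toy values of $R$ and $\Delta$) also checks that the $\min$ and $\max$ forms in the definition coincide.

```python
import math

R, Delta = 1.2, 0.1              # toy rate and step size (assumed values)
r = math.ceil(R / Delta)
R_ = [i * Delta for i in range(r + 1)]   # R_i = i * Delta, i = 1..r

def phi(H_cond):
    """Timing index for a joint type with conditional entropy H(Y|X) = H_cond."""
    if R - H_cond - Delta <= 0:
        return 0
    # min{ i : 1 <= i <= r, R_i >= R - H_cond - Delta }
    return min(i for i in range(1, r + 1) if R_[i] >= R - H_cond - Delta)

def phi_alt(H_cond):
    if R - H_cond - Delta <= 0:
        return 0
    # max{ i : 1 <= i <= r, R_i < R - H_cond }
    return max(i for i in range(1, r + 1) if R_[i] < R - H_cond)

# The two forms in the definition coincide (values chosen to avoid exact ties):
for H in [0.0, 0.15, 0.55, 0.83, 1.05]:
    assert phi(H) == phi_alt(H)
```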
For given hash functions $\mathbf{h} = (h_1,\ldots,h_r)$ with $h_i:\cX^n \to \{1,\ldots, 2^{\lceil n \Delta \rceil} \}$, let $N_{\mathbf{h}}(\overline{X}\,\hat{X}\,\overline{Y})$ denote, for each joint type $\bPP{\overline{X}\,\hat{X}\,\overline{Y}}$, the number of pairs $(\mathbf{x},\mathbf{y}) \in \cT_{\overline{X}\,\overline{Y}}^n$ such that for some $\hat{\mathbf{x}} \neq \mathbf{x}$ with $\bPP{\mathbf{x}\hat{\mathbf{x}}\mathbf{y}} = \bPP{\overline{X}\,\hat{X}\,\overline{Y}}$, the relations $$\begin{aligned}
h_i(\mathbf{x}) = h_i(\hat{\mathbf{x}}),~i =1,\ldots,\phi(\bPP{\hat{X}\,\overline{Y}})\end{aligned}$$ hold. The next result is a slight modification of a lemma in [@Csi82 Section 3]; the proof is almost the same and is omitted.
There exist hash functions $\mathbf{h} = (h_1,\ldots,h_r)$ such that for every joint type $\bPP{\overline{X}\,\hat{X}\,\overline{Y}}$ with $\phi(\bPP{\hat{X}\,\overline{Y}}) \neq 0$, the following bound holds: $$\begin{aligned}
\frac{N_{\mathbf{h}}(\overline{X}\,\hat{X}\,\overline{Y})}{|\cT_{\overline{X}\,\overline{Y}}^n|} \le \exp\left\{ - n ( R_{\phi(\bPP{\hat{X}\,\overline{Y}})} - H(\hat{X} | \overline{X}\,\overline{Y}) - \delta_n) \right\},\end{aligned}$$ where $$\begin{aligned}
\delta_n = |\cX|^2 |\cY| \frac{\log (n+1)}{n}.\end{aligned}$$
For the decoder, we use the [*minimum sum conditional entropy decoder*]{}, which is a kind of $\alpha$-decoder introduced in [@Csi82].
Our protocol is described in Protocol \[p:interactive-dx-exponent\].
Party 2 sends the joint type $\bPP{\overline{X}\,\overline{Y}}$ of $(\hat{X}^n,Y^n) = (\mathbf{x},\mathbf{y})$ and sends the index of $\mathbf{y}$ in $\cT_{\overline{Y}|\overline{X}}^n(\mathbf{x})$.
The achievability part of Result \[result:exponent\] can be seen as follows. Fix a joint type $\bPP{\overline{X}\,\overline{Y}}$. If $\phi(\bPP{\overline{X}\,\overline{Y}}) = 0$, then an error occurs whenever $(\mathbf{x},\mathbf{y}) \in
\cT_{\overline{X}\,\overline{Y}}^n$. We also note that $R - H(\overline{Y}|\overline{X}) - \Delta \le 0$ implies $|R - H(\overline{X} \triangle \overline{Y}) - \Delta |^+ = 0$. Thus, the probability of this kind of error is upper bounded by $$\begin{aligned}
&\sum_{\bPP{\overline{X}\,\overline{Y}} \in \cP_n(\cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) = 0}
\exp\{ - n [ D(\bPP{\overline{X}\,\overline{Y}} \| \bPP{XY}) \nonumber \\
&\hspace{35mm} + | R - H(\overline{X} \triangle \overline{Y}) - \Delta|^+ ]\}.
\label{eq:error-first-kind}\end{aligned}$$ Next, consider the case when $\phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1$. For given conditional type $\bPP{\hat{X}|\overline{X}\,\overline{Y}}$, a sequence $\hat{\mathbf{x}}$ with $(\mathbf{x},\hat{\mathbf{x}},\mathbf{y}) \in \cT^n_{\overline{X}\,\hat{X}\,\overline{Y}}$ causes an error when
1. $\phi(\bPP{\hat{X}\,\overline{Y}}) \le \phi(\bPP{\overline{X}\,\overline{Y}})$,
2. $$\begin{aligned}
h_i(\hat{\mathbf{x}}) = h_i(\mathbf{x}),~i=1,\ldots,\phi(\bPP{\hat{X}\,\overline{Y}})\end{aligned}$$
3. $$\begin{aligned}
H(\hat{X} \triangle \overline{Y}) \le H(\overline{X} \triangle \overline{Y}).\end{aligned}$$
Also note that $\phi(P_{\overline{X}\,\overline{Y}}) = i$ implies $$\begin{aligned}
H(\overline{Y}|\overline{X}) < R - R_i,\end{aligned}$$ i.e., once $\mathbf{x}$ is recovered correctly, $\mathbf{y}$ can be sent without an error. Thus, the error probability of this kind is upper bounded by $$\begin{aligned}
& \sum_{ P_{\overline{X}\,\overline{Y} }\in \cP_n(\cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1 }
\exp\{ -nD(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY})\} \\
&~~~ \sum_{ \bPP{\hat{X}|\overline{X}\,\overline{Y}} \atop H(\hat{X} \triangle \overline{Y}) \le H(\overline{X} \triangle \overline{Y}) }
\frac{N_{\mathbf{h}}(\overline{X}\,\hat{X}\,\overline{Y})}{|\cT_{\overline{X}\,\overline{Y}}^n|} \\
&\le \sum_{ \bPP{\overline{X}\,\overline{Y} } \in \cP_n(\cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1}
\exp\{ -nD(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY})\}
\sum_{ \bPP{\hat{X}|\overline{X}\,\overline{Y}} \atop H(\hat{X} \triangle \overline{Y}) \le H(\overline{X} \triangle \overline{Y}) } \\
&~~~
\exp\{ - n |R_{\phi(\bPP{\hat{X}\overline{Y}})} - H(\hat{X}|\overline{X}\,\overline{Y}) - \delta_n|^+ \} \\
&\le \sum_{\bPP{\overline{X}\,\overline{Y} } \in \cP_n( \cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1}
\exp\{ -nD(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY})\}
\sum_{\bPP{\hat{X}|\overline{X}\,\overline{Y}} \atop H(\hat{X} \triangle \overline{Y}) \le H(\overline{X} \triangle \overline{Y}) } \\
&~~~
\exp\{ - n |R - H(\overline{Y}|\hat{X}) - H(\hat{X}|\overline{X}\,\overline{Y}) - \Delta - \delta_n|^+ \} \\
&\le \sum_{\bPP{\overline{X}\,\overline{Y} } \in \cP_n(\cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1}
\exp\{ -nD(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY})\}
\sum_{\bPP{\hat{X}|\overline{X}\,\overline{Y}} \atop H(\hat{X} \triangle \overline{Y}) \le H(\overline{X} \triangle \overline{Y}) } \\
&~~~ \exp\{ - n |R - H(\overline{Y}|\hat{X}) - H(\hat{X}|\overline{Y}) - \Delta - \delta_n|^+ \} \\
&= \sum_{\bPP{\overline{X}\,\overline{Y} } \in \cP_n(\cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1}
\exp\{ -nD(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY})\}
\sum_{\bPP{\hat{X}|\overline{X}\,\overline{Y}} \atop H(\hat{X} \triangle \overline{Y}) \le H(\overline{X} \triangle \overline{Y}) } \\
&~~~ \exp\{ - n |R - H(\hat{X} \triangle \overline{Y}) - \Delta - \delta_n|^+ \} \\
&\le \sum_{\bPP{\overline{X}\,\overline{Y} } \in \cP_n(\cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1}
\exp\{ -nD(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY})\}
\sum_{\bPP{\hat{X}|\overline{X}\,\overline{Y}} \atop H(\hat{X} \triangle \overline{Y}) \le H(\overline{X} \triangle \overline{Y}) } \\
&~~~ \exp\{ - n |R - H(\overline{X} \triangle \overline{Y}) - \Delta - \delta_n|^+ \} \\
&\le \sum_{ \bPP{\overline{X}\,\overline{Y} } \in \cP_n(\cX \times \cY) \atop \phi(\bPP{\overline{X}\,\overline{Y}}) \ge 1}
\exp\{ -nD(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY})\}
(n+1)^{|{\cal X}|^2 |{\cal Y}|} \\
&~~~ \exp\{ - n |R - H(\overline{X} \triangle \overline{Y}) - \Delta - \delta_n|^+ \}.\end{aligned}$$ Thus, by combining this with , the total error probability is upper bounded by $$\begin{aligned}
&(n+1)^{|\cX|^2|\cY|+ |\cX||\cY|}
\exp\{ - n \min_{\bPP{\overline{X}\,\overline{Y}}} [ D(\bPP{\overline{X}\,\overline{Y} } \| \bPP{XY}) \\
&~~~~~~~~~~~~~~~~~~~~~~~~~+ |R - H(\overline{X} \triangle \overline{Y}) - \Delta - \delta_n|^+ ] \}.\end{aligned}$$ Since $\Delta$ can be taken arbitrarily small and the number of bits needed to send ACK-NACK is at most $r$,[^14] Protocol \[p:interactive-dx-exponent\] attains the exponent given in Result \[result:exponent\].
An Example Such That Interaction Improves Error Exponent
--------------------------------------------------------
Consider the following source: $\cX$ and $\cY$ are both binary, and $\bPP{XY}$ is given by $$\begin{aligned}
\bP{XY}{0,0} = \bP{XY}{1,0} = \bP{XY}{1,1} = \frac{1}{3},\end{aligned}$$ that is, $X$ and $Y$ are connected by a $Z$-channel. To evaluate $E_{\mathtt{sp}}(R )$, without loss of generality, we can assume that $\bQ{\overline{X}\,\overline{Y}}{0,1} = 0$ (since otherwise $D(\bQQ{\overline{X}\,\overline{Y}} \| \bPP{XY}) = \infty$). Let us consider the following parametrization: $$\begin{aligned}
\bQ{\overline{X}\,\overline{Y}}{0,0} =
u,~\bQ{\overline{X}\,\overline{Y}}{1,0} = 1 - u -
v,~\bQ{\overline{X}\,\overline{Y}}{1,1} = v,\end{aligned}$$ where $0 \le u,v \le 1$. Then, we have $$\begin{aligned}
\label{eq:divergence-u-v-form}
D(\bQQ{\overline{X}\,\overline{Y}} \| \bPP{XY}) = \log 3 -
H(\{u,1-u-v,v \})\end{aligned}$$ and $$\begin{aligned}
\lefteqn{ H(\overline{X}|\overline{Y}) + H(\overline{Y}|\overline{X}) } \\
&=
\kappa(u,v) \\ &:= (1-v) h\left(\frac{u}{1-v}\right) + (1-u)
h\left(\frac{v}{1-u}\right).\end{aligned}$$ When the rate $R$ is sufficiently close to $H(X\triangle Y) =
\kappa(1/3,1/3) = 4/3$, the set $\cQ(R )$ is not empty.[^15] Since the divergence and $\kappa(u,v)$ are both symmetric with respect to $u$ and $v$, and since the divergence and $\cQ(R )$ are a convex function and a convex set, respectively, the optimal solution $(u^*,v^*)$ in the infimum of $E_{\mathtt{sp}}(R )$ satisfies $u^*= v^*$. Furthermore, since $R > \kappa(1/3,1/3)$, we also have $u^* = v^* \neq 1/3$.
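These claims can be checked numerically. The sketch below (illustrative only; the rate $R=1.36$ and the grid resolution are arbitrary choices, not taken from the text) evaluates $\kappa$ and the divergence in bits, and locates the constrained minimizer by brute force:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p <= 0 or p >= 1 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def kappa(u, v):
    """H(Xbar|Ybar) + H(Ybar|Xbar) under the parametrization (u, 1-u-v, v)."""
    return (1 - v) * h(u / (1 - v)) + (1 - u) * h(v / (1 - u))

def D(u, v):
    """D(Q || P_XY) = log 3 - H({u, 1-u-v, v}), in bits."""
    return math.log2(3) + sum(p * math.log2(p) for p in (u, 1 - u - v, v) if p > 0)

# H(X|Y) + H(Y|X) at the source distribution itself:
print(kappa(1 / 3, 1 / 3))  # ~ 1.3333 (= 4/3)

# Brute-force minimizer of D over Q(R) = {kappa >= R}, for R slightly above 4/3:
R = 1.36
best = min(((D(u / 400, v / 400), u / 400, v / 400)
            for u in range(1, 399) for v in range(1, 400 - u)
            if kappa(u / 400, v / 400) >= R),
           key=lambda t: t[0])
print(best)  # the optimum is (essentially) symmetric, u* = v*, with u* != 1/3
```

The grid search confirms both properties of the optimizer stated above: the symmetry $u^* = v^*$ and its displacement away from $1/3$.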
Note that for $R$ sufficiently close to $H(X\triangle Y)$, $E_{\mathtt{sp}}(R )$ can be shown to equal $E_{\mathtt{r}}(R )$. Thus, to show that simple communication is strictly suboptimal in error exponent, it suffices to show that $E_{\mathtt{sp}}(R ) >
E_{\mathtt{sp}}^{\mathrm{s}}(R )$, where the latter quantity $E_{\mathtt{sp}}^{\mathrm{s}}(R )$ corresponds to the sphere packing bound for error exponent using simple communication and is given by $$\begin{aligned}
E^{\mathrm{s}}_{\mathtt{sp}}(R ) := \max_{\substack{(R_1,R_2):
\\ R_1+R_2 \le R}} \inf_{\bQQ{\overline{X}\,\overline{Y}} \in
\cQ^{\mathrm{s}}(R_1,R_2)} D(\bQQ{\overline{X}\,\overline{Y}} \|
\bPP{XY})\end{aligned}$$ and $$\begin{aligned}
\cQ^{\mathrm{s}}(R_1,R_2) := \left\{ \bQQ{\overline{X}\,\overline{Y}}
: R_1 < H(\overline{X}|\overline{Y}) \mbox{ or } R_2 <
H(\overline{Y}|\overline{X}) \right\}.\end{aligned}$$ Since the source is symmetric with respect to $X$ and $Y$, for evaluating $E_{\mathtt{sp}}^{\mathrm{s}}(R )$ we can assume without loss of generality that $R_1 \ge R_2$. Let $u^\dagger := u^*$ and $v^\dagger := \frac{1-u^\dagger}{2}$ so that $\frac{v^\dagger}{1-u^\dagger} = \frac{1}{2}$. Let $\mathrm{Q}^{\dagger}_{\overline{X}\,\overline{Y}}$ be the distribution that corresponds to $(u^\dagger,v^\dagger)$. Note that $\mathrm{Q}^{\dagger}_{\overline{X}\,\overline{Y}}$ satisfies $$\begin{aligned}
H(\overline{Y}|\overline{X}) &= (1-u^\dagger) h\left(
\frac{v^\dagger}{1-u^\dagger} \right) \\ &> (1-u^*) h\left(
\frac{v^*}{1-u^*} \right) \\ &\ge \frac{R}{2} \\ &\ge R_2,\end{aligned}$$ and so, $\mathrm{Q}^{\dagger}_{\overline{X}\,\overline{Y}} \in
\cQ^{\mathrm{s}}(R_1,R_2)$. For this choice of $\mathrm{Q}^{\dagger}_{\overline{X}\,\overline{Y}}$, we have $$\begin{aligned}
D(\mathrm{Q}^{*}_{\overline{X}\,\overline{Y}} \| \bPP{XY}) &= \log 3 -
H(\{ u^*,1-u^* - v^*, v^*\}) \\ &= \log 3 - h(1-u^*) - (1-u^*) h\left(
\frac{v^*}{1-u^*}\right) \\ &> \log 3 - h(1-u^\dagger) - (1-u^\dagger)
h\left( \frac{v^\dagger}{1-u^\dagger} \right) \\ &=
D(\mathrm{Q}^{\dagger}_{\overline{X}\,\overline{Y}} \| \bPP{XY}),\end{aligned}$$ which implies $E_{\mathtt{sp}}(R ) > E_{\mathtt{sp}}^{\mathrm{s}}(R
)$.
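The chain of inequalities can also be verified numerically. In this self-contained sketch (again with the arbitrary illustrative choice $R = 1.36$; not from the text), the symmetric optimum $u^* = v^*$ is located on the boundary $\kappa(z,z) = R$, the auxiliary point $(u^\dagger, v^\dagger)$ is constructed as above, and the strict divergence gap is confirmed:

```python
import math

def h(p):
    # binary entropy in bits
    return 0.0 if p <= 0 or p >= 1 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def kappa(u, v):
    # H(Xbar|Ybar) + H(Ybar|Xbar)
    return (1 - v) * h(u / (1 - v)) + (1 - u) * h(v / (1 - u))

def D(u, v):
    # D(Q || P_XY) in bits, with P_XY uniform on {(0,0), (1,0), (1,1)}
    return math.log2(3) + sum(p * math.log2(p) for p in (u, 1 - u - v, v) if p > 0)

R = 1.36  # slightly above H(X|Y) + H(Y|X) = 4/3 at the source

# the symmetric optimum u* = v* sits on the boundary kappa(z, z) = R, z < 1/3;
# since D(z, z) decreases toward z = 1/3, it is the largest feasible z
z_star = max(z / 4000 for z in range(1, 2000) if kappa(z / 4000, z / 4000) >= R)

u_dag, v_dag = z_star, (1 - z_star) / 2
# Q-dagger is admissible for simple communication: H(Ybar|Xbar) = 1 - u_dag > R/2 >= R_2
print(1 - u_dag, R / 2)
# ...and achieves a strictly smaller divergence, hence E_sp(R) > E_sp^s(R):
print(D(z_star, z_star), D(u_dag, v_dag))
```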
[^1]: H. Tyagi is with the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore 560012, India. Email: htyagi@ece.iisc.ernet.in
[^2]: P. Viswanath is with the Department of Electrical and Computer Engineering, University of Illinois, Urbana-Champaign, IL 61801, USA. Email: pramodv@illinois.edu
[^3]: S. Watanabe is with the Department of Computer and Information Sciences, Tokyo University of Agriculture and Technology, Tokyo 184-8588, Japan. Email: shunwata@cc.tuat.ac.jp
[^4]: They also illustrated the advantage of using randomized protocols when error is allowed.
[^5]: In a different context, recently [@AltWag14] showed that the second-order asymptotic term in the size of good channel codes can be improved using feedback.
[^6]: By derandomizing , it is easy to see that local and shared randomness does not help, and deterministic protocols attain $L_\ep(X,Y)$.
[^7]: Spectrum of a distribution $\bPP{X}$ refers, loosely, to the distribution of the random variable $-\log \bP{X}{X}$.
[^8]: Following the pioneering work of Strassen [@Str62], study of these second-order terms in coding theorems has been revived recently by Hayashi [@Hayashi08; @Hay09] and Polyanskiy, Poor, and Verdú [@PolPooVer10].
[^9]: Alternatively, we can use the (noninteractive) Slepian-Wolf coding by setting the size of hash as $l_{\max} - (h(X|Y) + \Delta+N+\eta)$.
[^10]: The distributions $\bPP{X_nY_n}$ need not satisfy the consistency conditions.
[^11]: The achievability part can be derived by a slight modification of the arguments in [@MiyKan95],[@Han03 Lemma 7.2.1].
[^12]: For instance, when $X$ is a uniform random variable on $\{0,1\}$ and $Y$ is connected to $X$ via a binary symmetric channel, the covariance matrix $\mathbf{V}$ is singular and interaction does not help.
[^13]: Since Party 2 has to send the joint type $\bPP{\overline{X}\,\overline{Y}}$ to Party 1, additional $|\cX||\cY|\log(n+1)$ bits are needed.
[^14]: Our type-based protocol uses only constant number of rounds of interaction (independent of $n$).
[^15]: In fact, we can check that $\frac{d\kappa(z,z)}{dz}\Big|_{z=1/3} = -2$, and thus the function $\kappa(z,z)$ attains its maximum away from $z = 1/3$.
---
author:
- |
M.M. Giannini, E. Santopinto and A. Vassallo\
[Dipartimento di Fisica dell’Università di Genova and I.N.F.N., Sezione di Genova]{}
title: AN OVERVIEW OF THE HYPERCENTRAL CONSTITUENT QUARK MODEL
---
Introduction
============
In recent years much attention has been devoted to the description of the internal nucleon structure in terms of quark degrees of freedom. Besides the now classical Isgur-Karl model [@is], the Constituent Quark Model has been proposed in quite different approaches: the algebraic one [@bil], the hypercentral formulation [@pl] and the chiral model [@olof; @ple]. In the following the hypercentral Constituent Quark Model (hCQM), which has been used for a systematic calculation of various baryon properties, will be briefly reviewed.
The hypercentral model
======================
The experimental $4$- and $3$-star non-strange resonances can be arranged in $SU(6)$ multiplets (see Fig. 1). This means that the quark dynamics has a dominant $SU(6)$-invariant part, which accounts for the average multiplet energies. In the hCQM it is assumed to be [@pl] $$\label{eq:pot}
V(x)= -\frac{\tau}{x}~+~\alpha x,$$ where $x$ is the hyperradius $$x=\sqrt{{\vec{\rho}}^2+{\vec{\lambda}}^2} ~~,$$ where $\vec{\rho}$ and $\vec{\lambda}$ are the Jacobi coordinates describing the internal quark motion. The dependence of the potential on the hyperangle $\xi=\arctan(\frac{{\rho}}{{\lambda}})$ has been neglected.
![The experimental spectrum of the 4- and 3-star non-strange resonances. On the left the standard assignments to $SU(6)$ multiplets are reported, together with the total orbital angular momentum and the parity.](spettro_SU6.eps){width="14cm"}
Interactions of the linear-plus-Coulomb type have long been used for the meson sector, e.g. the Cornell potential. This form has been supported by recent Lattice QCD calculations [@bali]. In the case of baryons a so-called hypercentral approximation has been introduced [@has; @rich], which amounts to averaging any two-body potential for the three-quark system over the hyperangle $\xi$ and works quite well, especially for the lower part of the spectrum [@hca]. In this respect, the hypercentral potential Eq.\[eq:pot\] can be considered as the hypercentral approximation of the Lattice QCD potential. On the other hand, the hyperradius $x$ is a collective coordinate and therefore the hypercentral potential also contains three-body effects.
The hypercoulomb term $1/x$ has important features [@pl; @sig]: the corresponding eigenvalue problem can be solved analytically, and the resulting form factors have a power-law behaviour, at variance with the widely used harmonic oscillator; moreover, the negative parity states are exactly degenerate with the first positive parity excitation, providing a good starting point for the description of the spectrum.
The splittings within the multiplets are produced by a perturbative term breaking $SU(6)$, which as a first approximation can be assumed to be the standard hyperfine interaction $H_{hyp}$ [@is]. The three quark hamiltonian for the hCQM is then: $$\label{eq:ham}
H = \frac{p_{\lambda}^2}{2m}+\frac{p_{\rho}^2}{2m}-\frac{\tau}{x}~
+~\alpha x+H_{hyp},$$ where $m$ is the quark mass (taken equal to $1/3$ of the nucleon mass). The strength of the hyperfine interaction is determined in order to reproduce the $\Delta-N$ mass difference, the remaining two free parameters are fitted to the spectrum, reported in Fig. 2, leading to the following values: $$\label{eq:par}
\alpha= 1.16~fm^{-2},~~~~\tau=4.59~.$$
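For illustration, the resulting two-parameter spectrum can be sketched numerically. The code below is not the authors' code: it assumes the standard six-dimensional hyperspherical reduction, in which the reduced hyperradial function $u(x)=x^{5/2}\psi(x)$ obeys a one-dimensional equation with effective centrifugal term $[\gamma(\gamma+4)+15/4]/(2mx^2)$, $\gamma$ being the grand-angular quantum number, in units $\hbar=c=1$ with lengths in fm.

```python
import numpy as np

hbarc = 197.327            # MeV fm, to convert eigenvalues to MeV
m = 938.9 / 3.0 / hbarc    # quark mass m_N / 3, in fm^-1
tau, alpha = 4.59, 1.16    # fitted hCQM parameters (tau dimensionless, alpha in fm^-2)

def hyperradial_levels(gamma, n_levels=2, xmax=15.0, npts=1500):
    """Lowest eigenvalues (MeV) of -u''/(2m) + W(x) u = E u by finite differences."""
    x = np.linspace(xmax / npts, xmax, npts)
    dx = x[1] - x[0]
    centrifugal = (gamma * (gamma + 4) + 15.0 / 4.0) / (2 * m * x**2)
    W = centrifugal - tau / x + alpha * x
    off = -np.ones(npts - 1) / (2 * m * dx**2)
    H = np.diag(1.0 / (m * dx**2) + W) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels] * hbarc

E_pos = hyperradial_levels(0)  # gamma = 0: N, Delta band and its radial excitation
E_neg = hyperradial_levels(1)  # gamma = 1: lowest negative-parity band
print(E_pos, E_neg)
```

With the pure hypercoulomb potential ($\alpha = 0$) the lowest $\gamma=1$ level would be degenerate with the first $\gamma=0$ radial excitation, as noted above; the linear term lifts this degeneracy.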
![The spectrum obtained with the hypercentral model Eq. (3) and the parameters Eq. (4) (full lines), compared with the experimental data of PDG [@pdg] (grey boxes).](spettro_a_t.epsi){width="12cm"}
Keeping these parameters fixed, the model has been applied to calculate various physical quantities of interest: the photocouplings [@aie], the electromagnetic transition amplitudes [@aie2], the elastic nucleon form factors [@mds] and the ratio between the electric and magnetic proton form factors [@rap]. Some results of such parameter free calculations are presented in the next section.
The results
===========
The electromagnetic transition amplitudes, $A_{1/2}$ and $A_{3/2}$, are defined as the matrix elements of the transverse electromagnetic interaction, $H_{e.m.}^t$, between the nucleon, $N$, and the resonance, $B$, states: $$\label{eq:amp}
\begin{array}{rcl}
A_{1/2}&=& \langle B, J', J'_{z}=\frac{1}{2}\ | H_{em}^t| N, J~=~
\frac{1}{2}, J_{z}= -\frac{1}{2}\
\rangle\\
& & \\
A_{3/2}&=& \langle B, J', J'_{z}=\frac{3}{2}\ | H_{em}^t| N, J~=~
\frac{1}{2}, J_{z}= \frac{1}{2}\
\rangle\\\end{array}$$ The transition operator is assumed to be $$\label{eq:htm}
H^t_{em}~=~-~\sum_{j=1}^3~\left[\frac{e_j}{2m_j}~(\vec{p_j} \cdot
\vec{A_j}~+
~\vec{A_j} \cdot \vec{p_j})~+~2 \mu_j~\vec{s_j} \cdot (\vec{\nabla}
\times \vec{A_j})\right]~~,$$ where spin-orbit and higher order corrections are neglected [@cko; @ki]. In Eq. (\[eq:htm\]) $~~m_j$, $e_j$, $\vec{s_j}$ , $\vec{p_j}$ and $\mu_j~=~\frac{ge_j}{2m_j}$ denote the mass, the electric charge, the spin, the momentum and the magnetic moment of the j-th quark, respectively, and $\vec{A_j}~=~\vec{A_j}(\vec{r_j})$ is the photon field; quarks are assumed to be pointlike.
The proton photocouplings of the hCQM [@aie] (Eq. (\[eq:amp\]) evaluated at the photon point) show the same overall behaviour as other calculations [@bil; @ki; @cap], since all share the same SU(6) structure, but in many cases they all exhibit a lack of strength.
![The helicity amplitudes for the $D_{13}(1520)$ resonance, calculated with the hCQM of Eqs. (3) and (4) and the electromagnetic transition operator of Eq. (6) (full curve, [@aie2]). The dashed curve is obtained with the analytical version of the hCQM ([@sig]), where the behaviour of the quark wave function is determined mainly by the hypercoulomb potential. The data are from the compilation of ref. [@burk]](helamp_d13.ps){width="10cm"}
![The helicity amplitude for the $S_{11}(1535)$ resonance, calculated with the hCQM of Eqs. (3) and (4) and the electromagnetic transition operator of Eq. (6) (dashed curve, [@aie2]) and the model of ref.[@cap]. The data are taken from the compilation of ref. [@burk2]](s11.eps){width="14cm"}
Taking into account the $Q^2-$behaviour of the transition matrix elements of Eq. (\[eq:amp\]), one can calculate the hCQM helicity amplitudes in the Breit frame [@aie2]. The hCQM results for the $D_{13}(1520)$ and the $S_{11}(1535)$ resonances [@aie2] are given in Figs. 3 and 4, respectively. The agreement in the case of the $S_{11}$ is remarkable, all the more so since the hCQM curve was published three years before the recent TJNAF data [@dytman].
In general the $Q^2$-behaviour is reproduced, except for discrepancies at small $Q^2$, especially in the $A^{p}_{3/2}$ amplitude of the transition to the $D_{13}(1520)$ state. These discrepancies, like the ones observed in the photocouplings, can be ascribed either to the non-relativistic character of the model or to the lack of explicit quark-antiquark configurations, which may be important at low $Q^{2}$. The kinematical relativistic corrections at the level of boosting the nucleon and the resonance states to a common frame are not responsible for these discrepancies, as we have demonstrated in Ref. [@mds2]. Similar results are obtained for the other negative parity resonances [@aie2]. It should be mentioned that the r.m.s. radius of the proton corresponding to the parameters of Eq. (\[eq:par\]) is $0.48~fm$, which is just the value obtained in [@cko] in order to reproduce the $D_{13}$ photocoupling. Therefore the missing strength at low $Q^2$ can be ascribed to the lack of quark-antiquark effects, probably more important in the outer region of the nucleon.
The isospin dependence
======================
The well known Guersey-Radicati mass formula [@gura] contains a flavour dependent term, which is essential for the description of the strange baryon spectrum. In the chiral Constituent Quark Model [@olof; @ple], the non confining part of the potential is provided by the interaction with the Goldstone bosons, giving rise to a spin- and flavour-dependent part, which is crucial in this approach for the description of the lower part of the spectrum. More generally, one can expect that the quark-antiquark pair production can lead to an effective residual quark interaction containing an isospin (flavour) dependent term.
![The spectrum obtained with the hypercentral model containing isospin dependent terms Eq. (7) [@iso] (full lines), compared with the experimental data of PDG [@pdg] (grey boxes).](spettro_iso.epsi){width="12cm"}
Therefore, we have introduced isospin dependent terms in the hCQM hamiltonian. The complete interaction used is given by [@iso] $$\label{tot}
H_{int}~=~V(x) +H_{\mathrm{S}} +H_{\mathrm{I}} +H_{\mathrm{SI}}~,$$ where $V(x)$ is the linear plus hypercoulomb SU(6)-invariant potential of Eq. \[eq:pot\], while the remaining terms are the residual SU(6)-breaking interaction, responsible for the splittings within the multiplets. ${H}_{\mathrm{S}}$ is a smeared standard hyperfine term, ${H}_{\mathrm{I}}$ is isospin dependent and ${H}_{\mathrm{SI}}$ spin-isospin dependent. The resulting spectrum for the 3\*- and 4\*- resonances is shown in Fig. 5 [@iso]. The contribution of the hyperfine interaction to the $N-\Delta$ mass difference is only about $35\%$, while the remaining splitting comes from the spin-isospin term, $(50\%)$, and from the isospin one, $(15\%)$. It should be noted that the position of the Roper and the negative parity states is well reproduced.
Relativity
==========
The relativistic effects that one can introduce starting from a non-relativistic quark model are: a) the relativistic kinetic energy; b) the boosts from the rest frames of the initial and final baryon to a common (say the Breit) frame; c) a relativistic quark current. Even taken together, these features are not equivalent to a fully relativistic dynamics, which is still beyond the present capabilities of the various models.
The potential of Eq.\[eq:pot\] has been refitted using the correct relativistic kinetic energy $$\label{eq:hrel}
H_{rel} = \sum_{i} \sqrt{p_{i}^2+m^2}-\frac{\tau}{x}~
+~\alpha x+H_{hyp}.$$ The resulting spectrum is not much different from the non relativistic one and the parameters of the potential are only slightly modified.
![The electric proton form factor, calculated with the relativistic hCQM of Eq. (8) and a relativistic quark current [@mds3].](gep.eps){width="12cm"}
![The magnetic proton form factor, calculated with the relativistic hCQM of Eq. (8) and a relativistic quark current [@mds3].](gmp.eps){width="12cm"}
![The ratio between the electric and magnetic proton form factors, calculated with the relativistic hCQM of eq. (8) and a relativistic current [@mds3], compared with the TJNAF data [@ped; @gay].](rap.eps){width="12cm"}
The boosts and a relativistic quark current expanded up to lowest order in the quark momenta have been used both for the elastic form factors of the nucleon [@mds] and the helicity amplitudes [@mds2]. In the latter case, as already mentioned, the relativistic effects are quite small and do not alter the agreement with data discussed previously. For the elastic form factors, the relativistic effects are quite strong and bring the theoretical curves much closer to the data; in particular, they are responsible for the decrease of the ratio between the electric and magnetic proton form factors, as shown for the first time in Ref. [@rap], in qualitative agreement with the recent JLab data [@ped].
A relativistic quark current, with no expansion in the quark momenta, and the boosts to the Breit frame have been applied to the calculation of the elastic form factors in the relativistic version of the hCQM, Eq. (\[eq:hrel\]) [@mds3]. The resulting theoretical form factors of the proton, calculated, it should be stressed, without free parameters and assuming pointlike quarks, are good (see Figs. 6 and 7), with some discrepancies at low $Q^2$, which, as discussed previously, can be attributed to the lack of quark-antiquark pair effects. The corresponding ratio between the electric and magnetic proton form factors is given in Fig. 8: the deviation from unity reaches almost the $50\%$ level, not far from the new TJNAF data [@gay].
Conclusions
===========
The hCQM is a generalization to the baryon sector of the widely used quark-antiquark potential containing a Coulomb plus a linear confining term. The three free parameters have been adjusted to fit the spectrum [@pl] and then the model has been used for a systematic calculation of various physical quantities: the photocouplings [@aie], the helicity amplitudes for the electromagnetic excitation of negative parity baryon resonances [@aie2; @mds2], the elastic form factors of the nucleon [@mds; @mds3] and the ratio between the electric and magnetic proton form factors [@rap; @mds3]. The agreement with data is quite good, especially for the helicity amplitudes, whose medium-high $Q^2$ behaviour is well reproduced, leaving some discrepancies at low (or zero) $Q^2$, where the missing quark-antiquark contributions are expected to play a role. It should be noted that the hypercoulomb term in the potential is mainly responsible for this agreement [@sig], while for the spectrum a further fundamental ingredient is provided by the isospin dependent interactions [@iso].
[200]{}
N. Isgur and G. Karl, Phys. Rev. [**D18**]{}, 4187 (1978); [**D19**]{}, 2653 (1979); [**D20**]{}, 1191 (1979); S. Godfrey and N. Isgur, Phys. Rev. [**D32**]{}, 189 (1985); S. Capstick and N. Isgur, Phys. Rev. [**D 34**]{},2809 (1986)
R. Bijker, F. Iachello and A. Leviatan, Ann. Phys. (N.Y.) [**236**]{}, 69 ( 1994)
M. Ferraris, M.M. Giannini, M. Pizzo, E. Santopinto and L. Tiator, Phys. Lett. [**B364**]{}, 231 (1995).
L. Ya. Glozman and D.O. Riska, Phys. Rep. [**C268**]{}, 263 (1996).
L. Ya. Glozman, Z. Papp, W. Plessas, K. Varga and R. F. Wagenbrunn, Phys. Rev. [**C57**]{}, 3406 (1998); L. Ya. Glozman, W. Plessas, K. Varga and R. F. Wagenbrunn, Phys. Rev. [**D58**]{}, 094030 (1998).
G. Bali et al., Phys. Rev. [**D51**]{}, 5165 (1995).
P. Hasenfratz, R.R. Horgan, J. Kuti and J.M. Richard, Phys. Lett. [**B94**]{}, 401 (1980)
J.-M. Richard, Phys. Rep. [**C 212**]{}, 1 (1992)
M. Fabre de la Ripelle and J. Navarro, Ann. Phys. (N.Y.) [**123**]{}, 185 (1979).
E. Santopinto, F. Iachello and M.M. Giannini, Nucl. Phys. [**A623**]{}, 100c (1997); Eur. Phys. J. [**A1**]{}, 307 (1998)
Particle Data Group, Eur. Phys. J. [**C15**]{}, 1 (2000).
M. Aiello, M. Ferraris, M.M. Giannini, M. Pizzo and E. Santopinto, Phys. Lett. [**B387**]{}, 215 (1996).
M. Aiello, M. M. Giannini and E. Santopinto, J. Phys. G: Nucl. Part. Phys. [**24**]{}, 753 (1998)
M. De Sanctis, E. Santopinto and M.M. Giannini, Eur. Phys. J. [**A1**]{}, 187 (1998).
M. De Sanctis, M.M. Giannini, L. Repetto and E. Santopinto, Phys. Rev. [**C62**]{}, 025208 (2000).
L. A. Copley, G. Karl and E. Obryk, Phys. Lett. [**29**]{}, 117 (1969).
R. Koniuk and N. Isgur, Phys. Rev. [**D21**]{}, 1868 (1980).
F. E. Close and Z. Li, Phys. Rev. [**D42**]{}, 2194 (1990); Z. Li and F.E. Close, Phys. Rev. [**D42**]{}, 2207 (1990).
V. D. Burkert, private communication
R.A. Thompson et al., Phys. Rev. Lett. [**86**]{}, 1702 (2001).
S. Capstick and B.D. Keister, Phys. Rev.[**D 51**]{}, 3598 (1995)
V. D. Burkert,arXiv:hep-ph/0207149.
M. De Sanctis, E. Santopinto and M.M. Giannini, Eur. Phys. J. [**A2**]{}, 403 (1998).
F. Guersey and L.A. Radicati, Phys. Rev. Lett. [**13**]{}, 173 (1964); M. Gell-Mann, Phys. Rev. [**125**]{}, 1067 (1962); S. Okubo, Prog. Theor. Phys. [**27**]{}, 949 (1962)
M.M. Giannini, E. Santopinto and A, Vassallo, Eur. Phys. J. [**A12**]{}, 447 (2001).
M.K. Jones et al., Phys. Rev. Lett. [**84**]{}, 1398 (2000).
M. De Sanctis, M.M. Giannini, E. Santopinto and A. Vassallo, to be published.
O. Gayou et al., Phys. Rev. Lett. [**88**]{}, 092301 (2002).
---
abstract: 'We investigate the possibility to form high fidelity atomic Fock states by gradual reduction of a quasi one dimensional trap containing spin polarized fermions or strongly interacting bosons in the Tonks-Girardeau regime. Making the trap shallower and simultaneously squeezing it can lead to the preparation of an ideal atomic Fock state as one approaches either the sudden or the adiabatic limits. Nonetheless, the fidelity of the resulting state is shown to exhibit a non-monotonic behaviour with the time scale in which the trapping potential is changed.'
author:
- 'D. Sokolovski'
- 'M. Pons'
- 'A. del Campo'
- 'J. G. Muga'
title: 'Atomic Fock states by gradual trap reduction: from sudden to adiabatic limits'
---
Introduction
=============
Preparation of atomic states containing exactly a fixed number $M$ of atoms (Fock states) is of importance for a wide range of applications, from studying ultracold chemistry and few-body physics to precision measurements and quantum information processing. The aim of such a preparation is to create a quantum state with the mean number of atoms $\la n\ra$ equal to the chosen $M$ and its variance as small as possible. Different proposals based on a time-dependent modulation of the confining potential have been put forward both for optical traps [@DRN07; @MUG; @PONS; @WRN09; @RWZN09] and optical lattices [@Qdistill; @NP10].
Recent experiments have shown the possibility of achieving atom-number sub-Poissonian statistics in a quantum degenerate gas with repulsive interactions [@CSMHPR05]. The key idea is that the mean number of trapped atoms can be controlled by adiabatically reducing the depth of the trap while expelling the excess of atoms. The precision of the control improves by maximising the energy splitting between states with different number of particles, which ultimately allows one to discriminate such states by modulating the depth of the trapping potential. This technique is referred to as atom culling [@DRN07]. For ultracold gases confined in tight waveguides, it works optimally in the strongly interacting regime for bosonic samples [@DRN07; @MUG; @PONS; @WRN09], i.e., in the Tonks-Girardeau gas, where the repulsive interactions lead to an effective Pauli exclusion principle [@Girardeau60]. Alternatively, Fock state preparation is optimized as well with a spin-polarized non-interacting Fermi gas [@PONS; @RWZN09]. Bearing in mind that both systems (which are dual under Bose-Fermi mapping [@Girardeau60]) share all local correlation functions, and in particular the atom-number distribution [@Moll], we shall address the polarized fermions for brevity in the following. Nonetheless, all results in this paper apply to both systems.
Non-interacting spinless fermions, when placed in a trap, occupy the lowest one-particle levels in the ground state. Changing the shape of the trapping potential aims to expel the redundant atoms into continuum levels, leaving the bound states of the final trap filled to maximum capacity, $\la \n\ra = M$. This condition together with the Pauli exclusion principle ensures high fidelity of the preparation: since no more than $M$ atoms can be distributed over $M$ levels of the final well, the variance $\sigma^2=\la \n^2\ra -\la \n\ra^2$ must vanish, and precisely $M$ atoms will be found in each individual case. A slow change of the trapping potential would guarantee, by virtue of the adiabatic theorem, full occupation of the states in the final well. The adiabaticity may, however, require times large compared to the time scales typically involved in ultracold atom experiments, and one might wish for a faster way to achieve full final occupation. One of the counterintuitive results obtained in Refs. [@MUG; @PONS] is that an infinitely fast “sudden” change provides an alternative to the adiabatic route, which can lead to the preparation of ideal Fock states when making the well shallower (weakening of the trap) is accompanied by also making it narrower (squeezing). Further, at zero temperature, the best results are obtained [@MUG] when the projector on the subspace of all filled initial states $\l_0$ contains the projector $\l$ on the subspace spanned by all bound states of the final well ($\l\subseteq\l_0 $), i.e., when on that subspace $\l_0$ can be approximated by unity. In addition, non-zero temperature effects can be overcome by starting with a larger initial sample [@PONS]. However, the sudden limit may not be accessible in a given experimental set-up, and the main purpose of the present paper is to explore the behaviour of the trapped atoms when the change of the potential is neither sudden nor adiabatically slow.
In particular, we will show that the efficiency of the Fock state preparation exhibits a non-monotonic behaviour as a function of the duration of the quench of the trapping potential. We will also address other questions, such as the dependence of the final occupation on the depths of the initial and the final wells, and the escape time required for the expelled atoms to leave the well region. The rest of the paper is organised as follows. In Sect. II we study the dependence of the final occupation on the time in which the trapping potential changes its shape. In Sect. III we briefly discuss the behaviour of the variance when the final well is filled close to its maximal capacity. In Sect. IV we review two different mechanisms responsible for the formation of nearly ideal Fock states in the adiabatic and sudden limits. In Sect. V we discuss the escape of expelled particles from the final well region, and Sect. VI contains our conclusions.
Atom counting statistics following a change of the trapping potential
=====================================================================
Consider a number of non-interacting spinless fermions (fermionised bosons) initially trapped in a one-dimensional rectangular well of depth $V_i$ and width $L_i$. The potential $V(x,t)$ undergoes, over a time $T$, a transformation such that its final shape is also a rectangular well, but with new depth and width, $V_f$ and $L_f$, respectively. Although other types of evolution are possible, we will consider a linear change of both the depth and the width of the trap, $$\begin{aligned}
\label{a1}
V(t)=V_i+(V_f-V_i)t/T, \\
\nonumber
L(t)=L_i+(L_f-L_i)t/T,\end{aligned}$$ where the ramping time $T$ determines whether evolution of the potential is rapid or slow. Our aim is to control the mean number $\la n(T) \ra$ of the fermions trapped in the final trap, while minimizing the variance $$\begin{aligned}
\label{a2}
\sigma^2_N(T) \equiv \la n^2(T)\ra-\la n(T)\ra^2,\end{aligned}$$ for the case of zero temperature. As already mentioned in the introduction, non-zero temperature effects turn out to be non-critical and can be conveniently overcome by starting with a larger initial sample as described in [@PONS]. We will further assume that the initial and final traps support $K$ and $M \le K$ bound states, respectively, and that there are $N\le K$ fermions occupying the first $N$ levels of the initial well. A well known technique [@Lev; @Klich; @Moll] allows to express $\la n(T) \ra$ and $\sigma^2_N(T)$ for a system of non-interacting fermions in terms of the solutions of the corresponding one-particle Schrödinger equation. Let us denote by $\phi^i(x)$ ($\phi^f(x)$) the states of the initial (final) trap. Bound (scattering) states will be labeled by a Latin (Greek) letter as a subindex. In the dimensionless variables $x/L_i$ and $t/t_0$, where $$\begin{aligned}
\label{a3}
t_0\equiv \mu L_i^2 / \hbar ,\end{aligned}$$ and $\mu$ is the atomic mass, the time-dependent single-particle eigenstates obey $$\begin{aligned}
\label{a4}
i\partial_t \phi_n^i =-\frac{1}{2}\partial_x^2 \phi_n^i +W(x,t)\phi_n^i,\quad n=1,\dots,N,\end{aligned}$$ with $W(x,t)=V(x,t)t_0/\hbar$. For a trap size of $80$ $\mu$m the time $t_0$ for Rb and Cs atoms takes values of $8.8$ s and $13.4$ s, respectively. Following [@MUG; @PONS; @Lev; @Klich] we obtain $$\begin{aligned}
\label{1}
\la n(T)\ra =\sum_{j=1}^M \la \phi_j^f|\l_T|\phi_j^f\ra\end{aligned}$$ and $$\begin{aligned}
\label{2}
\sigma^2_N(T) = \la n(T)\ra
-\sum_{j=1}^M\la \phi_j^f| \l_T \l \l_T|\phi_j^f\ra.\end{aligned}$$ Here, $\l$ is the projector onto the subspace spanned by the one-particle bound states $|\phi^f_j\ra$, $j=1,2,\dots,M$, of the final well, $$\begin{aligned}
\label{4}
\l = \sum_{j=1}^M |\phi_j^f\ra \la \phi_j^f|.\end{aligned}$$ Similarly, $\l_T$ is the projector onto the subspace spanned by the orthogonal states obtained by the time evolution of the one particle bound states $|\phi^i_n\ra$, $n=1,2,\dots,N$, in the initial well, $$\begin{aligned}
\label{3}
\l_T = \sum_{n=1}^N |\phi^T_n\ra \la \phi^T_n|, \quad |\phi^T_n\ra \equiv \hat{U}(T) |\phi^i_n\ra,\end{aligned}$$ with $\hat{U}(T)$ denoting the evolution operator corresponding to Eq. (\[a4\]). Note that Eqs. (\[1\]) and (\[2\]) are generalisations of Eqs. (9) and (10) obtained in Ref. [@MUG] for the sudden limit, with the initial bound states replaced by time evolved states (\[3\]). We further notice that knowledge of time evolved states (\[3\]) allows one to obtain the full atom-number distribution $p(n)$ from the characteristic function $F(\theta)=\tr[\hat{\rho}e^{i\theta\hat{\Lambda}\hat{n}\hat{\Lambda}}]$, as a Fourier transform, $p(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-in\theta}F(\theta)d\theta$, with $n=1,\dots,M$ [@Klich; @Moll; @Lev]. Using the projector for the bound subspace in the final configuration, $F(\theta)={\rm det}{\bf A}$ with ${\bf A}=[1+(e^{i\theta}-1)\hat{\Lambda}\hat{\Lambda}_T]$. In the basis of single-particle eigenstates $|\phi_m^f\ra$, the elements of the matrix ${\bf A}$ read $A_{nm}=\delta_{nm}+[\exp(i\theta)-1]\la\phi_n^f|\hat{\Lambda}_T|\phi_m^f\ra$.
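In numerical practice, Eqs. (\[1\]), (\[2\]) and the characteristic function reduce to linear algebra on the overlaps between the final bound states and the time-evolved states. The following sketch (Python; the overlap matrix `S` is a hypothetical input computed elsewhere, e.g., by quadrature of $\la\phi_j^f|\phi_n^T\ra$, and is not code from this work) illustrates the bookkeeping:

```python
import numpy as np

def counting_statistics(S):
    """Atom-number statistics in the final trap from the overlap matrix
    S[j, n] = <phi_j^f | phi_n^T> between the M final bound states and
    the N time-evolved initial states.  G below is the matrix of
    <phi_j^f| Lambda_T |phi_k^f> on the final bound subspace, so that
    Eq. (1) is tr G, Eq. (2) is tr G - tr G^2, and p(n) follows from the
    characteristic function F(theta) = det[1 + (e^{i theta} - 1) G]."""
    M = S.shape[0]
    G = S @ S.conj().T
    n_mean = np.trace(G).real                       # Eq. (1)
    var = n_mean - np.trace(G @ G).real             # Eq. (2)
    K_pts = M + 1                                   # p(n) is supported on 0..M
    thetas = 2 * np.pi * np.arange(K_pts) / K_pts
    F = np.array([np.linalg.det(np.eye(M) + (np.exp(1j * th) - 1) * G)
                  for th in thetas])
    p = (np.fft.fft(F) / K_pts).real                # p(0), ..., p(M)
    return n_mean, var, p
```

For perfect adiabatic following, `S` is an $M\times N$ identity block and the routine returns $\la n\ra=M$, $\sigma^2_N=0$ and $p(n)=\delta_{n,M}$, as it should.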
With the problem reduced to the numerical evaluation of the corresponding one-particle states, we employ the Crank-Nicolson method to solve Eq. (\[a4\]) with the initial conditions $\phi_n^i(x,t=0)$, $n=1,2,\dots,N$, and zero boundary conditions at the edges of the numerical box, $x=\pm L_b$. To avoid reflections from the boundaries we introduce an absorbing potential proposed by Manolopoulos [@Mano; @MPNE04] (see appendix A for details). Following Refs. [@MUG; @PONS], we will refer to the case where the trap is made shallower at constant width, $V_i>V_f$, $L_i=L_f$, as “weakening”, while reducing the trap’s width will be called “squeezing”.
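A minimal sketch of the propagation scheme (not the production code of this work: dense linear algebra stands in for a tridiagonal solver, and hard walls at the box edges replace the absorbing layer of appendix A) might look as follows, in the dimensionless units of Eq. (\[a4\]):

```python
import numpy as np

def crank_nicolson_ramp(psi0, x, n_steps, dt, W_of_xt):
    """Propagate one single-particle state through the time-dependent well
    W(x, t) with the unitary, unconditionally stable Crank-Nicolson scheme.
    W_of_xt(x, t) is any callable returning the potential on the grid."""
    dx = x[1] - x[0]
    n = len(x)
    # Kinetic operator -(1/2) d^2/dx^2 as a finite-difference matrix
    # (Dirichlet walls are implied by truncating the grid).
    K = (np.diag(np.full(n, 1.0 / dx**2))
         - 0.5 / dx**2 * (np.eye(n, k=1) + np.eye(n, k=-1)))
    psi = psi0.astype(complex).copy()
    for step in range(n_steps):
        t_mid = (step + 0.5) * dt                 # midpoint time, 2nd order
        H = K + np.diag(W_of_xt(x, t_mid))
        A = np.eye(n) + 0.5j * dt * H             # (1 + i dt H/2) psi_new =
        B = np.eye(n) - 0.5j * dt * H             # (1 - i dt H/2) psi_old
        psi = np.linalg.solve(A, B @ psi)
    return psi
```

Because the update is a Cayley transform of a Hermitian $H$, the norm of $\psi$ is conserved to machine precision even for a time-dependent ramp.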
From Eqs. (\[1\]) and (\[2\]) it is easy to deduce [@MUG; @PONS] that an ideal Fock state with $\la n(T)\ra=M$ and $\sigma^2_N(T)=0$ would be prepared provided that $\l\subseteq\l_T$, that is, the subspace spanned by the time-evolved states at the end of the quench should enclose that spanned by the Fock state to be prepared [@proof]. In the following we shall describe different physical implementations that fulfill this requirement.
Two ways to arrive at a good Fock state
=======================================
![Dependence of the final occupation on the ramping time (pure weakening): $\la n(T)\ra$ (solid) and $\la n(T)\ra -\sigma_N(T)$ (dashed) vs $\tau=T/t_0$. The potentials are $W_i=(19.5)^2\pi^2/2$, $W_f=(4.95)^2\pi^2/2$, corresponding to maximum capacities $K=20$ and $M=5$, respectively, and $N=K=20$, that is, all initial levels occupied.\
[]{data-label="FIG1"}](FIG1.pdf){width="8cm"}
In the [*adiabatic limit*]{}, where the change of the potential shape is slow, we can expect the first $M$ bound states of the initial well to follow into the $M$ bound states of the final well $$\begin{aligned}
\label{b1a}
|\phi^T_n\ra= \exp(i\Phi_n) |\phi_n^f\ra, \quad n=1,2,\dots,M,\end{aligned}$$ where $\Phi_n$ is a real phase. Thus, on the subspace of final bound states we have $$\begin{aligned}
\label{b2}
\l_T = \sum_{n=1}^M |\phi^T_n\ra\la \phi^T_n| = \sum_{n=1}^M |\phi^f_n\ra\la \phi^f_n| .\end{aligned}$$ Figure \[FIG1\] shows the final occupation $\la n(T)\ra$ and its variance as functions of the ramping time for the case of pure weakening of the trap. As expected, the final occupation increases and the fidelity improves as one approaches the adiabatic limit, where the variance of the resulting state vanishes. This result (a consequence of the adiabatic theorem) does not depend on whether weakening, squeezing or a combination of both techniques is applied.
It is worth noting that, to our knowledge, the question of exactly how slowly the potential must evolve to ensure adiabaticity in the case where the $M$-th level of the final well lies close to the edge of the continuum remains open and requires further investigation. As mentioned in the introduction, the applicability of the adiabatic method may also be limited by the finite lifetime of the trapped condensate.
In the [*sudden limit*]{}, where the shape of the well changes almost instantaneously, we have $\hat{U}(T)\approx 1$, $|\phi_{\nu}^T\ra= |\phi_{\nu}^i\ra$ and $|\phi_{n}^T\ra= |\phi_{n}^i\ra$, i.e., $\l_T=\l_0$. Let us recall that if $\l\subseteq\l_T$, the final well would contain a Fock state with precisely $M$ fermions. Remarkably, the use of either weakening or squeezing alone in the sudden limit leads to quasi-Fock states of limited fidelity [@MUG], while the combination of both techniques can lead to ideal Fock states [@MUG; @PONS]. Generally, sudden quenches of the trap might lead to undesirable excitations of the transverse modes, breaking down the effective one-dimensional character of the system.
Gradual modulation of the trap potential
========================================
We have seen that both adiabatic and sudden changes of the trapping potential can in principle lead to the preparation of ideal Fock states, but, being idealized limits, they may be of limited relevance to experimental implementations of atom culling techniques.
Motivated by this observation we next look at the mean particle number and variance of a state resulting from a combined process of squeezing and weakening of the trap in finite time, see Fig. \[FIG2\]. Here, $\la n(T)\ra$ stays close to the maximum capacity of the well for all ramping times, but exhibits a dip at $T/t_0 \approx 0.03-0.1$. Notice that the adiabatic preparation relies on the adiabatic following of the first $M$ states of the initial trap onto the $M$ states of the bound subspace of the final trap [@DRN07]. By contrast, the sudden preparation relies on the instantaneous resolution of the target state $|M\ra$ within the space spanned by the initial state [@MUG; @PONS]. The non-monotonic behaviour of $\la n(T)\ra$ reported in Fig. \[FIG2\] results from a simultaneous failure of these two different mechanisms that allow the perfect resolution of the desired Fock state in either of the $T\rightarrow 0,\infty$ limits.
![a). Same as Fig. \[FIG1\] but for weakening combined with squeezing, $L_f/L_i=0.6$. Also shown by a dot-dashed line is the l.h.s. of Eq. (\[11\]).\
b) Probabilities $p(4)$ (solid) and $p(3)\times 10^3$ (dashed) in Eqs. (\[11a\]) vs $\tau=T/t_0$. The inset shows the probability $p(4)$ and the ramping time interval for which $p(4)>0.5\%$.[]{data-label="FIG2"}](FIG2.pdf){width="8cm"}
We further note (see Fig. 2, dashed) that in the case of combined weakening and squeezing the mean number of atoms and its variance (both dimensionless, of course) add almost exactly to the maximum capacity of the final well, $M$, for all ramping times, and in the next section we discuss this approximate “sum rule” in some detail.
Sub-Poissonian statistics at nearly full final occupation
=========================================================
Let us start by introducing the deficiency operator $$\begin{aligned}
\label{7}
\hat{A}=\l-\l_T.\end{aligned}$$
Inserting (\[7\]) into (\[1\]) and (\[2\]) yields $$\begin{aligned}
\label{8}
\la n(T)\ra =M-\sum_{j=1}^M \la \phi_j^f|\a|\phi_j^f\ra\end{aligned}$$ and $$\begin{aligned}
\label{9}
\sigma^2_N(T) = \sum_{j=1}^M \la \phi_j^f|\a|\phi_j^f\ra -
\sum_{j,k=1}^M|\la \phi_j^f|\hat{A}|\phi_k^f\ra|^2.\end{aligned}$$ From Eqs. (\[8\]) and (\[9\]) follows a “sum rule” $$\begin{aligned}
\label{10}
\la n(T)\ra + \sigma^2_N(T) = M - \sum_{j,k=1}^M
|\la \phi_j^f|\hat{A}|\phi_k^f\ra|^2.\end{aligned}$$ Note that the last term is quadratic in the operator $\hat{A}=1_f-\l_T$ so that if $\a$ is “small” [@FOOT] the variance can be expressed in terms of $\la n(T)\ra$ through the approximate relation $$\begin{aligned}
\label{11}
\la n(T)\ra + \sigma^2_N(T) \approx M,\end{aligned}$$ as demonstrated in Fig. 2a. This is an example of sub-Poissonian behaviour resulting from the indistinguishability of the fermions involved. Indeed, for a Poissonian distribution one would expect $\sigma^2_N(T) = \la n(T)\ra$, whereas from Eq. (\[11\]) we have $\sigma^2_N(T) = M- \la n(T)\ra \ll \la n(T)\ra$.
Note also that in the case of almost full final occupation, knowing $\la n(T)\ra$ and $\sigma^2_N(T)$ allows one to reconstruct the full counting statistics. Thus, neglecting the probabilities to trap less than $M- 2$ fermions, $p(k)\approx 0$ for $k=0,1,\dots,M-3$, we can obtain $p(M-2)$, $p(M-1)$ and $p(M)$ from the normalisation and the first two moments of the distribution $p$, $$\begin{aligned}
\label{11a}
\sum_{k=M-2}^M p(k) k^m= \la n ^m \ra, \quad m=0,1,2.\end{aligned}$$ Results of such a reconstruction are shown in Fig. 2b. For the set of parameters corresponding to Fig. \[FIG2\]b the probability to trap three fermions, $p(3)$, is negligibly small, and one would trap only $4$ fermions in at most $1.2\%$ of all cases, for $\tau=T/t_0\approx 0.01$. To reduce this fraction to $0.5\%$ one can choose the ramping time either shorter than $\tau_1 \approx 0.005$ or longer than $\tau_2 \approx 0.095$, as shown in the inset of Fig. \[FIG2\]b.
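Equation (\[11a\]) is a $3\times 3$ Vandermonde system in the three unknown probabilities. A sketch of the reconstruction (assuming, as in the text, $p(k)\approx 0$ for $k<M-2$):

```python
import numpy as np

def reconstruct_tail(M, n_mean, n2_mean):
    """Recover [p(M-2), p(M-1), p(M)] from normalisation and the first two
    moments, Eq. (11a), for a near-fully-occupied final trap."""
    ks = np.array([M - 2, M - 1, M], dtype=float)
    # Vandermonde system: sum_k p(k) k^m = <n^m>,  m = 0, 1, 2
    V = np.vstack([ks**0, ks**1, ks**2])
    moments = np.array([1.0, n_mean, n2_mean])
    return np.linalg.solve(V, moments)
```

Feeding in the exact moments of a test distribution supported on $\{M-2, M-1, M\}$ returns it identically; the approximation error comes only from the neglected weight below $M-2$.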
Escape of expelled particles from the final trap region
=======================================================
Although the final occupation of the bound states in the final well is determined by the time $T$ after which the potential no longer changes, the obtained Fock state can only be used after the atoms expelled into the continuum have left the well area. To obtain an estimate of how fast this happens we have chosen an interval $\Omega$: $-1.5 L_f \le x \le 1.5 L_f$, containing the final well region, and monitored the mean number of atoms in $\Omega$, $\la n_{\Omega}(t)\ra$, during the ramping, $t\le T$, as well as for $t>T$. Figure \[FIG6\] shows the results for combined weakening and squeezing, $W_f/W_i = 0.18$, $L_f/L_i = 0.6$, for $T/t_0=0.05$ and $T/t_0=1.0$, close to the sudden and the adiabatic limits, respectively. With this choice of parameters, the initial well contains $20$ atoms and the final well’s maximum capacity is $5$.
In the nearly sudden limit, $T/t_0=0.05$, a considerable fraction of the expelled atoms remain in the region $\Omega$ by the time the well achieves its final configuration. One then has to wait a duration of the order of $t_0$ for the mean number of atoms in $\Omega$ to settle to its final value $\la n(T)\ra \approx M =5$. A more detailed analysis shows that it is the $(M+1)$-th (in this case, the sixth) state of the initial well which is delayed most in leaving the area. We note also that this decay is not exponential and, therefore, cannot be attributed solely to trapping of an atom in one of the resonances supported between the edges of the final well.
In the nearly adiabatic case, $T/t_0=1$, the initial one-particle levels are gradually pushed into the continuum, and most of the atoms have time to leave $\Omega$ while the potential shape is still changing. Once the change has stopped one still has to wait approximately $t_0$ for the contribution from the $(M+1)$-th state to clear the area. Although comparable in magnitude, the wait is somewhat longer than that in the nearly sudden limit, mostly because the expelled atoms receive more energy and move faster if the change of the potential shape is sudden.
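Monitoring $\la n_{\Omega}(t)\ra$ amounts to integrating the densities of the occupied time-evolved single-particle states over the window $\Omega$. A minimal sketch (hypothetical grid and states, not the code of this work; a plain Riemann sum stands in for a proper quadrature):

```python
import numpy as np

def mean_number_in_region(states, x, x_lo, x_hi):
    """Mean atom number in [x_lo, x_hi] for non-interacting fermions
    occupying the single-particle states given as rows of `states` on the
    grid `x`:  <n_Omega> = sum_n  integral_Omega |phi_n(x)|^2 dx."""
    dx = x[1] - x[0]
    mask = (x >= x_lo) & (x <= x_hi)
    return float(np.sum(np.abs(np.asarray(states)[:, mask])**2) * dx)
```

For normalized states the full-box result equals the particle number, which provides a quick sanity check of the discretisation.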
![Mean number of atoms in well region $(\Omega:$ $-0.75 L_f < x < 0.75 L_f)$ vs $t/t_0$ for $\tau=T/t_0=0.05$ and $\tau=T/t_0=1$. $W_i=(19.5)^2\pi^2/2$, $N=K$ and weakening is combined with squeezing, $L_f/L_i =0.6$. Vertical dashed lines indicate ramping times after which the potential assumes its final form.\
[]{data-label="FIG6"}](FIG6.pdf){width="7cm"}
Conclusions and discussion
==========================
In summary, we have investigated the formation of atomic Fock states by gradually changing the trapping potential and expelling the excess atoms from the trap. When either the depth (weakening) or the width (squeezing) of the trap alone is reduced, an increase of the ramping time leads to an improvement in the fidelity of the procedure until the adiabatic limit is reached. Such monotonic behaviour in a relevant variable is generic in phenomena with sudden-to-adiabatic crossovers. For example, in a Landau-Zener transition in which the crossing time is gradually increased, the final state goes from one level to the other depending on that time and the corresponding degree of adiabaticity of the passage. In our case, however, the combination of both procedures (squeezing and weakening), which produces high-fidelity Fock states in the sudden limit, remains superior to either operation alone for all ramping times. This opens a new route to feasible atomic Fock-state creation by trap reduction, approximating the sudden limit in the laboratory. The behaviour along the crossover is non-monotonic: for finite quenching times one observes a decrease in the mean number of trapped atoms accompanied by a corresponding increase in its variance. In this combined scheme we therefore find the rather unusual property that the sudden and adiabatic limits lead to the same result, albeit through different mechanisms, with a non-monotonic decrease of fidelity in between.
Acknowledgement
===============
We acknowledge support of University of Basque Country UPV-EHU (Grant GIU07/40), Basque Government (IT-472-10), and Ministry of Science and Innovation of Spain (FIS2009-12773-C02-01). JGM and AdC acknowledge the hospitality of the Max Planck Institute for the Physics of Complex Systems at Dresden, Germany.
Complex absorbing potential
===========================
The absorbing potential employed in this paper has the form suggested in [@Mano; @MPNE04]. With the wavefunction required to vanish at the edges of the computational box, $x=\pm L_b$, the absorbing potential is chosen to be zero for $0 <x<L_b/2$. For $L_b/2 \le x< L_b$ it is given by $$\begin{aligned}
\label{5}
V_{abs}(x)= -iD\bigg[Az-Bz^3+\frac{4}{(C-z)^2}-\frac{4}{(C+z)^2}\bigg],\nonumber\\\end{aligned}$$ where $z=C(2x/L_b-1)$, $C=2.62206$, $A=(1-16/C^3)$, $B=(1-17/C^3)/C^2$, and $D=C^2L_b^2/1.28$. Finally, $V_{abs}(x)=V_{abs}(-x)$. The value $L_b/L_i=10$ was used and the Schrödinger equation was solved numerically on a grid of $4\cdot 10^5$ points spanning the interval $[-L_b,L_b]$.
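A direct transcription of Eq. (\[5\]) with the constants quoted above (the strength convention for $D$ is taken verbatim from the text and should be checked against [@Mano; @MPNE04] before use):

```python
import numpy as np

def manolopoulos_cap(x, L_b):
    """Complex absorbing potential of Eq. (5): zero on the inner half of the
    box, purely negative-imaginary on the outer quarters, symmetric in x."""
    C = 2.62206
    A = 1.0 - 16.0 / C**3
    B = (1.0 - 17.0 / C**3) / C**2
    D = C**2 * L_b**2 / 1.28          # strength as quoted in the text
    x = np.asarray(x, dtype=float)
    V = np.zeros(x.shape, dtype=complex)
    outer = np.abs(x) >= L_b / 2.0    # the CAP lives only in the outer layer
    z = C * (2.0 * np.abs(x[outer]) / L_b - 1.0)
    V[outer] = -1j * D * (A * z - B * z**3
                          + 4.0 / (C - z)**2 - 4.0 / (C + z)**2)
    return V
```

The potential vanishes continuously at $|x|=L_b/2$ (where $z=0$) and diverges at the box edge $|x|\to L_b$ (where $z\to C$), which is the defining feature of this transmission-free absorber.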
Final occupation vs the depth of the initial trap
=================================================
The discussion in Section IV suggests that, close to the sudden limit and with weakening and squeezing applied together, starting with a deeper, fully filled well, $N=K$, should improve the fidelity. By contrast, starting with a deeper initial well with a fixed number of levels filled, $N$, would not be as beneficial, since the frequency of oscillations of the first unfilled state, $\la x|\phi_{N+1}^i\ra$, is almost independent of $V_i$. This is illustrated in Fig. \[FIG3\], where the solid line shows the final occupation of the well with the maximum capacity $M=5$ when all the levels of the initial well are filled, $N=K$. The graph shows peaks which correlate with the initial depths at which a new level appears in the initial well. In the same figure, the dashed line shows the dependence of $\la n(T)\ra$ on $V_i$ when only the first $25$ levels of the initial well are occupied. In this case, $\la n(T)\ra$ is no longer sensitive to the appearance of new bound states, but the fidelity of the preparation is slightly lower.
![Final occupation in the sudden limit, $\la n(T=0)\ra$, for the case of weakening with squeezing vs. the dimensionless initial well depth for $N=K$ (solid) and $N=25$ (dashed). Other parameters are $W_f=(4.95)^2\pi^2/{2(L_f/L_i)^2}$ ($M=5$) and $L_f/L_i=0.6$. Note that for a deep well $(2W)^{1/2}/\pi$ gives an estimate of the number of bound states, $K \approx Int ((2W)^{1/2}/\pi)$.[]{data-label="FIG3"}](FIG3.pdf){width="7cm"}
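The capacity estimate from the caption can be compared with the textbook count for a symmetric finite square well, $K = 1 + \mathrm{Int}(\sqrt{2W}/\pi)$, which reproduces the quoted values $K=20$ and $M=5$ (a sketch; the exact count near a threshold requires the transcendental matching conditions):

```python
import math

def capacity_estimate(W):
    """Deep-well estimate quoted in the Fig. 3 caption: K ~ Int(sqrt(2W)/pi)."""
    return int(math.sqrt(2.0 * W) / math.pi)

def capacity_exact(W):
    """Textbook bound-state count for a symmetric finite square well of
    dimensionless depth W and unit width: K = 1 + floor(sqrt(2W)/pi)."""
    return 1 + math.floor(math.sqrt(2.0 * W) / math.pi)

# Parameters of Figs. 1 and 3: W_i = (19.5 pi)^2/2, W_f = (4.95 pi)^2/2
W_i = (19.5 * math.pi)**2 / 2.0
W_f = (4.95 * math.pi)**2 / 2.0
print(capacity_exact(W_i), capacity_exact(W_f))   # 20 and 5 bound states
```

The half-integer choices $19.5$ and $4.95$ place the wells safely away from the thresholds at which a new bound state appears, which is where the deep-well estimate can be off by one.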
The case of pure weakening shown in Fig. \[FIG4\] is more complex.
![Same as Fig. \[FIG3\] but for the case of pure weakening, $L_i=L_f$, and for $N=K$ (solid) and $N=25$ (dashed). The inset shows the correlation between the structure in the $N=K$ curve and the number of bound states supported by the initial well.[]{data-label="FIG4"}](FIG4.pdf){width="7cm"}
There, increasing the depth of a fully filled well leads to a decrease in $\la n(T)\ra$ (solid), which is also highly sensitive to the increase in the number of initial bound states (see the inset in Fig. \[FIG4\]). As in Fig. \[FIG3\], partial filling of the initial well removes the structure in the dependence of $\la n(T)\ra$ on $V_i$, but leads to a lower final occupation in the deep-well limit.
[999]{} A. M. Dudarev, M. G. Raizen, and Q. Niu, Phys. Rev. Lett. [**98**]{}, 063001 (2007). A. del Campo and J. G.Muga, Phys.Rev. A, [**79**]{}, 023412-1 (2008). M. Pons, A del Campo, J. G.Muga and M. G.Raizen , Phys.Rev. A, [**79**]{}, 033629-1 (2009). S. Wan, M. G. Raizen, and Q. Niu. J. Phys. B 42, 195506 (2009). M. G. Raizen, S. Wan, C. Zhang, and Q. Niu. Phys. Rev. A 80, 030302(R) (2009). F. Heidrich-Meisner, S. R. Manmana, M. Rigol, A. Muramatsu, A. E. Feiguin, and E. Dagotto, Phys. Rev. A [**80**]{}, 041603(R) (2009). G. M. Nikolopoulos and D. Petrosyan, J. Phys. B 43, 131001 (2010). C. S. Chuu, F. Schreck, T. P. Meyrath, J. L. Hanssen, G. N. Price, and M. G. Raizen, Phys. Rev. Lett. [**95**]{}, 260403 (2005). M. D. Girardeau, J. Math. Phys. [**1**]{}, 516 (1960).
L. S. Levitov, H.-W. Lee and G. B. Lesovik, J. Math. Phys. [**37**]{}, 4845 (1996). I. Klich, in [*Quantum Noise in Mesoscopic Physics*]{}, ed. by Yu. V. Nazarov (Kluwer, Dordrecht, 2003); e-print arXiv:cond-mat/0209642. M. Budde and K. Molmer, Phys. Rev. A [**70**]{}, 053618 (2004).
Whenever $\hat{\Lambda}\subseteq\hat{\Lambda}_T$, $A_{nm}=\exp(i\theta)\delta_{nm}$, $F(\theta)=\exp(i\theta M)$ and the desired Fock state is prepared: $p(n)=\delta_{n,M}$.
D. E. Manolopoulos, J. Chem. Phys. 117, 9552 (2002). J. G. Muga, J. P. Palao, B. Navarro, and I. L. Egusquiza, Phys. Rep. [**395**]{}, 357 (2004). More precisely, we need both the sum, $I_1$, in Eq. (\[8\]) and the double sum, $I_2$, in Eq. (\[9\]) to be small compared to unity. Denoting by $A_{max}$ the largest of $|\la \phi_j^f|\hat{A}|\phi_k^f\ra|$, $j,k=1,2,\dots,M$, we find $I_1\le MA_{max}$ and $I_2\le M^2A_{max}^2$. Thus, for the contribution of the operator $\a$ to be negligible, $MA_{max} \ll 1$ is required.
---
abstract: 'Anomalous polarization characteristics of a magnetic resonance in CuGeO$_3$ doped with 2% of Co impurity are reported. In the Faraday geometry this mode is damped for the microwave field $\mathbf{B}_\omega$ aligned along a certain crystallographic direction, showing that the character of the magnetic oscillation differs from the standard spin precession. The observed resonance coexists with the EPR on Cu$^{2+}$ chains and is argued not to be caused by an “impurity” EPR, as previously claimed, but to correspond to a previously unknown collective mode of magnetic oscillations in an $S=1/2$ AF quantum spin chain.'
author:
- 'S. V. Demishev'
- 'A. V. Semeno'
- 'H. Ohta'
- 'S. Okubo'
- 'I. E. Tarasenko'
- 'T. V. Ishchenko'
- 'N. E. Sluchanko'
title: 'New polarization effect and collective excitation in $S=1/2$ quasi 1D antiferromagnetic quantum spin chain'
---
From the modern theoretical point of view [@1; @2] magnetic resonances in $S=1/2$ quasi 1D antiferromagnetic (AF) quantum spin chains should be treated as collective phenomena rather than a diffusive spin dynamics suggested by exchange narrowing [@3] approach. In the case of the electron paramagnetic resonance (EPR), the collective nature develops in the specific temperature dependences of the line width and $g$-factor caused by an interplay of the staggered field and anisotropic exchange and in the damping of EPR at low temperatures accompanied by rising of the breather mode [@1; @2]. In the latter case, the changes in the resonant spectrum also affect the experiment geometry; namely, EPR can be excited only in the Faraday geometry whereas the breather mode may be observed for both the Faraday and Voigt geometry [@4]. These predictions have been well proven experimentally [@4; @5; @6] for the cases of Cu-benzoate and doped CuGeO$_3$.
Another possible area where a collective motion of spins may reveal itself is polarization effects [@2]. However, although both the field theory approach [@2] and direct numerical simulation [@7] suggest that an EPR line depends on the orientation of the microwave field $\mathbf{B}_\omega$, the expected influence on the line width and $g$-factor is small. This result agrees well with previous calculations in the frame of exchange narrowing theory and with known experimental data [@8; @9; @10].
Here we report the experimental observation of a strong polarization dependence of a magnetic resonance in CuGeO$_3$ doped with a Co impurity, which has not been foreseen by existing theories of low-dimensional magnets. We argue that the discovered effect reflects the appearance of a previously unknown collective mode in an $S=1/2$ quasi 1D AF quantum spin chain.
A cobalt magnetic impurity ($S=3/2$) in CuGeO$_3$ substitutes copper in the chains [@11; @12; @13] and, in contrast to other dopants, induces the onset of a specific resonant mode, which accompanies the EPR on Cu$^{2+}$ chains [@11; @12]. Therefore, the experimental spectrum of resonant magnetoabsorption in CuGeO$_3$:Co is formed by two broad lines, which can be completely resolved for frequencies $\omega/2\pi\geq$100 GHz. It was found [@11; @12] that the frequencies of both modes are proportional to the resonant magnetic field in the wide range 60-360 GHz. The analysis of the $g$-factor values has shown that the resonant mode corresponding to higher magnetic fields represents a collective EPR on Cu$^{2+}$ chains, whereas the resonant mode corresponding to lower magnetic fields may be interpreted as an EPR on Co$^{2+}$ impurity clusters embedded in the CuGeO$_3$ matrix rather than as an antiferromagnetic resonance (AFMR) mode [@12].
In the present paper, we performed polarization measurements in the Faraday geometry of the magnetic resonance spectrum of CuGeO$_3$ containing 2% of Co at frequency 100 GHz in a temperature range 1.8-40 K. The details about samples preparation, characterization, and quality control are given elsewhere [@12]. It was established that for this concentration range Co impurity completely damp the spin-Peierls transition for the vast majority of Cu$^{2+}$ chains and no Neel transition was found at least down to 1.8 K [@11; @12]. The quantitative analysis of the EPR on Cu$^{2+}$ chains parameters have shown that line width and $g$-factor reflect properties of the chains with the damped spin-Peierls state and, moreover, the Cu$^{2+}$ magnetic subsystem retains one dimensional character in the aforementioned temperature interval [@5; @12].
In polarization experiments, a single crystal of CuGeO$_3$:Co was located on one of the endplates of a cylindrical reflecting cavity tuned to the TE$_{014}$ mode. A small DPPH reference sample was simultaneously placed in the cavity. The external field $\mathbf{B}$ up to was generated by a superconducting solenoid and was parallel to the cavity axis. Three cases, when $\mathbf{B}$ was aligned along the $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ crystallographic directions, were studied. In each case two orientations of the oscillating microwave field along the remaining axes were investigated, namely, the polarizations $\mathbf{B}_\omega\Vert\mathbf{b}$ and $\mathbf{B}_\omega\Vert\mathbf{c}$ for $\mathbf{B}\Vert\mathbf{a}$, and so on. (Hereafter we denote the Cu$^{2+}$ chain direction as $\mathbf{c}$; the $\mathbf{b}$ axis is perpendicular to $\mathbf{c}$, marking the direction of the second strongest exchange in the chain plane, and the $\mathbf{a}$ axis is orthogonal to the chain plane.) Therefore, below we report the results for six experimental geometries. Measurements were repeated for ten single crystals and provided identical results.
It is worth noting that $\mathbf{B}_\omega\perp\mathbf{B}$ in all cases studied, so that no polarization effect is expected for conventional EPR of a single spin. For an isotropic Heisenberg spin system, the spin Hamiltonian, consisting of the exchange and Zeeman terms, commutes with the magnitude of the total spin and with its $z$-component. Thus, in the absence of anisotropic terms in the Hamiltonian, which give rise to a finite line width, the EPR occurs at the same frequency $\omega=\gamma B$ as in the single-spin problem [@1; @2]. As a result, even in a strongly interacting system like a quantum spin chain, it is possible to use a semiclassical language and describe the magnetic resonance in terms of magnetization rotation around the field direction, as one does for a single spin. Therefore, for the quantum spin chain to “zero order”, the excitation of EPR does not depend on the direction of $\mathbf{B}_\omega$ in the plane perpendicular to $\mathbf{B}$. This argument qualitatively explains why polarization effects may only be expected in the line width and in corrections to the $g$-factor, in agreement with the exact results [@2; @7; @8; @9].
The obtained experimental data, however, contradict this picture. As can be seen from Fig. 1, the low-field mode A, which was suspected to be an “impurity” resonance in the previous studies [@11; @12], can be excited for one polarization only. At the same time, the EPR on Cu$^{2+}$ chains (resonance B) does not show any strong polarization dependence. For mode A and $\mathbf{B}\Vert\mathbf{a}$, the “active” polarization is $\mathbf{B}_\omega\Vert\mathbf{c}$ and the “non-active” polarization corresponds to the case $\mathbf{B}_\omega\Vert\mathbf{b}$ (Fig. 1, a). It is worth noting that in the case of the “non-active” polarization the magnetic resonance A is almost completely damped, and the weak traces of this mode for $\mathbf{B}_\omega\Vert\mathbf{b}$, which are visible at low temperatures, are due to the finite sample size and the related weak misalignment of $\mathbf{B}_\omega$ from the $\mathbf{b}$ axis in cavity measurements. Another characteristic feature of the observed phenomenon is its peculiar temperature dependence. For the “active” case, mode A appears below 40 K and at $T\sim$12 K becomes as strong as the resonance on Cu$^{2+}$ chains. Further lowering of the temperature makes the A resonance the dominating feature in the magnetoabsorption spectrum, with an amplitude considerably exceeding that of mode B (Fig. 1, b).
![Comparison of “active” and “non-active” polarizations (panel a) and temperature evolution of the magnetoabsorption spectrum for “active” polarization (panel b) in geometry $\mathbf{B}\Vert\mathbf{a}$. Narrow line in panel a represents DPPH signal. Figures near curves in panel b correspond to temperatures in K.[]{data-label="fig:1"}](figure1){width="0.8\linewidth"}
Similar behavior is observed in the $\mathbf{B}\Vert\mathbf{b}$ geometry (Fig. 2). In this case the “active” polarization for mode A is $\mathbf{B}_\omega\Vert\mathbf{a}$, and the “non-active” polarization is $\mathbf{B}_\omega\Vert\mathbf{c}$, whereas resonance B is not much affected by the orientation of the microwave field. In agreement with the $\mathbf{B}\Vert\mathbf{a}$ case, mode A is the strongest in the spectrum; however, for $\mathbf{B}\Vert\mathbf{b}$ the main resonance A is accompanied by its second harmonic (Fig. 2). Interestingly, although mode A is completely damped in the $\mathbf{B}_\omega\Vert\mathbf{c}$ case, the second harmonic of this resonance retains the same amplitude for both “active” and “non-active” polarizations (Fig. 2).
![“Active” and “non-active” polarizations for $\mathbf{B}\Vert\mathbf{b}$. Points correspond to the experiment; the solid line is a fit of the experimental spectrum assuming Lorentzian shapes of the resonances A, B and the second harmonic of resonance A. Partial contributions of these resonances to the spectrum are shown by dashed lines.[]{data-label="fig:2"}](figure2){width="0.8\linewidth"}
A dominating character of the resonance A at low temperatures is conserved in $\mathbf{B}\Vert\mathbf{c}$ case (Fig. 3). The effect of polarization appears to be weaker, and the amplitude of the resonance A for $\mathbf{B}_\omega\Vert\mathbf{b}$ is only two times less than for $\mathbf{B}_\omega\Vert\mathbf{a}$. Nevertheless, the polarization dependence of this mode remains anomalously strong, especially as compared with the resonance on Cu$^{2+}$ chains.
![“Active” and “non-active” polarizations for $\mathbf{B}\Vert\mathbf{c}$. Points correspond to the experiment; the solid line is a fit of the experimental spectrum assuming Lorentzian shapes of the resonances A and B. Partial contributions of these resonances to the spectrum are shown by dashed lines.[]{data-label="fig:3"}](figure3){width="0.8\linewidth"}
Data in Figs. 1-3 show that the resonance field of mode A varies substantially when the direction of the external magnetic field $\mathbf{B}$ is changed. The corresponding $g$-factor values are $g\approx$4.9 ($\mathbf{B}\Vert\mathbf{a}$), $g\approx$2.9 ($\mathbf{B}\Vert\mathbf{b}$), and $g\approx$3.7 ($\mathbf{B}\Vert\mathbf{c}$). Thus, the $g$-factor of this mode may vary by a factor of 1.7, while for the EPR on Cu$^{2+}$ chains (resonance B) the $g$-factors for the various crystallographic directions lie in the range 2.06-2.26 [@6; @12] and, hence, change by only 10%.
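As a consistency check of the quoted $g$-factors (an illustrative calculation, not data from the paper), the linear resonance condition $h\nu = g\mu_B B_{res}$ at $\omega/2\pi=100$ GHz implies the following resonance fields for mode A:

```python
# Resonance fields implied by the quoted g-factors at omega/2pi = 100 GHz,
# from the linear resonance condition h*nu = g * mu_B * B_res.
H_PLANCK = 6.62607015e-34   # Planck constant, J s
MU_B = 9.2740100783e-24     # Bohr magneton, J/T
NU = 100e9                  # microwave frequency, Hz

def resonance_field(g):
    """Resonance field B_res = h*nu / (g * mu_B), in tesla."""
    return H_PLANCK * NU / (g * MU_B)

for geometry, g in [("B||a", 4.9), ("B||b", 2.9), ("B||c", 3.7)]:
    print(f"mode A, {geometry}: g = {g} -> B_res = {resonance_field(g):.2f} T")
```

For comparison, a $g\approx 2$ resonance (mode B) at this frequency sits near 3.6 T, so mode A indeed appears at substantially lower fields, as stated in the text.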
The experimental data obtained in the present work demonstrate that the resonant mode A in CuGeO$_3$:Co is an anomalous one. First of all, this mode shows extremely strong dependence on orientation of the oscillating microwave field $\mathbf{B}_\omega$ in the Faraday geometry. At the same time, no comparable effect for EPR on Cu$^{2+}$ chains is observed in good agreement with theoretical expectations [@7; @8; @9; @10].
Secondly, the vanishing of resonance A for certain polarizations means that the character of the magnetic oscillations in this mode is completely different from the precession of the magnetization vector around the magnetic field direction described by the Landau-Lifshitz equation. Indeed, in the case of precession, the tip of the magnetization vector moves around a circle whose plane is perpendicular to the magnetic field, and hence any linear polarization in the Faraday geometry excites an EPR-like mode or a mode based on the correlated precession of various magnetization components [@14].
Thirdly, modes A and B coexist in a wide temperature range $1.8<T<
40$ K, and therefore the scenario of a vanishing collective EPR (mode B) after the onset of the breather excitation, as in Cu-benzoate [@1; @2; @4], does not hold. The appearance of mode A at relatively high temperatures, $T\sim$40 K (Fig. 1), simultaneously rules out the applicability of the standard scenario of doping [@15], where the coexistence of EPR and AFMR in doped CuGeO$_3$ may be expected only at temperatures below $0.3 T_{SP}\sim$4 K (AFMR coexisting with EPR in CuGeO$_3$ has been observed in experiments at $T<$2 K [@16]).
The above considerations do not allow an explanation of mode A in terms of a single-spin EPR problem. At the same time, it is not possible to describe the properties of this magnetic resonance by assuming either collective EPR or AFMR in a quantum spin chain system, or by other collective modes known to date, such as breather excitations. Thus, the doping of Cu$^{2+}$ quantum spin chains in CuGeO$_3$ with Co leads to the formation of a novel, unidentified magnetic resonance. Nevertheless, it is possible to deduce that the observed excitation of the magnetic subsystem of CuGeO$_3$:Co has a collective nature. The first argument favoring this supposition is the magnitude of this magnetic resonance. Taking into account that in the samples studied only 2% of the copper ions are substituted by the cobalt impurity, and that no spin-Peierls transition affecting mode B occurs, it is difficult to expect any individual impurity mode to considerably exceed the magnitude of the magnetic resonance on Cu$^{2+}$ chains (Figs. 1-3). Therefore, in our opinion, mode A is likely a specific collective excitation of a quasi 1D Cu$^{2+}$ chain, whose properties are modified by doping.
The unusual polarization dependence of mode A may be considered as another argument. Apparently, the observed behavior is forbidden for a single spin or an $S=1/2$ AF spin chain with an isotropic Hamiltonian [@1; @2]. However, in the presence of anisotropic terms, the spin-chain Hamiltonian does not commute with the total spin and its $z$-component, and hence, in principle, magnetic oscillation modes different from the standard spin precession may become possible. It is worth noting that the experimental data in Figs. 1-3 suggest a special role for the $\mathbf{b}$ axis. Indeed, for $\mathbf{B}_\omega\Vert\mathbf{b}$ resonance A is completely damped ($\mathbf{B}\Vert\mathbf{a}$) or its magnitude is reduced ($\mathbf{B}\Vert\mathbf{c}$), and for $\mathbf{B}\Vert\mathbf{b}$ the second harmonic of the anomalous mode, which is missing in the other geometries, develops. At the same time, previous studies [@5; @6] have shown that doping CuGeO$_3$ with magnetic impurities leads to the appearance of a staggered magnetization aligned predominantly along the $\mathbf{b}$ axis [@6]. Therefore, the observed mode A is likely related to the staggered field, which may be responsible for the anomalous polarization characteristics. Apparently no such effects caused by a staggered field can be expected for a single-spin resonance, and thus the idea of a mode A controlled by the staggered magnetization agrees with its collective nature. From the current theoretical point of view, a staggered field is known to be the anisotropic term in the Hamiltonian that is crucial for the EPR problem in the studied case [@1; @2]. However, the question of whether this type of anisotropy is sufficient to explain the observed phenomena remains open, and the required extension of the theory [@1; @2] is missing.
From the data presented in Figs. 1-3 it is possible to deduce the character of the magnetic oscillations for resonances A and B in CuGeO$_3$:Co. In a semiclassical approximation the magnetization in a given magnetic field $\mathbf{B}$ has the form $\mathbf{M}=\mathbf{M}_0+\mathbf{m}$, where $\mathbf{M}_0$ denotes the equilibrium value and $\mathbf{m}$ stands for the oscillating part [@14]. As long as magnetic resonances probe normal modes of the magnetization oscillations described by the vector $\mathbf{m}$, to excite a given mode the vector $\mathbf{B}_\omega$ must have a nonzero projection on some component of $\mathbf{m}$ [@14], i.e., the condition $(\mathbf{B}_\omega,\mathbf{m})\neq0$ for the scalar product must be fulfilled. For the geometry $\mathbf{B}\Vert\mathbf{a}$ and a normal mode in which the magnetization precesses around the field direction, $\mathbf{m}=(0,{m}_b,{m}_c)$ and both projections of $\mathbf{m}$ on the $\mathbf{b}$ and $\mathbf{c}$ axes are nonzero. Therefore any alignment of $\mathbf{B}_\omega$ in the $\mathbf{b}\mathbf{c}$ plane excites precession. The weak dependence of the resonance amplitude on the $\mathbf{B}_\omega$ alignment corresponds to the condition ${m}_b\approx{m}_c$. Thus, for mode B and $\mathbf{B}\Vert\mathbf{a}$, the trajectory of the end of the vector $\mathbf{M}$ is a circle lying in the $\mathbf{b}\mathbf{c}$ plane (a similar consideration apparently applies to mode B in the geometries $\mathbf{B}\Vert\mathbf{b}$ and $\mathbf{B}\Vert\mathbf{c}$).
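The selection rule $(\mathbf{B}_\omega,\mathbf{m})\neq0$ can be checked numerically. The snippet below is purely our own illustrative sketch (the vectors and unit amplitudes are hypothetical, chosen only to mimic the case $\mathbf{B}\Vert\mathbf{a}$ with $m_b\approx m_c$), not part of the analysis above:

```python
import numpy as np

def excites(B_omega, m):
    """Selection rule: a mode is excited only if (B_omega, m) != 0."""
    return not np.isclose(np.dot(B_omega, m), 0.0)

# Unit vectors along the crystallographic a, b, c axes.
a_hat, b_hat, c_hat = np.eye(3)

# Mode B for B || a: precession in the bc plane with m_b ~ m_c.
m_modeB = b_hat + c_hat

print(excites(b_hat, m_modeB))   # True: any B_omega in the bc plane couples
print(excites(c_hat, m_modeB))   # True
print(excites(a_hat, m_modeB))   # False: no oscillating component along a
```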
The same analysis can be applied to resonance A. The data in Fig. 1 suggest that in the geometry $\mathbf{B}\Vert\mathbf{a}$ the oscillating contribution to the magnetization acquires the form $\mathbf{m}=(0,0,{m}_c)$, leading to the "active" polarization $\mathbf{B}_\omega\Vert\mathbf{c}$ and the "non-active" polarization $\mathbf{B}_\omega\Vert\mathbf{b}$ (Fig. 1). Therefore in this case the end of the vector $\mathbf{M}$ moves along a line parallel to the $\mathbf{c}$ axis. Analogously, $\mathbf{m}=({m}_a,0,0)$ for $\mathbf{B}\Vert\mathbf{b}$, and the linear oscillation occurs along the $\mathbf{a}$ axis (Fig. 2). For $\mathbf{B}\Vert\mathbf{c}$, mode A can be excited in both polarizations and hence $\mathbf{m}=({m}_a,{m}_b,0)$. However, the decrease of the resonance magnitude for $\mathbf{B}_\omega\Vert\mathbf{b}$ suggests the condition ${m}_a\approx2{m}_b$ (Fig. 3). As a result, the trajectory of the end of the vector $\mathbf{M}$ is an ellipse in the $\mathbf{a}\mathbf{b}$ plane elongated in the $\mathbf{a}$ direction. The above considerations for mode A are summarized in Fig. 4.
![Schema of possible magnetic oscillations for normal modes responsible for anomalous polarization dependences of the mode A in three cases ($\mathbf{B}\Vert\mathbf{M}_0\Vert\mathbf{a}$, $\mathbf{B}\Vert\mathbf{M}_0\Vert\mathbf{b}$ and $\mathbf{B}\Vert\mathbf{M}_0\Vert\mathbf{c}$). Oscillating contribution $\mathbf{m}$ is assumed to vary harmonically with time in cases $\mathbf{M}_0\Vert\mathbf{a}$ and $\mathbf{M}_0\Vert\mathbf{b}$. For $\mathbf{M}_0\Vert\mathbf{c}$ vector $\mathbf{m}$ rotates around the $\mathbf{c}$ axis. Dotted lines mark trajectories of the vector $\mathbf{M}=\mathbf{M}_0+\mathbf{m}$ end.[]{data-label="fig:4"}](figure4){width="0.6\linewidth"}
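For concreteness, the three mode-A trajectories summarized in Fig. 4 can be parametrized numerically. The parametrization below is our own illustration (unit amplitudes are assumed; only the ratio $m_a\approx2m_b$ is taken from the text):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 400)

# B || a: m = (0, 0, m_c) -> the end of M oscillates along a line || c.
traj_Ba = np.stack([0 * t, 0 * t, np.cos(t)], axis=1)

# B || b: m = (m_a, 0, 0) -> linear oscillation along a.
traj_Bb = np.stack([np.cos(t), 0 * t, 0 * t], axis=1)

# B || c: m = (m_a, m_b, 0) with m_a ~ 2 m_b -> ellipse in the ab plane
# elongated along a (m rotates about the c axis).
traj_Bc = np.stack([2.0 * np.cos(t), np.sin(t), 0 * t], axis=1)

# Extent along a is about twice the extent along b.
print(np.ptp(traj_Bc[:, 0]) / np.ptp(traj_Bc[:, 1]))
```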
To the best of our knowledge, modes with linear or elliptic oscillation trajectories have neither been reported for any magnetic resonance nor foreseen by theoretical studies. Moreover, the current understanding of the whole field of magnetic resonance (including EPR, AFMR and ferromagnetic resonance) relies essentially on semiclassical precession of the magnetization in an external field, and hence leaves no room for the observed new polarization effect. Therefore, an adequate theory relevant to the studied case, including the different polarization characteristics of the magnetic resonance harmonics, is now needed.
In conclusion, we have shown that doping CuGeO$_3$ with 2% of Co impurity induces an anomalous magnetic resonance mode possessing unique polarization characteristics. This resonance coexists with the EPR on the Cu$^{2+}$ chains and is likely caused by a new, previously unknown, collective mode of magnetic oscillations in an $S=1/2$ AF quantum spin chain.
The authors are grateful to M.Oshikawa for stimulating discussions. This work was supported by the RFBR grant 04-02-16574 and by the Programme "Strongly correlated electrons" of the Russian Academy of Sciences.
[99]{}
M.Oshikawa and I.Affleck, Phys. Rev. Lett. **82**, 5136 (1999).
M.Oshikawa and I.Affleck, Phys. Rev. **B 65**, 134410 (2002).
P.W.Anderson and P.R.Weiss, Rev. Mod. Phys. **25**, 269 (1953).
Y.Ajiro, J. Phys. Soc. Jpn. **72**, Suppl. B, 12 (2003).
S.V.Demishev, Y.Inagaki, H.Ohta, S.Okubo, Y.Oshima, A.A.Pronin, N.A.Samarin, A.V.Semeno, N.E.Sluchanko, Europhys. Lett. **63**, 446 (2003).
S.V.Demishev, A.V.Semeno, A.A.Pronin, N.E.Sluchanko, N.A.Samarin, H.Ohta, S.Okubo, M.Kimata, K.Koyama, M.Motokawa, Prog. Theor. Phys. Suppl. **159**, 387 (2005).
S.Miyashita, T.Yoshino and A.Ogasahara, J. Phys. Soc. Jpn. **68**, 655 (1999).
Y.Natsume, F.Sasagawa, M.Toyoda and I.Yamada, J. Phys. Soc. Jpn. **48**, 50 (1980).
I.Yamada and Y.Natsume, J. Phys. Soc. Jpn. **48**, 58 (1980).
Y.Natsume, F.Noda, F.Sasagawa and H.Kanazawa, J. Phys. Soc. Jpn. **52**, 1427 (1983).
S.V.Demishev, Y.Inagaki, M.M.Markina, H.Ohta, S.Okubo, Y.Oshima, A.A.Pronin, N.E.Sluchanko, N.A.Samarin and V.V.Glushkov, Physica **B 329-333**, 715 (2003).
S.V.Demishev, A.V.Semeno, N.E.Sluchanko, N.A.Samarin, A.A.Pronin, Y.Inagaki, S.Okubo, H.Ohta, Y.Oshima and L.I.Leonyuk, Physics of the Solid State **46**, 2238 (2004).
P.E.Anderson, J.Z.Liu and R.N.Shelton, Phys. Rev. **B 56**, 11014 (1997).
A.G.Gurevich and G.A.Melkov, *Magnetization Oscillations and Waves* (CRC Press, Boca Raton, 1996).
M.Mostovoy, D.Khomskii and J.Knoester, Phys. Rev. **B 58**, 8190 (1998).
A.I.Smirnov, V.N.Glazkov, A.N.Vasil'ev, L.I.Leonyuk, S.Coad, D.Mck Paul, G.Dhalenne, and A.Revcolevschi, JETP Lett. **64**, 305 (1996).
---
abstract: 'The AB-stacked bilayer graphene (BLG) is a pure semiconductor whose band gap and properties can be tuned by various methods such as doping or applying a gate voltage. Here we show an alternative method to control the electronic properties of BLG: intercalation of transition-metal atoms between the two graphene monolayers (MLG). A theoretical investigation has been performed on two-dimensional MLG, BLG, and BLG-intercalated nanostructured materials, all of which are energetically stable. Our study reveals that only the MLG and the BLG intercalated with one Vanadium (V) atom (BLG-1V) have a Dirac Cone at the K-point. This study reveals a new strategy to control the material properties of BLG so that it exhibits various behaviors, including metallic, semi-metallic, and semiconducting, by varying the concentration and spin arrangement of the V atoms in BLG. In all cases, the present DFT calculations show that the 2p$_z$ sub-shells of the C atoms in graphene and the 3d$_{yz}$ sub-shells of the V atoms provide the electron density near the Fermi level, controlling the material properties of the BLG-intercalated materials. Thus we prove that out-of-plane atoms can influence in-plane electronic densities in BLG.'
author:
- Srimanta Pakhira
- 'Kevin P. Lucht'
- 'Jose L. Mendoza-Cortes'
bibliography:
- 'PRL-Bibliography.bib'
title: 'An Alternative Strategy to Control the Electronic Properties of Bilayer Graphene: Semi-metal to Metal Transition and a 2D Material with Dirac Cone'
---
In modern science and technology, there has been a tremendous amount of theoretical and experimental interest in the low-energy electronic properties of ultrathin graphite films, including graphene monolayers and bilayers [@Novoselov2005]. Graphene [@Novoselov2004], a 2D honeycomb sheet of carbon just one atom thick with sp$^2$-hybridized bonds between the carbon atoms [@CastroNeto2009], is an ideal and novel material for making nanoelectronic and photonic devices, because it is a very good electrical conductor as well as the thinnest 2D material known. Graphene has a unique linear band structure around the Fermi level (E$_F$), forming a Dirac Cone at the K-points of its Brillouin zone, which has led to fascinating phenomena exemplified by massless Dirac-fermion physics [@Ohta2006; @Castro2007; @Zhou2007; @Mullen2015]. This emergent behavior of Dirac fermions in condensed-matter systems defines the unifying framework for a class of materials called Dirac materials.
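The linear dispersion at the K-point can be reproduced with the standard nearest-neighbour tight-binding model of graphene. This is a textbook sketch, not the DFT setup used in this Letter; the hopping energy $t\approx2.7$ eV is a typical literature value:

```python
import numpy as np

a = 2.46                               # graphene lattice constant (Angstrom)
t = 2.7                                # nearest-neighbour hopping (eV)
a1 = a * np.array([1.0, 0.0])          # hexagonal lattice vectors
a2 = a * np.array([0.5, np.sqrt(3) / 2])

def bands(k):
    """E(k) = +/- t |f(k)| with f(k) = 1 + exp(i k.a1) + exp(i k.a2)."""
    f = 1.0 + np.exp(1j * np.dot(k, a1)) + np.exp(1j * np.dot(k, a2))
    return -t * abs(f), t * abs(f)

K = np.array([4 * np.pi / (3 * a), 0.0])   # K point of the hexagonal BZ

E_v, E_c = bands(K)
print(abs(E_v) < 1e-9 and abs(E_c) < 1e-9)  # True: the bands touch at K
```

Expanding $|f(k)|$ around $K$ gives the linear (conical) dispersion responsible for the massless Dirac fermions mentioned above.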
Monolayer graphene (MLG) has an electronic structure that can be controlled by an electric field [@Novoselov2012]. To be used in digital electronics, however, MLG has the well-known zero-gap issue [@Avouris2010], which makes a high on-off ratio difficult and deems it unsuitable for transistors, the foundation of all modern electronic devices. Bilayer graphene (BLG) can be used instead of MLG to overcome the zero-gap problem, with a gap opening simply by applying an electric field [@Oostinga2008; @McCann2007]. BLG has an entirely different, and equally interesting, band structure [@Ohta2007; @Nath2015], since its band gap can be modulated from zero to a few eV by different methods such as doping or applying an external electric field. In addition, BLG holds potential for electronics and nanotechnological applications, particularly because of the possibility of controlling both the carrier density and the energy band gap through doping or gating [@McCann2006; @Castro2007; @Novoselov2006; @Ohta2006; @Nath2015]. The most remarkable property of BLG is that the inversion-symmetric AB-stacked BLG is a zero-band-gap semiconductor in its pristine form, but a non-zero band gap can be induced by breaking the inversion symmetry of the two graphene monolayers, as in AA-stacked BLG. When two graphene monolayers are stacked (in both the AA and AB cases), the monolayer features are lost and the dispersion becomes quadratic [@McCann2006]. Thus, BLG acts as a semiconductor with a broadly tunable, field-induced band gap [@Castro2007]. Because tuning the band gap of BLG can turn it from a semiconductor into a metal or semi-metal, a single millimeter-square sheet of BLG could potentially hold millions of differently tuned electronic devices that can be reconfigured.
Recently, lasers have been used to get BLG to act as a conductor, a Dirac material or semiconductor; an important step towards computer chips made of a 2D material [@Lin2015].
In this Letter we show that the band structure of BLG can be controlled by adding Vanadium (V) atoms between the two graphene layers, so that the electronic gap between the valence and conduction bands can be tuned; this results in the appearance of a Dirac Cone. By definition, intercalation occurs when one layer of metal atoms is inserted between two graphene monolayers. Intercalation of V atoms between the graphene bilayer surprisingly produces a diverse array of electronic properties. We also show that the band gap depends on the coupling between the two graphene layers and on the symmetry of the BLG system, i.e., AB vs. AA stacking. We have performed a theoretical investigation of the electronic properties, such as the band structure, projected density of states (DOSs), spin arrangements, and structural stability, of MLG, BLG, and BLG intercalated with one (BLG-1V), two (BLG-2V), and three (BLG-3V) Vanadium atoms per unit cell. We have also investigated their structural stabilities by computing the Gibbs free energies ($\Delta G\!_{f}$) of the aforementioned systems.
![The optimized structures, band structures and densities of states (DOSs) are shown for a) MLG, b) BLG, c) BLG-1V, d) BLG-2V, and e) BLG-3V 2D materials. The individual components of the DOSs of the C and V atoms and the total DOSs (depicted by “Total”) are also presented.[]{data-label="fig:V_graphene2DPROP"}](./Final-Figure-1_02_01){width="0.99\linewidth"}
The geometries and 2D layer structures of the MLG, BLG and BLG-intercalated materials (BLG-1V, BLG-2V and BLG-3V) were optimized by dispersion-corrected hybrid density functional theory [@Becke1993; @Becke1988; @Lee1988; @Grimme2006; @Pakhira2012; @Pakhira2013], i.e., B3LYP-D2, which has been shown to give correct electronic properties of 2D materials [@Grimme2006; @Lucht2015]. The semi-empirical Grimme-D2 dispersion corrections were added in the present calculations in order to incorporate van der Waals dispersion effects and to estimate the van der Waals forces [@Grimme2006; @Pakhira2012; @Pakhira2013]. The CRYSTAL14 suite [@Dovesi2014] was used to perform all the computations. The graphene-intercalated nanostructured 2D materials (BLG-1V, BLG-2V, and BLG-3V) were prepared by adding the V atoms to one unit cell of the crystal structures. In the present computations, triple-zeta valence with polarization quality (TZVP) Gaussian basis sets were used for both C and V atoms [@Peintinger2013]. Integrations inside the first Brillouin zone were sampled on 15 x 15 x 1 k-mesh grids for all the materials, for both the optimizations and the material-property (band structure and density of states) calculations. We have plotted the bands along the high-symmetry k-direction $\mathrm{M-K-\Gamma-M}$ in the first Brillouin zone. Electrostatic potential calculations have been included, i.e., the energy is reported with respect to the vacuum. In this work, both the AB-stacked BLG system and the AA-stacked BLG-intercalated materials have been considered, with tunable interlayer separation.
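The band-structure path can be constructed explicitly. The helper below is our own sketch (fractional coordinates of the hexagonal Brillouin zone; the segment density is arbitrary), not the CRYSTAL14 input:

```python
import numpy as np

# High-symmetry points of the hexagonal Brillouin zone, in fractional
# coordinates of the reciprocal lattice.
SPECIAL = {
    "M": np.array([0.5, 0.0]),
    "K": np.array([1 / 3, 1 / 3]),
    "G": np.array([0.0, 0.0]),   # Gamma
}

def kpath(labels, n_per_segment=30):
    """Sample points along the piecewise-linear path through `labels`."""
    pts = []
    for start, stop in zip(labels[:-1], labels[1:]):
        seg = np.linspace(SPECIAL[start], SPECIAL[stop],
                          n_per_segment, endpoint=False)
        pts.extend(seg)
    pts.append(SPECIAL[labels[-1]])  # close the path with the final point
    return np.array(pts)

path = kpath(["M", "K", "G", "M"])
print(path.shape)  # (91, 2): 3 segments x 30 points + final endpoint
```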
The unit cell constructed this way admits one configuration, known as AA, where each atom is exactly above an atom of the other graphene layer (see Figure \[fig SM1:AA-Stacked-Structure\] in the Supplementary Material), and another configuration, AB, where an atom of the top graphene layer sits exactly at the center of a hexagon of the lower layer, as shown in Figure 1b. To reproduce previous experimental and theoretical results, we have considered both AA and AB stacking for the BLG material; however, we have used the AA-stacked layer structures for the BLG-1V, BLG-2V, and BLG-3V materials, which are the stable configurations. The optimized geometries of the MLG, AB-stacked BLG, and BLG-intercalated materials, along with their band structures and projected densities of states (DOSs), are shown in Figure 1. The optimized geometry, band structure and DOSs of pristine AA-stacked BLG are reported in the Supplemental Material. The contributions of the sub-shells (such as p$_x$, p$_y$, p$_z$, d$_{yz}$, etc.) to the total DOSs have also been computed.
The present DFT calculations show that a Dirac Cone exists at the K-point in the band structure of the pristine MLG material, which is consistent with previous results [@Ohta2006; @Castro2007; @Zhou2007; @Mullen2015]. The valence bands of MLG arise from the 2p$_z$ sub-shells of the C atoms, as depicted in the DOSs calculation; these sub-shells form the Dirac Cone in the band structure of the pristine MLG (as shown in Figure 1a). The total DOSs calculations show the electron density around the Fermi energy level (E$_F$), confirming MLG as a semimetal.
For the pristine AB-stacked BLG material, the Fermi level lies below the point where the valence and conduction bands touch, as shown in Figure 1b. We have also studied AA-stacked BLG, but our computations show that AB-stacked BLG is thermodynamically more stable than the AA-stacked form. The valence and conduction bands cross E$_F$ at the K-points for both the AA- and AB-stacked BLG materials. Thus, neither the AA- nor the AB-stacked BLG material has a Dirac Cone, in good agreement with previous experimental results [@Min2007; @Latil2006]; the AA-/AB-stacked BLG materials are ordinary non-zero/zero band gap semiconductors. The bond distances, lattice constants ($a$ and $b$), and intercalation distance ($d$) are reported in Table I. The distance between the two monolayers in the pristine AB-stacked BLG material is about 3.390 Å, which agrees with previous experimental and theoretical results [@Min2007]. We have also calculated the binding energy ($\Delta G\!_{f}$) of all the systems studied here, as shown in Table II. The binding energy required to form the AB-stacked BLG from MLG is about -0.86 eV.
Table I. Optimized bond distances, lattice constants, and intercalation distances.

| Component | C-C (Å) | C-V (Å) | $a$ (Å) | $b$ (Å) | $d$ (Å) |
|-----------|---------|---------|---------|---------|---------|
| MLG       | 1.416   | N/A     | 2.451   | 2.451   | N/A     |
| BLG       | 1.421   | N/A     | 2.449   | 2.449   | 3.390   |
| BLG-1V    | 1.439   | 2.243   | 4.942   | 4.942   | 3.441   |
| BLG-2V    | 1.442   | 2.335   | 8.575   | 5.022   | 3.645   |
| BLG-3V    | 1.464   | 2.413   | 5.032   | 5.032   | 3.658   |
Table II. Electronic state, binding energy, and presence of a Dirac Cone.

| Component | State         | $\Delta G\!_{f}$ (eV) | Dirac Cone |
|-----------|---------------|-----------------------|------------|
| MLG       | Semimetal     | N/A                   | Yes        |
| BLG       | Semiconductor | -0.86                 | No         |
| BLG-1V    | Metal         | -5.97                 | Yes        |
| BLG-2V    | Metal         | -5.67                 | No         |
| BLG-3V    | Metal         | -5.36                 | No         |
The optimized structure between Vanadium and graphene follows an AA-stacking arrangement with the Vanadium placed at the center of the honeycomb, forming the BLG-1V 2D material. In this conformation, the Vanadium d-orbitals are situated to interact favorably with the p$_z$ orbital orthogonal to the graphene layer. The BLG-1V structure is highly favorable, by -5.97 eV relative to BLG. Additional Vanadium atoms are also favorable: by -5.67 eV going from single- to double-metal addition to form the BLG-2V material, and by -5.36 eV going from double- to triple-metal addition to form the BLG-3V material, as shown in Table II. The present DFT calculations show that the intercalation distances gradually increase from BLG-1V to BLG-3V, though not uniformly, as shown in Table I. Our computations also show that the C-C bond distance increases by an average of 0.014 Å in the BLG-1V, BLG-2V and BLG-3V 2D materials, relative to BLG, due to the presence of the V atoms between the graphene layers. We have also calculated the stability of the BLG material with four V atoms (BLG-4V), but the frequency calculations show that this structure is thermodynamically unstable, as it has many imaginary frequencies; this result is therefore excluded from this Letter.
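The stepwise energetics quoted above can be expressed compactly. The helper below is a hypothetical illustration (the function name, the placeholder totals and the V reference energy are ours; only the idea of a stepwise gain as a difference of totals reflects the discussion above):

```python
def stepwise_binding(g_totals, g_v_atom):
    """Stepwise gain for adding the n-th V atom (illustrative definition):
    dG_f(n) = G(BLG-nV) - G(BLG-(n-1)V) - G(V)."""
    return [g_totals[n] - g_totals[n - 1] - g_v_atom
            for n in range(1, len(g_totals))]

# Illustrative (made-up) total free energies for BLG, BLG-1V, ... in eV.
g = [0.0, -8.0, -15.5, -22.5]
print(stepwise_binding(g, g_v_atom=-2.0))  # [-6.0, -5.5, -5.0]
```

With the made-up totals above, each additional V atom is bound slightly less strongly than the previous one, mirroring the trend from -5.97 to -5.36 eV in Table II.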
The addition of Vanadium atoms was found to substantially alter the electronic properties of the graphene bilayer. The addition of a single Vanadium atom intercalated in AA-stacked BLG (i.e., BLG-1V) yields a Dirac Cone along the K direction, as shown in Figure 1c. The DOSs around the Fermi energy level indicate metallic behavior. The individual components of the p-orbital electron density of the C atoms in graphene and of the d-orbital electron density of the V atoms are shown explicitly, along with the total electron density, in the DOSs computations depicted in Figure 1c. The density projections show that electrons can move freely from the valence band to the conduction band, revealing the intrinsic electron mobility of the BLG-1V material. The DOSs calculations for BLG-1V show that the electron density around E$_F$ comes from the p$_z$ (highlighted in red) sub-shells of the C atoms in graphene and the d$_{yz}$ (highlighted in violet) sub-shell of the V atom. In other words, the p$_z$ sub-shell of carbon and the d$_{yz}$ sub-shell of Vanadium are solely responsible for the emergence of the Dirac Cone in the BLG-1V material. In MLG the Dirac Cone arises from the p-orbitals of the C atoms (and perhaps similarly in related materials), but for BLG-1V the nature of the Dirac Cone is different, since it may come from the p-orbitals alone, the d-orbitals alone, or a p-d hybrid; we cannot tell for now, but the difference in nature is interesting. As an excellent conductor and Dirac material, BLG-1V could be used in various modern electronic and nanotechnology devices.
BLG-2V can be formed by the addition of another Vanadium atom to the BLG-1V material. This addition causes the Dirac Cone to disappear and the bands to shift, showing metallic behavior. The most stable spin states are reported in this Letter; all other spin states, and the discussion of the spin alignment of the BLG-2V and BLG-3V 2D materials, are described in the Supplementary Material. The 3d orbitals of V receive more electron donation from the graphene 2p$_z$ sub-shells when the total spin is increased. The band structures reveal that along the M to K direction the valence band crosses the Fermi level in both the BLG-2V and BLG-3V materials, resulting in a large electron distribution around the Fermi level, as shown in Figures 1d and 1e, respectively. Thus both BLG-2V and BLG-3V show metallic behavior, as depicted in their DOSs. In both materials, the DOSs show that the electron density around the Fermi level is due to the p$_z$ sub-shell electrons of the C atoms in graphene and the d$_{yz}$ sub-shell electrons of the V atoms, as depicted in the sub-shell DOSs calculations. Among all the intercalated materials, BLG-3V shows the least electron density near E$_F$. We have studied the structures and material properties of the MLG, BLG, and BLG-intercalated nanostructured 2D materials. The individual components of the sub-shells of the p-orbitals of the C atoms and of the d-orbitals of the V atoms, which contribute to the total electron density in the DOSs, are also reported along with the total DOSs. Among all the systems, only the MLG and BLG-1V materials have a Dirac Cone at the K-point; once the concentration of Vanadium is increased, the Dirac Cone disappears. This work provides the first theoretical investigation of BLG intercalated with one, two and three V atoms per unit cell, and the first observation of a Dirac Cone in the BLG-1V material.
It further shows how the material properties change in BLG-1V, BLG-2V and BLG-3V due to the presence of the Vanadium atoms. We have also found that the 2p$_z$ sub-shell of the C atoms and the 3d$_{yz}$ sub-shell of the V atoms in the BLG-1V, BLG-2V, and BLG-3V materials are the main components around E$_F$, playing the main role in the appearance of the Dirac Cone and the conducting properties. In conclusion, the Dirac Cone structure gives the MLG and BLG-1V materials massless fermions leading to ultrahigh carrier mobility, with the former having p-orbital character while the latter involves both p- and d-orbital character.
------------------------------------------------------------------------
\
Srimanta Pakhira$^{1, 2}$, Kevin P. Lucht$^{1, 2}$, and Jose L. Mendoza-Cortés$^{1, 2, *}$\
$^{1}$ Condensed Matter Theory, National High Magnetic Field Laboratory, Scientific Computing Department, Materials Science and Engineering, Florida State University (FSU), Tallahassee, Florida, 32310, USA\
$^{2}$ Department of Chemical & Biomedical Engineering, FAMU-FSU Joint College of Engineering, Tallahassee, Florida, 32310, USA.\
E-mail: <mendoza@eng.famu.fsu.edu>
------------------------------------------------------------------------
I. Optimized geometry, band structure and density of states of AA-stacked bilayer graphene
==========================================================================================
![The optimized structures of AA-stacked bilayer graphene (BLG) 2D materials.[]{data-label="fig SM1:AA-Stacked-Structure"}](./SI/AA-Stacked-Structure){width="0.98\linewidth"}
The optimized structure of AA-stacked bilayer graphene (BLG) is shown in Figure \[fig SM1:AA-Stacked-Structure\], and the bands and DOSs are shown in Figure \[fig SM2:AA-Stacked-BLG-DOSS\_DPDOSS\_BAND-2D\]. The band and DOSs calculations reveal that AA-stacked BLG is a non-zero band gap semiconductor, with an indirect band gap of around 0.25 eV, as depicted in the band structure. The individual components of the p-orbital (i.e., the p$_x$, p$_y$, and p$_z$ sub-shells) are calculated along with the total DOSs. We found that the p$_z$ sub-shell accounts for the largest electron contribution to the total DOSs.
![The band structure and density of states (DOSs) of AA-stacked BLG. The individual components of DOSs of the C atom, and total DOSs (depicted by “Total” in the third row) are also presented in here.[]{data-label="fig SM2:AA-Stacked-BLG-DOSS_DPDOSS_BAND-2D"}](./SI/AA-Stacked-BLG-DOSS_DPDOSS_BAND-2D){width="0.98\linewidth"}
II. Effect of spin alignment in BLG-2V and BLG-3V materials
===========================================================
An alternative method of controlling the electronic properties of the BLG-intercalated materials is to alter the spin configuration. We carried out a Mulliken spin-density analysis to study the spin configurations; note that the estimated spin values may differ if another method is used to extract them from the wave function. All the materials above refer to Vanadium in the high-spin state, and in the cases of BLG-2V and BLG-3V in a ferromagnetic (FM) spin arrangement. By considering an anti-ferromagnetic (AFM) arrangement of the spins instead, we can drastically modify the properties. The spin configurations of the BLG-intercalated materials are reported in Table S1.
Table S1. Mulliken spin analysis of the BLG-intercalated materials.

| Materials  | Average Spin of V | Total Spin |
|------------|-------------------|------------|
| BLG-1V     | 2.245             | 2.082      |
| FM BLG-2V  | 1.856             | 3.740      |
| FM BLG-3V  | 1.489             | 4.375      |
| AFM BLG-2V | 0.000             | 0.000      |
| AFM BLG-3V | 0.157             | 0.405      |
Figure \[fig:V2\_graphene\_AFM\_2DPROP\] shows the band structure and DOSs of the AFM BLG-2V material. In the case of BLG-2V, the AFM conformation yields higher spins on the individual V atoms, 2.22 and -2.22, compared to the FM conformation with 1.86 for each Vanadium; the average spin of the AFM BLG-2V material is thus 0.0. Relative to its FM counterpart, the AFM BLG-2V structure is more stable by $\Delta G\!_{f}$ = -0.410 eV. Examining the electronic properties, we notice considerable differences between the AFM and FM BLG-2V structures. In the AFM state, BLG-2V has a degenerate pair of band structures (for alpha and beta electrons) with a gap opening between the valence and conduction bands. The band gaps are 0.101 eV for the indirect transition and 0.681 eV for the direct transition. This calculation thus reveals that a band gap can be opened in a BLG-intercalated material by altering the spin alignment.
![The band structure and DOSs of the alpha electron of AFM BLG-2V materials are presented. The beta electrons exhibit identical information. The individual components of DOSs of the C and V atoms, and total DOSs are also presented here. Compared to the FM, in the AFM state a band gap appears.[]{data-label="fig:V2_graphene_AFM_2DPROP"}](./V2_graphene_AFM_2DPROP){width="0.99\linewidth"}
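The distinction between the two quoted gaps can be made explicit: the indirect gap compares band extrema at different k-points, whereas the direct gap is the minimum vertical separation at a single k-point. The helper and the toy band energies below are our own illustration, not the computed AFM BLG-2V bands:

```python
import numpy as np

def gaps(valence, conduction):
    """Return (indirect, direct) band gaps from band energies on a k-path."""
    v, c = np.asarray(valence), np.asarray(conduction)
    indirect = c.min() - v.max()        # extrema may sit at different k
    direct = (c - v).min()              # minimized at a single k-point
    return indirect, direct

# Toy bands (eV) whose extrema sit at different k-points.
v = [-0.30, -0.10, -0.25, -0.40]
c = [0.60, 0.70, 0.40, 0.55]

indirect, direct = gaps(v, c)
print(indirect <= direct)  # True: the indirect gap is never larger
```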
For the case of BLG-3V, there also exists one AFM conformation, given the symmetry of the unit cell: two Vanadium atoms with electrons in the alpha state and one in the beta state, or vice versa. We found that the AFM BLG-3V structure allocates the spin as -1.726 for the beta-state Vanadium and 1.098 for each of the two alpha-state Vanadiums; the average spin of AFM BLG-3V is 0.405. Compared to the FM BLG-3V structure, it is slightly less favorable, by $\Delta G\!_{f}$ = 0.193 eV. As for the electronic properties, we found that both the alpha and beta electrons are highly conductive, with large DOSs and band overlap around the Fermi energy (see Figure \[fig:V3\_graphene\_AFM\_PROP\] and Figure \[fig:V3\_graphene\_AFM\_BETA\_PROP\]), resulting in a large electron density around E$_F$. The present calculations show that both the FM and AFM states of the BLG-3V material are conducting and thus exhibit ordinary metallic behavior. This study of the spin behavior shows that the electronic properties of the BLG-intercalated materials depend not only on the concentration of Vanadium atoms intercalated in BLG, but also on the arrangement of the spins, which adds an additional variable for modifying the behavior of BLG-intercalated materials.
![The band structure and DOSs of the alpha electron of AFM BLG-3V materials are presented. The individual components of the DOSs contributions from the C and V atoms, and total DOSs are also presented here. []{data-label="fig:V3_graphene_AFM_PROP"}](./V3_graphene_AFM_PROP){width="0.99\linewidth"}
![The band structure and DOSs of the beta electron of AFM BLG-3V materials are presented. The individual components of the DOSs contributions from the C and V atoms, and total DOSs are also presented here. Just as the alpha electrons, we can see the conductive behavior for the beta electrons.[]{data-label="fig:V3_graphene_AFM_BETA_PROP"}](./V3_graphene_AFM_BETA_PROP){width="0.99\linewidth"}
Thus, we have shown that an alternative method of controlling the electronic properties of BLG-intercalated materials is to alter the spin conformation. The AFM arrangement of spins in the BLG-2V material changes it from a metal to a semiconductor with a small band gap, and for the BLG-3V material the AFM state changes the electron density around the Fermi level compared to the FM state. We have observed that the 3d orbitals of V receive more electron donation from the graphene 2p$_z$ sub-shell when the total spin is increased, and we have found that the 2p$_z$ sub-shell of the C atoms and the 3d$_{yz}$ sub-shell of the V atoms account for the largest electron contributions to the total DOSs.
All the .cif structures
=======================
The optimized crystallographic information (.cif files) are provided below.
Monolayer Graphene
------------------
AA-stacked Bilayer Graphene
---------------------------
AB-stacked Bilayer Graphene
---------------------------
BLG-intercalated BLG-1V
-----------------------
BLG-intercalated BLG-2V
-----------------------
BLG-intercalated BLG-3V
-----------------------
---
author:
- 'Christoph Dibak, Wolfgang Nowak, Frank Dürr, Kurt Rothermel'
bibliography:
- 'literature.bib'
title: Using Surrogate Models and Data Assimilation for Efficient Mobile Simulations
---
[Preprint]{} [Dibak : Using Surrogate Models and Data Assimilation for Efficient Mobile Simulations]{}
As of today, numerical simulations are heavily used in engineering and decision making. While today's simulations are mostly executed on powerful stationary computers, many use cases require simulation results to be available in the field, including on-site information or interaction with the simulation. Enablers for such mobile simulations are modern powerful mobile devices, emerging augmented-reality headsets for interaction, and the growing number of sensors integrated into the Internet of Things.
Engineers facing unplanned or unforeseeable situations would benefit from such mobile simulations allowing for faster decision making. As an example, consider an engineer in the field who has to find a solution for placing a hot exhaust tube during deployment of a machine in a factory. Using her augmented reality headset and mobile simulations, the engineer directly sees the heat of the surface of the tube and its surrounding materials as if the machine were operational. To cover different situations, the engineer can change parameters of the simulation, e.g., the airflow surrounding the tube. The application enables the engineer to see the heat distribution even in complex geometrical regions, e.g., bends, and allows her to place the tube according to the surrounding material. Additionally, parameters from sensors can be integrated into the simulation, e.g., to include data from similar machines deployed elsewhere.
The main challenge for implementing such mobile simulations is the complexity of the simulation combined with resource-poor mobile devices. Mobile devices are about $10$ times slower than servers for solving numerical problems [@Dibak2015]. Even worse, for very high-quality solutions, the battery-powered mobile device might run out of memory or energy and is therefore unable to provide any solution. Therefore, powerful server resources need to be involved to enable high-quality simulations on mobile devices. However, simply streaming simulation results from remote servers to mobile devices has been found to be inefficient due to the large size of the results and the latency of the communication [@Dibak2017a]. Efficient approaches therefore utilize both communication and the computing resources of the mobile device, and carefully consider quality degradation to reduce the latency of the computation.
Recent literature suggests using code-offloading techniques for dynamically distributing a mobile application between remote server and mobile device [@Cuervo2010; @Ra2011; @Chun2011; @Gordon2012]. Code-offloading typically splits the application into modules that will be dynamically placed on either the mobile device or the server. For this placement, data dependencies between modules have to be taken into account, possibly resulting in communication overhead, e.g., one module on the mobile device requires data from another module on the server. However, numerical simulations require lots of state information during the execution, making distributed execution between mobile device and server highly inefficient. Therefore, code-offloading would also result in the two inefficient options discussed above: either compute everything on the mobile device or stream everything from the server. Additionally, code offloading is application agnostic and therefore misses the opportunity to exploit specific properties of the simulation to gradually reduce quality while still providing results within user-defined quality bounds.
Our recently proposed methods for mobile simulations are based on existing numerical methods, namely model order reduction, to provide efficient distributed execution between a server and a mobile device [@Dibak2017a; @Dibak2017b; @Dibak2018a]. Model order reduction generates a reduced model in a pre-computation step. The reduced model is built to provide fast approximate solutions and can be executed on the mobile device. While the generation process for the reduced model is compute-intensive, it can be executed on a remote server, which can also adapt the reduced model when necessary. However, these approaches only consider time-independent problems, only work for similar behavior of the simulation model that can be expressed using only a few parameters, and are less flexible due to the pre-computation step. Consequently, time-dependent simulations with a high number of parameters require new methods for distributed mobile execution.
In this article, we propose a novel framework for the distributed execution of generic time-based simulations between mobile device and server based on so-called surrogate models. Surrogate models are computationally simplified models of a simulation model providing approximate solutions for the reference model. For instance, the surrogate model can use another discretization, i.e., the simulated system has lower resolution in time or space, it can be based on different physical properties, e.g., neglecting physical effects that have only little impact on the result, or it can even use time-dependent model order reduction techniques. Intuitively, our approaches will execute the surrogate model on the mobile device and both models, the reference model and the surrogate model, on the server. Having the results of both models available, the server can decide if the quality available from the surrogate model on the mobile device is sufficient without any communication overhead. This way, the server can send updates to the mobile device only when necessary to ensure quality constraints for the user. The remaining challenge is then to integrate updates into the surrogate simulation model. To solve this problem, one of our approaches will use data assimilation techniques, namely the ensemble Kalman filter, which allows us to drastically reduce the size of the updates.
In detail, our contributions in this article are as follows:

1. an analysis of the problem of providing time-dependent simulation data to mobile devices;
2. an approach based on surrogate models where only required parts of the simulation are streamed from a remote server;
3. an approach based on surrogate models and data assimilation that supports partial updates and therefore significantly reduces the need for high data rates;
4. an evaluation for different scenarios based on real measurements in cellular networks and on a popular system-on-chip platform, as typically used in mobile applications.
The rest of this article is structured as follows: Section \[sec:related-work\] briefly discusses related work before Section \[sec:system-model\] introduces the system model including the mobile environment and the simulation model. Section \[sec:problem-statement\] introduces the problem statement before we introduce the basic streaming approaches in Section \[sec:stream-approach\]. Section \[sec:full-update-approach\] introduces the full update approach utilizing computation and communication, followed by Section \[sec:partial-update-approach\], which introduces the partial update approach using data assimilation techniques to reduce the required bandwidth. In Section \[sec:evaluation\], we discuss the evaluation of all approaches. Finally, Section \[sec:conclusion-future-work\] concludes the paper with future work.
Related Work {#sec:related-work}
============
Before describing our approaches, we first briefly discuss the limitations of existing approaches for providing time-dependent numerical simulations on mobile devices. Existing approaches can be categorized into code offloading approaches, mobile approximate computing approaches, and our existing approaches for mobile simulations.
In mobile computing, code offloading partitions a mobile application to run parts of the application on remote server resources. Existing approaches focus on reducing energy consumption [@Cuervo2010] or the latency of the computation [@Ra2011; @Chun2011; @Gordon2012]. The general idea is to partition modules of the mobile application depending on the characteristics and dependencies between modules. Partitioning will result in two sets of modules, where one set will be executed on the mobile device and the other set will be executed on the server. Data dependencies between models that do not run on the same node have to be communicated between the server and the mobile device. For increased robustness against the communication link, modules can also be executed on both nodes [@Berg2015]. However, the initial partitioning depends on profiling of required resources and data dependencies between modules.
While many mobile applications can benefit from code offloading, mobile simulations require different solutions since:

1. code offloading is application agnostic and does not consider the quality of the result;
2. numerical simulations are hard to modularize, since the computation of one state requires large amounts of shared memory between parallel processes; and
3. states of the numerical simulation have strong data dependencies, hindering partitioning.
Therefore, code offloading would result in heavily unbalanced executions, where either a single node computes the full simulation, or the network is heavily used to communicate lots of data between modules. In contrast, our approaches will consider different quality levels of the mobile and the server computation and only communicate when necessary, reducing both, energy on the mobile device and latency of the application.
A framework for quality-aware execution between a mobile device and a remote server has been proposed by Pandey et al. [@Pandey2016; @Pandey2017]. Their approach uses a workflow-based representation of computation tasks that yield approximate results. Computation tasks are profiled offline, prior to runtime. During runtime, the previously profiled tasks are offloaded between mobile device and server. However, while there are quality-aware algorithms that can benefit from their framework, it is unsuitable for numerical simulations, since:

1. the separation of offline and online phases makes this approach infeasible for applications that depend on parameters, such as numerical simulations;
2. the workflow of numerical simulations consists of a varying number of tasks for varying quality, which is not considered in their approach; and
3. as for code offloading, the workflow of numerical simulations consists of a single path of tasks with strong data dependencies, which results in unbalanced offloading.
In contrast, our approach does not require any offline phase or profiling before runtime and will execute different quality versions of the single-path workflow in parallel on the server and the mobile device to deal with strong data dependencies.
In our previous works, we used the reduced basis method (RBM), which reduces the complexity of simulations to provide faster results [@Dibak2017a; @Dibak2017b; @Dibak2018a]. The general idea is to pre-compute a simpler model that provides only approximate results at much lower cost. Using the RBM, we were able to significantly reduce the runtime and energy consumption. However, our existing approaches focused only on stationary, time-independent simulations and are therefore unusable for time-dependent problems. Additionally, the approaches presented in this article perform much better for a higher number and a broader range of parameters.
Other approaches by the authors focused on increasing the robustness of streaming results from a remote server to a mobile device under disconnections [@Dibak2015]. Using a statistical model, we were able to predict the duration of disconnections and therefore decide whether to start the computation on the mobile device. Such approaches are orthogonal to the work presented in this article and can be used to make our approaches more resilient against disconnections.
System Model {#sec:system-model}
============
This section introduces our system model for dynamic offloading of time-dependent numerical simulations. We first describe our model for the time-dependent simulation and then provide our model of the mobile environment, consisting of mobile device, remote server, and wireless communication network.
Time-Dependent Simulations
--------------------------
![image](discretization/discretization)
Time-dependent numerical simulations are based on differential equations describing the behavior of the system w.r.t. continuous time and space. Such equations need to be discretized in order to be solved. Time-discretization divides the continuous time into $n_t + 1$ time steps. Each step represents the system state $S_i$ at fixed time $t_i$. For simplicity, we assume that the time of the first step is $t_0 = 0$ and the time for the last step is $t_{n_t} = 1$. Then, the time resolution is $\Delta t = 1 / n_t$ (cf. Fig. \[fig:discretization\]).
Next, we describe how the simulated system at each time step is discretized in space, how the transition between time steps is implemented, and how the computation can be optimized to provide computationally cheaper approximations of the simulation problem.
### Representation of Time-States and Transition Between States
Time-states $S_i$ represent the state of the simulated system at discrete points in time. While space is defined continuously in the differential equation, it also needs to be discretized. To this end, the system is only observed at fixed points in space, e.g., at points forming a grid with mesh width $\Delta x$ (cf. Fig. \[fig:discretization\]). Values of the simulation at these points form a vector. The size of the vector depends on the spatial discretization. If finer discretization is required, the size of the vector is increased. The size of the vector will later also determine the complexity of the computation.
Transition between time states is implemented by solving an algebraic problem in a numerical solver. The output of the solver is the state vector of the next time state $S_{i+1}$. Input into the solver is the old state vector $S_i$ and problem-specific information, e.g., a problem matrix and a vector forming the algebraic problem. Typically, there is a choice between multiple classes of algebraic problems for the same differential equation leading to different trade-offs between quality and complexity of the computation. For instance, simulating heat propagation using the heat equation yields various discretization methods that can be generalized into two classes: implicit methods and explicit methods. While implicit methods are computationally more expensive, they provide better quality than explicit methods. Such decisions on the trade-off between quality and complexity motivate the use of surrogate models.
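To make this concrete, the following is a minimal sketch of one explicit (forward Euler) time transition for the 1D heat equation; the function name, the fixed Dirichlet boundaries, and the parameter values are our own illustrative assumptions, not part of the framework.

```python
import numpy as np

def explicit_heat_step(S, dt, dx, alpha=1.0):
    """One explicit (forward Euler) step of the 1D heat equation.

    Cheap per step, but only stable for alpha * dt / dx**2 <= 0.5;
    an implicit method would be unconditionally stable at higher cost,
    illustrating the quality/complexity trade-off discussed above.
    """
    S_next = S.copy()
    # Second-order central difference in space on the interior points.
    S_next[1:-1] = S[1:-1] + alpha * dt / dx**2 * (S[2:] - 2 * S[1:-1] + S[:-2])
    return S_next  # boundary values are kept fixed (Dirichlet)

# Example: a hot spot in the middle of a rod diffuses outward.
S0 = np.zeros(11)
S0[5] = 1.0
S1 = explicit_heat_step(S0, dt=0.001, dx=0.1, alpha=1.0)
```

Here the stability factor is $\alpha\,\Delta t / \Delta x^2 = 0.1$, safely below the explicit stability limit.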
### Reference Model and Surrogate Model {#sec:reference-model-and-surrogate-model}
We assume two different implementations of the model, a reference model and a surrogate model. The reference model defines the “ground truth” of the simulation. It is defined over fine-grained discretization grids enabling accurate predictions of future system states. While it provides accurate results, it is expensive to compute. The surrogate model, on the other hand, is computationally less expensive at the cost of lower quality than the reference model. Surrogate models can be obtained by using an explicit method rather than an implicit method, or by changing $\Delta x$ of the discretization grid to a coarser spatial resolution.
We will later compare the reference model and the surrogate model at the same time step. To compare results of both models, the vector of the reference model has to be mapped to the same dimensionality as the vector of the surrogate model. We assume that this mapping is provided by a transformation matrix $T_{R\to S}$. Additionally, to simplify the notation for comparison between models, we assume that time-discretization of the reference model and the surrogate model is the same. However, the reference model could also be configured to compute multiple, say $n_{\textit{ref}}$ steps for one surrogate step. This way, the reference model will have a time discretization with $\Delta t_{\textit{ref}} = \Delta t / n_{\textit{ref}}$ and we are able to compare results of the reference model and the surrogate model every $n_{\textit{ref}}$ time steps.
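One simple way to realize the transformation matrix $T_{R\to S}$ is local averaging onto the coarser surrogate grid. The sketch below assumes, for illustration only, that the fine grid size is an integer multiple of the coarse one; the function name is our own.

```python
import numpy as np

def restriction_matrix(n_ref, n_sur):
    """Build T_{R->S}: maps a reference-grid vector (n_ref points)
    onto the coarser surrogate grid (n_sur points) by local averaging.
    Assumes n_ref is a multiple of n_sur (a simplifying assumption)."""
    k = n_ref // n_sur
    T = np.zeros((n_sur, n_ref))
    for i in range(n_sur):
        T[i, i * k:(i + 1) * k] = 1.0 / k  # average k fine points per coarse point
    return T

T = restriction_matrix(8, 4)
S_ref = np.arange(8, dtype=float)   # fine-grid reference state
S_mapped = T @ S_ref                # now comparable to a surrogate state
```

Each row of `T` sums to one, so constant states are mapped exactly.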
### Mixing Simulation Models for Approximate Solutions
The solutions of one simulation model form a chain of time steps (cf. Fig. \[fig:discretization\]). To provide better quality, the chains of the reference model can be used to update the surrogate model chain. Each of these updates forms a new chain of approximate solutions. For instance, the surrogate model is updated at time step, say 5, to set its state to the state of the reference model. The resulting chain of simulation results may be significantly different compared to the original surrogate simulation chain without the update.
System Components
-----------------
To compute results of the numerical simulation, the system consists of two compute nodes, namely the mobile device and the server. The server is located in a central location in the network and receives data from sensors (cf. Fig. \[fig:nodes-sensors\]).
![The two compute nodes, server and mobile device, and connected sensors.[]{data-label="fig:nodes-sensors"}](nodes-sensors/nodes-sensors)
The mobile device is carried by the user. The user directly interacts with the mobile device and requests simulation results. The mobile device has an energy-efficient but slow processor. In contrast to the server, it is very resource-limited and depends on batteries providing limited energy.
The server receives data from cloud-connected sensors and collects and stores relevant data to form the initial state for the simulation. Therefore, the initial state for the simulation is only available on the server and needs to be communicated to the mobile device before any simulation model can be executed.
Server and mobile device are connected via wireless communication, e.g., 3/4G cellular networks or IEEE 802.11 WiFi. Wireless communication is subject to dynamic latency and throughput, which might be very low in some cases.
Problem Statement {#sec:problem-statement}
=================
After describing the system model, this section describes the problem statement. We first define quality for approximate solutions and then define the optimization goal of the system.
Quality of approximate solutions is defined by comparing approximate solutions to the reference model using a user-defined norm $\lVert \cdot \rVert_U$. Let $S_i^A$ denote the approximate solution and $S_i^R$ denote the reference solution for time step $i = 0, \dots n_t$. The quality of time step $i$ is then defined as $q_i^A = \lVert S_i^A - T_{R\to S} S_i^R \rVert_U$, where $T_{R\to S}$ is the transformation matrix between reference model results and surrogate model results (cf. Sec. \[sec:reference-model-and-surrogate-model\]). The overall quality of the approximate solution for all time steps is then defined as $$\begin{aligned}
Q_A
& = \max_{i = 0, \dots, n_t} q_i^A \\
& = \max_{i = 0, \dots, n_t} \left \lVert S_i^A - T_{R\to S} S_i^R \right \rVert_U.\end{aligned}$$ One example for the user-defined metric $\lVert \cdot \rVert_U$ is to compare approximate solution and reference model solution by the maximum difference at any point, e.g., the maximum temperature difference of a heat simulation.
Having defined quality, we can now define the overall goal of the system. The goal of the system is to minimize the latency until approximate solutions are available on the mobile device. Approximate solutions have to fulfill quality constraints given by the user, i.e., the user provides $Q_{\max}$ and the solution has to fulfill $Q_A \leq Q_{\max}$. This way, the user can define the maximum difference of the approximate solution to the reference simulation and the system provides an approximate solution as fast as possible.
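The quality definition above translates directly into code. In this sketch, the helper name and the default maximum-difference norm are our own choices for illustration.

```python
import numpy as np

def overall_quality(approx_states, ref_states, T, norm=lambda v: np.abs(v).max()):
    """Q_A = max over all time steps i of || S_i^A - T_{R->S} S_i^R ||_U.

    The default norm is the maximum difference at any grid point, e.g.,
    the maximum temperature difference in a heat simulation."""
    return max(norm(S_a - T @ S_r) for S_a, S_r in zip(approx_states, ref_states))

# Toy example: identity mapping, two time steps.
T = np.eye(3)
ref = [np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 4.0])]
approx = [np.array([1.0, 2.1, 3.0]), np.array([1.3, 2.0, 4.0])]
Q_A = overall_quality(approx, ref, T)
```

The user-supplied bound is then simply checked as `Q_A <= Q_max`.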
Basic Streaming Approaches {#sec:stream-approach}
==========================
We briefly describe the straightforward streaming approaches. These approaches serve only as a baseline for comparison with the approaches presented in the next sections. Streaming approaches compute all steps of the simulation on the server and communicate the results to the mobile device. We introduce two approaches, the *simple stream approach* and the slightly more sophisticated *advanced stream approach*.
The simple stream approach computes the reference simulation on the server and communicates all steps to the mobile device. Therefore, all results on the mobile device have the best possible quality and $Q_A = 0$.
The advanced stream approach also computes the reference simulation on the server. However, it will reduce the quality of the simulation states before they are sent to the mobile device. In particular, it will reduce the resolution of the simulation to the surrogate model discretization. This way, the quality is still $Q_A = 0$, while the volume of data communicated over the wireless communication link is significantly reduced.
While the simple stream approach represents the result of an unbalanced partitioning, which could be the result of code offloading for the simulation problem, the advanced stream approach is able to reduce the communication overhead at no quality loss. However, the advanced stream approach still has to communicate all simulation steps over the network. The following sections will introduce our approaches, which require much lower communication overhead than the stream approaches.
Full Update Approach {#sec:full-update-approach}
====================
While the previously introduced stream approaches are able to meet any quality requirements, they do not consider the mobile device for computing simulation results. This section therefore introduces the full update approach, which reduces the requirements on the wireless link as it uses computation on the mobile device.
The general idea of this approach is to execute the same simulation on the surrogate model simultaneously on the mobile device and the server. Thus, the server knows which (approximate) results have been calculated by the mobile device using the surrogate model. By comparing to the exact solution of the reference model, the server can send updates to the mobile device at selected states, whenever the surrogate model yields results of insufficient quality.
![Overview of the full update approach[]{data-label="fig:full-update-approach"}](full-update-approach/full-update-approach)
Figure \[fig:full-update-approach\] depicts the components of the full update approach. The reference model and the surrogate model implement the time transition of the states. The update decision component decides whether to send an update to the mobile device. The mobile state tracker holds the last known state of the simulation on the mobile device. On the mobile device, the update integration component combines server messages with the output of the surrogate computation.
While the reference model and the surrogate model have already been introduced in the previous sections, we will explain the remaining components in the following subsections.
Mobile State Tracker
--------------------
The mobile state tracker provides the previous state of the mobile device on the server. As the surrogate model is deterministic and will return the exact same result as on the mobile device, the server can use the result even before it has been computed on the mobile device.
For the initialization, the initial state needs to be communicated to the mobile device. The mobile state tracker will then be initialized with the same initial state.
Update Decision
---------------
The update decision is based on the requirements of the user. To this end, the update decision component receives the current state of the reference model and the surrogate model. It computes the difference of the states after transforming the reference state to the same spatial discretization grid as the surrogate state. Afterwards, it will check whether the quality of the result of the surrogate model is sufficient. If it is sufficient, it will send a quality certification message to the mobile device. If it is not sufficient, it will send an update of the vector representing the current reference state to the mobile device.
Notice that before sending, the update is transformed to the spatial discretization level of the surrogate model, since this suffices for the mobile device to continue calculating future states from the updated model.
The update decision component will also update the tracked state on the server. If a certification message was sent, it will use the result from the surrogate model. If an update was sent, it will update the mobile state with the result from the reference model.
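The decision logic of this subsection can be sketched as follows; the function name, the tuple-based message encoding, and the maximum-difference norm are illustrative assumptions.

```python
import numpy as np

def update_decision(S_ref, S_sur, T, q_max):
    """Server-side decision of the full update approach (a sketch).

    Returns ('certify', None) if the surrogate state meets the quality
    bound, otherwise ('update', mapped) carrying the reference state
    already transformed to the surrogate discretization grid."""
    mapped = T @ S_ref                       # reference state on surrogate grid
    if np.abs(S_sur - mapped).max() <= q_max:
        return 'certify', None               # small certification message
    return 'update', mapped                  # full state update

T = np.eye(3)
kind, payload = update_decision(
    np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.05, 3.0]), T, q_max=0.1)
```

After sending, the server would also apply the same outcome to its mobile state tracker, keeping both sides consistent.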
Update Integration
------------------
The update integration component is executed on the mobile device and receives messages from the server. It may invoke the surrogate model and provides the result as current simulation state to the user application.
If the update integration component receives a certification message, it will use the result from the surrogate model and provide the result to the user application. If it receives an update message, it parses the state and provides it directly to the user application, since the result is then directly derived from the reference model. To this end, the update integration module always has to wait for the next message from the server. If no certification message is available, it cannot provide any result to the user application, since the result of the surrogate model might not have sufficient quality.
There are optimistic and pessimistic implementations of the full update approach. The optimistic implementation assumes that no update has to be made and the computation using the surrogate model is sufficient. The pessimistic implementation assumes that the surrogate model will not provide sufficient quality. To this end, the optimistic approach will always start the computation of the next state in the background, whereas the pessimistic approach only computes on request by the server.
Compared to the stream approach, the full update approach will reduce the traffic on the wireless communication link, as certification messages are much smaller than streamed results. The traffic on the network now depends on the accuracy of the surrogate model. However, this approach adds slightly more overhead on the server side, which now has to compute the surrogate model in parallel to the reference model. In the next section, we will see how we can further reduce the traffic on the network by sending only partial updates.
Partial Update Approach {#sec:partial-update-approach}
=======================
![image](partial-updates/partial-updates)
The previously introduced full update approach always has to communicate full state updates for surrogate states violating quality requirements. This results in a huge communication overhead even in cases where only a small part of one surrogate state violates quality constraints. In this section, we therefore introduce our partial update approach, which is based on the full update approach, but will only update parts of the vector representing the approximate solution from the surrogate model (cf. Fig. \[fig:partial-updates\]).
Updating single values of the simulation is not straightforward, since the simulation model might be sensitive to external changes of the simulation state. For instance, if we consider a heat simulation, the simulation model and numerical calculation assume the solution to be continuous, whereas randomly updating values makes the solution discontinuous, i.e., updated values add sharp edges to the simulation state. Such discontinuities lead to numerical instabilities which would never occur in normal calculations for the simulation and from which the model might not be able to recover. Therefore, more sophisticated approaches are required.
To reduce the discontinuity when updating single values, we use data assimilation techniques. Data assimilation emerged from weather simulations, where sensor data updates the simulation state, which leads to similar problems [@Law2015]. To prevent such problems, data assimilation techniques identify the correlation between parts of the simulation state. When updating one value, data assimilation uses this correlation and updates all correlated values accordingly. Therefore, the number of discontinuities is greatly reduced and the simulation model quickly recovers from single-point updates.
The idea of the partial update approach is to apply data assimilation and treat single point updates as sensor observations. In this case, sensor observations are perfect, since they are taken from the reference simulation, which is our ground truth. This simplifies the calculation of data assimilation methods, since they normally assume inaccurate observations.
In the following, we first briefly describe our data assimilation technique of choice, the ensemble Kalman filter, before we discuss how the partial update approach changes the update decision and update integration of the full update approach.
The Ensemble Kalman Filter
--------------------------
![image](ensemble-kalman-filter/ensemble-kalman-filter)
The ensemble Kalman filter (EnKF) provides a solid and frequently applied framework for data assimilation [@Evensen2003]. The general idea of the EnKF is to use multiple states to track uncertainties (cf. Fig. \[fig:ensemble-kalman-filter\]). These states are called ensemble members. Initially, ensemble members are generated using random perturbation of the initial state. For every simulation state, the next state of the ensemble members is computed using the surrogate model. The number of ensemble members $n_e$ can be small. It has been shown that even some complex applications do not require more than $50$ ensemble members [@Houtekamer1998; @Keppenne2000].
### Generation of Ensemble Members
We generate ensemble members by perturbation of a reference state $S_i$. To this end, we add a random vector to the initial state to form an ensemble member $e_{i}^{(j)} = S_i + r^{(j)}$. The random vector $r^{(j)}$ is sampled such that the mean of the ensemble members tracks the state of the reference simulation, e.g., using the standard error between reference and surrogate model.
In order to have the same result available on the mobile device as on the server, the computation has to be deterministic. To provide random numbers to the EnKF, we therefore use a deterministic random number generator with well-defined seed for the random vector. As the seed should change for every state, the server chooses a basic seed during initialization. We then use a deterministic function of the basic seed and the state number to calculate the seed for state perturbation of the current state.
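A minimal sketch of deterministic ensemble generation follows; the specific seed function (the sum of the basic seed and the state number) and the Gaussian perturbation are our own assumptions, the text only requires the function to be deterministic.

```python
import numpy as np

def perturb_state(S, state_number, basic_seed, n_e, sigma=0.01):
    """Generate n_e ensemble members e^{(j)} = S + r^{(j)}.

    The per-state seed is a deterministic function of the basic seed
    and the state number, so server and mobile device draw identical
    random vectors without communicating them."""
    rng = np.random.default_rng(basic_seed + state_number)  # deterministic
    return [S + rng.normal(0.0, sigma, size=S.shape) for _ in range(n_e)]

S = np.zeros(4)
members_server = perturb_state(S, state_number=7, basic_seed=42, n_e=5)
members_mobile = perturb_state(S, state_number=7, basic_seed=42, n_e=5)
```

Both nodes obtain bit-identical ensembles, while different state numbers yield different perturbations.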
### Combining Simulation Model and Observations
Combining the state of the surrogate model and partial updates consists of two steps, namely the forecast step and the analysis step. In the forecast step, the surrogate model is applied for all current ensemble members $E_i = (e_i^{(1)}; \dots; e_i^{(n_e)})$. This generates the forecast ensembles for the next ensemble members $F_{i+1} = (f_{i+1}^{(1)}; \dots ; f_{i+1}^{(n_e)})$.
Partial updates are communicated as set of pairs $(\textit{position}, \textit{value})$ where, for every updated value, the position of this value in the surrogate state vector is given. This representation is translated into an update vector $u_{i+1}$ containing just the values and a measurement operator $H_{i+1}$ mapping respective entries of the surrogate state vector to the update vector.
For the analysis step, the next state has to be combined with the partial updates $u_{i+1}$ by using the so-called Kalman gain $K_{i+1}$. The Kalman gain determines how strongly the difference between the partial update $u_{i+1}$ and the forecast state $F_{i+1}$ is fed back into the analyzed state.
The analyzed ensemble members are then calculated as $e_{i+1}^{(j)} = f_{i+1}^{(j)} + K_{i+1} (u_{i+1} - H_{i+1} f_{i+1}^{(j)})$. The analyzed simulation state output to the user is the ensemble mean of all analyzed ensemble members. Further details about the EnKF and the computation of the Kalman gain $K_{i+1}$ can be found in the appendix.
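A compact sketch of the analysis step, with the Kalman gain computed from the sample covariance of the ensemble. Since the partial updates come from the reference model, observations are treated as exact; the small regularization `eps` standing in for the absent observation noise is our own assumption.

```python
import numpy as np

def enkf_analysis(F, H, u, eps=1e-9):
    """EnKF analysis step: e^{(j)} = f^{(j)} + K (u - H f^{(j)}).

    F: forecast ensemble, one member per row, shape (n_e, n)
    H: measurement operator selecting updated positions, shape (m, n)
    u: partial update vector (values from the reference model), shape (m,)
    """
    A = F - F.mean(axis=0)                          # ensemble anomalies
    C = A.T @ A / (F.shape[0] - 1)                  # sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + eps * np.eye(H.shape[0]))
    E = F + (u - F @ H.T) @ K.T                     # analyzed members
    return E, E.mean(axis=0)                        # ensemble and its mean

rng = np.random.default_rng(0)
F = rng.normal(size=(30, 5))                        # 30 members, 5 grid points
H = np.zeros((1, 5))
H[0, 2] = 1.0                                       # partial update at position 2
E, state = enkf_analysis(F, H, np.array([0.5]))
```

Because the observation is exact, the analyzed mean matches the update at the observed position, while correlated neighbors are adjusted smoothly.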
Update Decision
---------------
For identification of parts that need updating, we introduce the concept of violation points. Violation points are points in the result of the surrogate model that violate the quality constraint. How these points can be calculated depends on the norm used for specifying the quality. In general, we distinguish between maximum norm and any other norm.
If the maximum norm is used for the quality constraint, every point has a maximum allowed distance to its corresponding point in the reference state. Violation points are therefore all points that differ too much from the current reference state.
For other norms, e.g., the Euclidean norm, the computation of violation points is slightly more complex and requires an iterative process. To this end, we build up the set of points that require updating, starting from the empty set. As long as the quality requirement cannot be met with the current set of updates, the point with the maximum error relative to the reference state is added to the set. This is repeated until the quality requirement is met.
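The greedy selection just described can be sketched as follows for the Euclidean norm (a hypothetical implementation over a 1d state vector; the actual code may differ):

```python
import numpy as np

def violation_points(surrogate, reference, max_error):
    """Greedily select points until the Euclidean-norm error of the
    corrected surrogate state falls below max_error. Each selected
    point would be sent as a (position, value) partial update."""
    corrected = surrogate.copy()
    points = []
    while np.linalg.norm(corrected - reference) > max_error:
        worst = int(np.argmax(np.abs(corrected - reference)))
        corrected[worst] = reference[worst]   # point replaced by reference value
        points.append(worst)
    return points
```

For the maximum norm the loop degenerates to a single pass, since each point can be checked independently.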
Once the decision on the set of points to update is made, the update is sent over the network. Additionally, on the server side, the update is applied by the mobile state tracker in order to derive the same state as on the mobile device. In contrast to the full update approach, the mobile state tracker not only keeps track of the current simulation state on the mobile device, but also of ensemble members expressing the uncertainty of the current state.
Update Integration
------------------
The update integration component on the mobile device receives the update from the server. It holds the current ensemble members of the surrogate model, executes the forecast step, and performs the analysis step of the EnKF to provide the next state of the simulation to the user application.
Notice that before adopting the EnKF, we used the classical Kalman filter as data assimilation technique. The Kalman filter tracks the uncertainty of the states using a covariance matrix. This matrix grows quadratically with the problem size and is therefore much more expensive to compute and store.
Evaluation {#sec:evaluation}
==========
The previous sections introduced the full update approach and the partial update approach. This section evaluates both approaches against the streaming approach described in Sec. \[sec:stream-approach\], which is the state-of-the-art for providing simulation results to mobile devices. In this evaluation, we consider different mobile network setups and different assumptions on the accuracy of the surrogate model. As benchmark simulation problem, we use a 2d heat simulation based on the well-known heat equation. Before describing the evaluation results in detail, we first introduce the evaluation setup.
Setup {#ssec:setup}
-----
We evaluated our approaches on a distributed test bed consisting of a Raspberry Pi 3 as mobile device and a powerful server. The Raspberry Pi 3 uses a system-on-chip (SoC) hardware similar to the SoCs used by mobile devices. It features a quad-core Broadcom ARM CPU at 1.2 GHz and 1 GB RAM. The server is a commodity off-the-shelf server featuring a quad-core Intel Xeon E3 CPU at 3.4 GHz and 16 GB RAM.
We emulated the cellular network connecting mobile device and server using the Linux kernel packet scheduler on both nodes. To this end, we added queueing disciplines that restrict the data rate using a token bucket filter (TBF) and delay packets using the netem module. To choose the parameters of the TBF and the delay, we measured the performance of real cellular networks using HSDPA and LTE. We found that under extreme conditions, data rates can be as low as $50$ kbit/s with around $1$ second of latency over longer periods. However, as we expect data rates to increase in the future and as the advantage of our approaches is even larger at lower data rates, we assume a data rate of $1$ Mbit/s.
Our approaches and the simulation are implemented in Python (version 2.7.13) and NumPy (1.14.3). To accelerate the computation, NumPy was linked with OpenBLAS (0.2.19), which is available for the server and mobile architecture. Serialization is implemented using Protobuf (3.5.2), and data was communicated using TCP as transport protocol. We used background threads and queues in order to send data parallel to processing. As deterministic random number generator, we use the Mersenne twister sequence [@Matsumoto1998] as implemented in Python.
As simulation problem, we choose the popular and well understood 2d heat equation with Dirichlet boundary conditions. We implemented two numerical solvers, one using explicit Forward-Time Central-Space (FTCS) discretization and the other using the Alternating Direction Implicit method (ADI) with the Crank-Nicolson method for 1d discretization. Throughout the evaluation, we used the explicit FTCS implementation as surrogate model and the implicit ADI implementation as reference model. For the initial state, we choose random values in the interval $[0,1]$ and set the boundary to $0$.
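A single FTCS step of the 2d heat equation with Dirichlet boundaries, as used for the surrogate model, might look like this (a sketch with illustrative parameter names; the stability condition $r \le 1/4$ is the standard one for explicit 2d FTCS):

```python
import numpy as np

def ftcs_step(T, alpha, dt, dx):
    """One explicit Forward-Time Central-Space step of the 2d heat
    equation; boundary rows/columns are held fixed (Dirichlet)."""
    r = alpha * dt / dx**2                # must satisfy r <= 1/4 for stability
    Tn = T.copy()
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + r * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1])
    return Tn
```

The implicit ADI reference solver is unconditionally stable but requires solving tridiagonal systems, which is why it runs on the server rather than the mobile device.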
The execution of the simulation depends on many parameters for the discretization and accuracy of the numerical model. We ran our evaluations with different parameters and obtained results similar to those reported in this section. For the final evaluations we used the following default parameters. We set the temporal discretization to $\Delta t = 0.0001$ with $100$ states. The maximum error allowed between reference and surrogate model was $2^{-7}$, i.e., we allowed less than 1% of error between the reference model and the surrogate model.
To compare different surrogate models, we defined quality levels as uniform grids for the space discretization. This way, we can use different discretization grids and define surrogate models on each of the levels. To this end, our uniform grid implementation consists of different levels, where each higher level includes the points of all lower levels plus points in between all existing points. The number of points on these levels grows quickly, e.g., the frequently referenced level $5$ contains $1089$ points, while level $6$ contains $4225$ points.
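Assuming the levels are nested uniform grids with $(2^l + 1)$ points per axis, which is consistent with the counts quoted above, the point count per level is:

```python
def level_points(level: int) -> int:
    """Points on a nested uniform 2d grid with (2**level + 1) points per
    axis (an assumption consistent with the counts quoted in the text)."""
    side = 2 ** level + 1
    return side * side

assert level_points(5) == 1089   # 33 x 33
assert level_points(6) == 4225   # 65 x 65
```

Each refinement thus roughly quadruples the number of unknowns.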
As the number of required updates might depend on the required quality, we first evaluate the impact of quality on the number of full state updates and the sizes of partial state updates. We then evaluate the latency for varying network data rates, full update probabilities, sizes of partial updates, and surrogate problem sizes.
Accuracy of the Surrogate Model {#ssec:accuracy-of-surrogate-model}
-------------------------------
The accuracy of the surrogate model impacts two quantities: the number of points requiring partial updates, and the distribution of the number of points in states violating quality constraints.
To this end, we introduced the terms *violation states* and *violation points*. Violation states are states of the simulation that do not fulfill quality requirements and therefore need updating in the full update approach. Similarly, violation points are points in one simulation state that need to be included in partial updates. Intuitively, violation states and violation points depend on the maximum error that is allowed by the application. To provide an overview of the distribution of violation states and violation points, we recorded them for different error bounds (maximum error) in the 2d heat equation with random initial state.
![Ratio of violations for full updates and partial updates.[]{data-label="fig:violations"}](violations/violations){width="\linewidth"}
Figure \[fig:violations\] depicts the percentage of states that require full updates, and the percentage of points per state that require partial updates. While some partial updates contain many point updates, the majority of partial updates only require few points. For a maximum error of $2^{-7}$, around $50$% of states require updating, while only $5.2$% of points need to be updated. We therefore assume a state update probability of $0.5$ in the following, if not stated otherwise. Notice that this alone reduces the volume of data to be communicated from server to mobile device by $50$%.
Impact of Channel Data Rate
---------------------------
Mobile devices face varying data rates of the wireless communication channel. Especially in areas with bad signal strength, e.g., indoors or in basements, data rates can drop to as low as $50$ kbit/s. This is in line with the Shannon–Hartley theorem, according to which a lower signal-to-noise ratio requires higher bandwidth to sustain a constant data rate.
We evaluated the impact of the data rate on the latency of our approaches. After setting different rates, we ran our approaches multiple times and recorded the median latency. All other parameters are as in our default configuration introduced in Section \[ssec:setup\].
![Latency of the three approaches over data rate.[]{data-label="fig:throughput"}](throughput/throughput){width="\linewidth"}
Figure \[fig:throughput\] depicts the latency of the three approaches over the data rate of the wireless channel. As the volume of data to be communicated by our approaches is only a fraction of that of the streaming approach, both approaches are much faster than streaming. The full update approach provides results up to $9.4$ times faster than the streaming approach. The partial update approach, which requires only very little data to be communicated from the server to the mobile device, provides results even up to $13.3$ times faster than the streaming approach. However, for high data rates, the full update approach is marginally faster than the partial update approach, since the partial update approach has a higher overhead for data assimilation. Even then, both approaches retain a speedup of $50\,\%$ compared to streaming. We attribute this speedup to the doubled volume of data for streaming and to serialization overhead, although Google Protocol Buffers is already considered one of the fastest serialization formats [@Sumaray2012]. In general, the data rate has only very little impact on the partial update approach, while it affects the stream and full update approaches. For varying data rates, the partial update approach is therefore the best choice, while the full update approach might be considered for higher data rates.
Impact of Update Probability
----------------------------
To evaluate the impact of the accuracy of the surrogate model on the latency of the approaches, we ran them with a synthetic probability of updates (cf. Sec. \[ssec:accuracy-of-surrogate-model\]). To this end, we recorded the median latency of the different approaches over multiple simulation runs with varying update probability. We assumed that updates are uniformly distributed. All other parameters are as in our default configuration introduced in Section \[ssec:setup\].
![Latency over update probability.[]{data-label="fig:updates"}](updates/updates){width="\linewidth"}
Figure \[fig:updates\] depicts the latency over the update probability of simulation states. The latency of the stream approach is practically constant since it sends all states of the simulation over the network. The full update approach is up to $2.1$ times faster than the stream approach when no updates are required, but converges to the latency of the stream approach when all states require updating. The performance of the partial update approach only changes gradually, since more updates only marginally increase the communication overhead. For only few updates, the full update approach has the same performance as the partial update approach. However, if all states require updating, the partial update approach is still $26\,\%$ faster than the streaming and full update approaches, making it the best option in a scenario with many small state updates.
Impact of the Size of Partial Updates
-------------------------------------
In addition to varying the update probability, we also considered different sizes of partial updates. To this end, we ran the approaches and injected synthetic updates. Each state had a probability of $0.5$ of requiring an update. Each update had a fixed number of violation points to be transferred as partial updates. All other parameters are as in our default configuration in Section \[ssec:setup\].
![Latency over update size.[]{data-label="fig:sizes"}](sizes/sizes){width="\linewidth"}
Figure \[fig:sizes\] depicts the latency over the update size. As the size of a partial update does not affect the stream approach and the full update approach, only the latency of the partial update approach gradually increases. It eventually exceeds the latency of the stream approach, since sending partial updates requires encoding the position of each updated point, which makes a partial update with all points updated bigger than a full update. Additionally, the overhead for calculating the ensemble Kalman filter increases with more updates. For more than $50$% updated points, streaming is more efficient than the partial update approach. However, for up to $20$%, the partial update approach provides results in the same time as the full update approach. As shown in Section \[ssec:accuracy-of-surrogate-model\], the percentage of violation points is typically below $20$%.
Impact of the Surrogate Problem Size
------------------------------------
Lastly, we want to measure the impact of different surrogate problem sizes. If the surrogate problem grows, the stream approach has to communicate more data. However, for the full update approach and the partial update approach, the computational overhead also increases. We measured the impact for different space discretizations of the surrogate model. All parameters are taken as described in Section \[ssec:setup\].
![Latency over surrogate model space discretization level.[]{data-label="fig:problem"}](problem/problem){width="\linewidth"}
Figure \[fig:problem\] shows the latency of the approaches over different discretization levels. Notice that for level 6, the reference model has the same space discretization as the surrogate model. However, the surrogate model uses an explicit method while the reference model uses an implicit one, so the two models still provide different results. As the number of unknowns grows exponentially with the discretization level, the latency of all approaches grows linearly with the number of unknowns. However, the full update approach and the partial update approach provide much better results for high discretization levels. In particular, the full update approach provides a speedup of up to $1.9$, while the partial update approach is only $33$% faster than the streaming approach. Notice that the mobile device is limited to at most discretization level $6$ due to memory limitations.
Conclusion
----------
Concluding the evaluations, the full update approach and the partial update approach are significantly better than streaming. The full update approach is best for high data rates, large update sizes, low update probability, and large surrogate problem sizes. The partial update approach is best in cases of low data rates, small update sizes, and high update probability. The two approaches can be combined by using partial updates whenever fewer than $20$% of points require updating and sending full updates otherwise, thus benefiting from both update types.
Conclusion & Future Work {#sec:conclusion-future-work}
========================
In this article, we presented methods to provide results of resource-intensive simulations on resource-poor mobile devices. The goal was to provide fast results with guaranteed quality. To this end, our approaches compute the simulation in a user-defined reference quality on the server, while a surrogate model providing lower quality at a much lower computational cost is executed on the mobile device.
We presented three approaches. The first approach simply streams results in surrogate quality to the mobile device. The second approach computes the surrogate model on both the mobile device and the server; the server detects when quality constraints are not fulfilled and sends a full state update as correction to the mobile device. In the third approach, we considered partial updates to reduce the communication overhead. To combine the surrogate state with partial updates, we use tools from data assimilation, namely the ensemble Kalman filter, which is required to maintain the mathematical properties of the simulation model.
All approaches were implemented and extensively evaluated on our test bed based on a Raspberry Pi and a connected server. Evaluations showed that our approaches are able to provide fast simulation results, even in cases of low data rates. Compared to the streaming approach, our approaches improve the performance of the system by a factor of up to $13$. The performance depends on the actual data rate and on the size and frequency of required updates.
In the future, we plan to include uncertain sensor data in our approaches. Additionally, we will research methods for signaling uncertainty to the user of the mobile device.
Acknowledgements {#acknowledgements .unnumbered}
----------------
The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/2) at the University of Stuttgart.
Details About the Ensemble Kalman Filter {#sec:details-about-kalman-filter}
========================================
In this appendix, we briefly describe the steps required to implement the ensemble Kalman filter (EnKF) for our partial update approach, based on [@Houtekamer1998; @Evensen2003]. We first describe the input to the filter and the general idea, and then provide more details on the calculation. Notice that while the EnKF can also be used with implicit methods, our implementation uses an explicit method as surrogate model.
The filter is based on a simulation model described by a matrix $M\in\mathbb{R}^{n\times n}$ and a problem-specific procedure for the perturbation of states. After the perturbation, the mean of the ensemble members should track the real state of the system, i.e., the reference simulation state.
For every system state, partial updates are provided by means of the updated values in a vector $u_{i+1}$ and a selection matrix $H_{i+1}$ that encodes the positions of the updated points. Notice that $H_{i+1}$ and $u_{i+1}$ can easily be constructed from the list-of-tuples representation $\{ (\textit{position}, \textit{value}), \dots \}$ of the updates.
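Such a construction might look as follows in NumPy (an illustrative sketch, not the paper's code):

```python
import numpy as np

def build_update(pairs, n):
    """Translate partial updates given as (position, value) pairs into the
    update vector u and the selection matrix H for a state of length n."""
    u = np.array([value for _, value in pairs], dtype=float)
    H = np.zeros((len(pairs), n))
    for row, (position, _) in enumerate(pairs):
        H[row, position] = 1.0    # H maps state entry -> update entry
    return u, H
```

Applying `H` to a state vector then extracts exactly the entries covered by the partial update.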
The idea of the EnKF is to track the states $S_i$ and the uncertainty of the states as the sample covariance of the ensemble members $e_i^{(j)}$. Using the ensemble members, the so-called Kalman gain $K$ is calculated for every state update. The Kalman gain defines the influence of the update on the current state of the surrogate result and is calculated as $K = CH^T (HCH^T)^{-1}$, where $H^T$ denotes the transpose of $H$. The sample covariance $C$ is calculated as $C = E[(E - X)(E-X)^T]$, where $E[\cdot]$ denotes the expected value. The variable $X$ should be a good guess of the reference simulation state, which is provided by the ensemble mean.
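Putting the pieces together, the Kalman gain can be computed from the ensemble as follows (a sketch; in practice the inverse may need regularization when $HCH^T$ is near-singular):

```python
import numpy as np

def kalman_gain(E, H):
    """Kalman gain K = C H^T (H C H^T)^-1 from the ensemble matrix
    E (n x n_e), with C the sample covariance around the ensemble mean."""
    X = E.mean(axis=1, keepdims=True)      # ensemble mean as state estimate
    D = E - X
    C = D @ D.T / (E.shape[1] - 1)         # sample covariance
    S = H @ C @ H.T
    return C @ H.T @ np.linalg.inv(S)
```

Note that only $HCH^T$, whose size is the number of updated points, has to be inverted, which keeps the analysis step cheap for small partial updates.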
---
author:
- 'I. Minchev, C. Chiappini, M. Martig'
title: 'Chemodynamical evolution of the Milky Way disk II: Variations with Galactic radius and height above the disk plane'
---
Introduction {#sec:intro}
============
Galactic archeology aims at understanding the formation and evolution of the Milky Way (MW), where the chemical and kinematical information contained in its stellar component is used as fossil records [@freeman02; @matteucci12]. Unprecedented amounts of data from a number of ongoing and planned astrometric and spectroscopic Galactic surveys (RAVE - @steinmetz12; SEGUE - @yanny09; HERMES - @freeman10; APOGEE - @allende08) will be available soon, especially from Gaia [@perryman01] and 4MOST [@dejong12], which together will provide high-accuracy 6D kinematics and chemistry for more than $10^7$ disk stars.
It has recently been recognized that chemical evolution modeling of galactic disks must be combined with dynamics. The main reason for this is that numerical simulations have shown that stars do not remain near their birth places, but migrate throughout the disk during their lifetimes. This redistribution of angular momentum has been shown to be caused by the effect of non-axisymmetric disk features, such as spiral structure [@sellwood02; @roskar08a] and the central bar [@mf10; @minchev11a; @brunetti11; @dimatteo13]. This has resulted in efforts to understand how traditional, static, chemical evolution disk modeling couples with dynamics, as discussed in detail in the introduction of the first paper of this series, [@mcm13] (hereafter paper I). We also refer the reader to the comprehensive recent reviews by [@rix13] and [@binney13].
The chemodynamical model of paper I {#sec:paper1}
-----------------------------------
The chemodynamical model we use in this work was presented in paper I, where we mostly concentrated on an annulus of 2 kpc, centered on the “solar radius". The main feature that makes this model unique is the fusion between a state-of-the-art simulation in the cosmological context [@martig09; @martig12] and a detailed thin-disk chemical evolution model.
The exact star formation history and chemical enrichment from our chemical model is implemented into the simulation with more than 30 elements assigned to each particle. This novel approach has made it possible to avoid problems with chemical enrichment and star formation rates currently found in fully self-consistent simulations, as described in paper I.
The simulation builds up a galactic disk self-consistently by gas inflow from filaments and mergers and naturally takes into account radial migration processes due to early merger activity and internal disk evolution at low redshift. The last massive merger takes place at $\sim9$ Gyr look-back-time having a disk mass ratio of 1:5. A relatively quiescent phase marks the last 8-9 Gyr of evolution. A number of less violent events (1:70-1:100 mass ratio) are present during this period, with their frequency decreasing with time. The accreted disk component at the final time, estimated at $4<r<16$ kpc, is $\sim3$% of the total disk mass. In paper I and this work we chemically tag only stars born in situ and do not consider the accreted stellar component, which would introduce an additional complexity in the chemo-kinematic relations. The exploration of the range of parameters arising from considering the accreted component is deferred to a future work.
A central bar develops early on, doubles in size in the last $\sim6$ Gyr, and has a similar length at the final simulation time to that of the MW (see paper I, Fig. 1 rightmost top panel). Snapshots of the disk face-on and edge-on stellar surface density can be seen in Fig. 1 of paper I. An important ingredient, which ensures we capture disk dynamics similar to the MW, is that prior to inserting the chemical evolution model, we rescale the simulation to place the “solar radius" (8 kpc) just outside the 2:1 outer Lindblad resonance (OLR) of the bar, as is believed to be the case for the MW (e.g., @dehnen00 [@mnq07; @minchev10]). Consideration of the bar when studying the MW disk is important, since the bar is expected to dominate the disk dynamics within 2-3 scale-lengths through its corotation resonance (CR) and OLR, may drive spiral structure of different multiplicity (e.g., @masset97 [@quillen11; @minchev12a]), and be responsible for coupling between the vertical and radial motions at preferred locations both in the inner [@combes90; @quillen14] and the outer disk [@minchev12b].
Need for chemo-kinematics information of the entire disk
--------------------------------------------------------
![image](rirf2.eps){width="18.5cm"}
While a range of models may be able to match the chemo-kinematics of stars in the solar neighborhood, discrimination between different evolution scenarios can be made by requiring that models are compliant with data covering extended portions of the disk. By integrating the orbits of RAVE giants, [@boeche13b] were able to cover extended disk radii by considering the guiding radii, instead of the current stellar positions. SEGUE G-dwarf data has also been used to cover disk regions between 6-12 kpc [@cheng12a; @bovy12a]. Large portions of the Galactic disk close to the plane are now observed with APOGEE [@anders14; @hayden14].
Variations of chemical gradients are expected for different age populations, as well as for different vertical slices of the disk. A realistic MW chemo-dynamical model must be able to explain not only the local properties of stars, but also these variations with Galactic radius and distance from the disk midplane. The goal of this work is to extend the results of paper I to regions beyond the solar vicinity, using the same model, and to provide an understanding of the causes of the expected variations. We note, however, that a direct comparison between the results presented here and observational data must be done with care, as observational biases can strongly affect chemo-kinematic relations, especially at large distances. A future work will be dedicated to a proper comparison to observations with the help of mock catalogues constructed from our models (Piffl et al., in preparation).
Effects of radial migration in our simulation
=============================================
In paper I (Fig. 1, bottom panel) we showed the changes in angular momentum at different stages of the disk evolution. This revealed that the strength and radial dependence of migration is governed by merger activity at high redshift and internal perturbations from the bar and spirals at later times.
Another way of looking at the mixing induced throughout the galaxy lifetime is presented in Fig. \[fig:rirf\], where the top row shows density contours of stellar final radii, $r_{final}$, versus birth radii, $r_0$, at the end of the simulation. The contour level separation is given on top of the left row. The inner 1-kpc disk is not shown, in order to display properly the contours. The disk is divided into six age groups, as specified on top of each panel. The dotted-red and solid-blue vertical lines indicate the positions of the bar’s CR and OLR at the end of the simulation. The dashed-black line shows the locus of non-migrating circular orbits. Deviations from this line are caused not only from stars migrating from their birth places, but also from stars on eccentric orbits away from their guiding radii. The latter effect becomes more important with increasing age, especially for stars with age $>8$ Gyr, e.g., those formed before the last massive merger event in our simulation.
To exclude the effect of high-eccentricity stars, in the bottom row of Fig. \[fig:rirf\] we show the density of final versus birth [*guiding radii*]{}, $r_{g,final}$ and $r_{g,0}$, respectively. We estimate these for each stellar particle as $$\label{rg0}
r_{g,0} = \frac{L_0}{v_{c,0}}$$ and $$\label{rg1}
r_{g} = \frac{L}{v_{c}},$$ where $L_0\equiv r_0 v_{\phi,0}$ and $L\equiv r v_{\phi}$ are the initial and final angular momenta, respectively, while $v_{c,0}$ and $v_c$ are the initial and final values of the circular velocity at the corresponding radii, which change roughly between 190 and 220 km/s, remaining fairly flat except for the decline at $r<1.5$ kpc (see top row of Fig. 1 in paper I).
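As a direct transcription of the two equations above, the guiding radius of a particle can be computed with a trivial helper (units of kpc and km/s; purely for illustration):

```python
def guiding_radius(r, v_phi, v_c):
    """Guiding radius r_g = L / v_c with angular momentum L = r * v_phi.
    r in kpc, v_phi and v_c in km/s; returns r_g in kpc."""
    return r * v_phi / v_c
```

On a circular orbit ($v_\phi = v_c$) the guiding radius equals the current radius, so deviations isolate the eccentric part of the motion.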
Several interesting differences are seen between the stellar distributions in the $r_0-r_{final}$ and $r_{g,0}-r_{g,final}$ planes. Firstly we find that all guiding radius distributions are much more symmetric around the dashed-black line, indicating more balanced exchange of particles across a given disk radius. This is because the asymmetric drift effect, where outer radii are populated by high-eccentricity stars close to their apocenters, has been removed by using the guiding radii. Another difference is the better identification of efficient migration radii in the guiding radii plots (such as the bar’s CR and OLR radii indicated by the dotted-red and solid-blue verticals). Note that these become more clear for younger (and thus colder) populations, because colder orbits respond more strongly to the non-axisymmetric perturbations (e.g., spirals and bar). Interestingly, a well defined peak is found inside the bar’s CR for the oldest stellar component when we use the guiding radii. While some of these stars have gained angular momentum already during the strong merger phase ending in the first 2-3 Gyr of evolution, the peak becomes well defined only after the bar is formed, as seen in middle row, left panel of Fig. 15 in paper I. This indicates that the bar affects not only the stars born after its formation, but also the oldest stars, as should be expected.
In both rows of Fig. \[fig:rirf\] we find that the largest width around the black-dashed line occurs for stars in the age groups $4<age<6$ and $6<age<8$ Gyr. This is indicative of large changes in angular momentum related to significantly long exposure to migration mechanisms. The two oldest populations do not migrate as much because of their high velocity dispersions resulting from the high-redshift merger phase.
A decrease in disk scale-length is apparent with increasing age, ending up with a very concentrated disk for ages $>$10 Gyr. This is in agreement with observations in the MW (e.g., @bensby11 [@bovy12a]) and will be discussed more in Sec. \[sec:hd\]. The effect of the bar is seen in the overdensity inside the CR (dotted-red vertical), indicating stars shifted preferentially from their birth radii outward in the disk.
Migration cools the disk during mergers {#sec:mer}
---------------------------------------
We showed in paper I that stars born in the inner disk can arrive at the simulated solar vicinity with lower velocity dispersion than the in-situ born population. This is in drastic contrast to the expected effect of outward migrators in a quiescent disk evolution, where stars arriving from the inner disk are slightly warmer than the non-migrating population [@minchev12b]. Below we investigate this in greater detail by studying the entire disk.
![ The effect of migration during (left) and after (right) a massive merger. [**Left column:**]{} The top panel shows the changes in angular momentum in a time period of 2 Gyr encompassing the merger event. The vertical velocity dispersion profiles of inward migrators, outward migrators and the non-migrating population, as indicated, are shown in the second panel. The net effect of migrators can be seen in the third panel. The bottom panel shows the fractional change in velocity dispersion resulting from migration $\Delta\sigma_{\rm z}=(\sigma_{\rm z,all}-\sigma_{\rm z,non\_mig})/\sigma_{\rm z,all}$. Migrators cool the outer disk, thus working against disk flaring. [**Right column:**]{} Same as on the left, but for stars born after the last massive merger event. Minimal effect from migration is seen on the disk vertical velocity dispersion. \[fig:rdsigz\] ](rdsigz.eps){width="8.5cm"}
First we consider all stars born right before the last massive merger encounters the disk, at $t=1.4$ Gyr. To see how much angular momentum redistribution takes place during this merger event, in the top panel of Fig. \[fig:rdsigz\] we plot number density contours of the changes in guiding radius, $\Delta r_g$, versus the initial guiding radius, $r_{g,0}$, estimated in the time period $1.4<t<3.4$ Gyr. The percentage of stars in each contour level is given by the color bar on the right.
The strong redistribution of angular momentum seen in the $r_{g,0}-\Delta r_g$ plane is caused both by the tidal effect of the satellite, which plunges through the galactic center, and the strong spiral structure induced in the gaseous component. We note that the disk dynamics is dominated by the very massive gas disk at this early stage of disk evolution.
Next we separate migrating from non-migrating stars (in the considered time period) by applying the technique described by [@minchev12b]. This consists of separating stars in a given radial bin into “migrators” and “non-migrators” as follows. Non-migrators are those particles found in the selected radial bin at both the initial and final times, while migrators are those that were not present in the bin initially but are there at the final time. We distinguish between “outward” and “inward” migrators – those initially found at radii smaller or larger than the annulus considered, respectively. This is done for radial bins over sampling the entire radial extent of the disk. See [@minchev12b] for more details.
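The bin-wise separation described above can be sketched as follows (an array-based illustration; the half-open bin convention is our assumption):

```python
import numpy as np

def classify_bin(r_init, r_final, r_lo, r_hi):
    """Separate particles for one radial bin [r_lo, r_hi) into
    non-migrators, outward migrators, and inward migrators, following
    the scheme of Minchev et al. (2012) described in the text."""
    in_before = (r_init >= r_lo) & (r_init < r_hi)
    in_after = (r_final >= r_lo) & (r_final < r_hi)
    non_migrators = in_before & in_after       # in the bin at both times
    outward = in_after & (r_init < r_lo)       # arrived from smaller radii
    inward = in_after & (r_init >= r_hi)       # arrived from larger radii
    return non_migrators, outward, inward
```

Looping this over bins sampling the full radial extent reproduces the migrator/non-migrator decomposition used for the velocity dispersion profiles.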
![image](r0a.eps){width="18cm"}
In the second top-to-bottom left panel of Fig. \[fig:rdsigz\] we plot the vertical velocity dispersion profiles of inward migrators, outward migrators and the non-migrating population, as indicated. Interestingly, inward and outward migrators [*during the merger*]{} have reversed roles: stars migrating inward make a positive contribution to $\sigma_z$, while those migrating outward cool the disk. This is the opposite of what is expected for migration in the absence of external perturbations (as described in the Introduction and shown in paper I). The recent work by [@minchev14a] showed that the arrival of cool, old, \[$\alpha$/Fe\]-enhanced, metal-poor stars from the inner disk can provide a way to quantify the MW merger history; the idea was applied to a high-quality selection of RAVE giants [@boeche13a] and the SEGUE G-dwarf sample [@lee11]. Indeed, the effect of satellites perturbing the MW disk has been linked to a number of observations of structure in the phase-space of local stars (e.g., @minchev09 [@gomez12a; @gomez12b; @gomez13; @widrow12; @ramya12]).
The net effect of migrators during the merger can be seen in the third top-to-bottom left panel of Fig. \[fig:rdsigz\]. The overall contribution to the vertical velocity dispersion from the migrating stars during the merger is negative, in the sense that it is lower than that of the stars which did not migrate. We emphasize that we only considered stars born before the merger took place; therefore, the effect seen is not related to the accreted population.
Finally, to quantify the changes to the disk vertical velocity dispersion resulting from migration in the given period of time, we plot the fractional changes in the bottom left panel of Fig. \[fig:rdsigz\]. We estimate these as $$\label{eq:1}
\Delta\sigma_{\rm z}=(\sigma_{\rm z,all}-\sigma_{\rm z,non\_mig})/\sigma_{\rm z,all},$$ where $\sigma_{\rm z,all}$ and $\sigma_{\rm z,non\_mig}$ are the vertical velocity dispersions for the total population and the non-migrators, respectively. Decrease in $\sigma_z$ of up to 30% can be seen.
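Equation (1) can be evaluated per radial bin directly from the particle vertical velocities. The minimal sketch below uses our own toy data, not simulation output, to show the sign convention: cooler arrivals produce a negative $\Delta\sigma_{\rm z}$.

```python
import numpy as np

def delta_sigma_z(vz_all, vz_non_mig):
    """Fractional change in vertical velocity dispersion due to migration,
    Eq. (1): (sigma_all - sigma_non_mig) / sigma_all.  Negative values mean
    the migrators cool the bin."""
    s_all = np.std(vz_all)
    return (s_all - np.std(vz_non_mig)) / s_all

rng = np.random.default_rng(0)
vz_non = rng.normal(0.0, 20.0, 5000)   # non-migrators, sigma_z ~ 20 km/s
vz_mig = rng.normal(0.0, 10.0, 5000)   # cooler arrivals, sigma_z ~ 10 km/s
d = delta_sigma_z(np.concatenate([vz_non, vz_mig]), vz_non)
```

With these toy dispersions the mixed population is cooler than the non-migrators alone, so `d` comes out negative, of the same order as the up-to-30% decrease quoted above.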
We have found here that during a massive merger sinking deep into the disk center, migrators cool the outer disk, thus working against disk flaring. This is related to the stronger effect of mergers on the outer disk, owing to the exponential decrease in the disk surface density. Stars arriving from the inner parts during a merger would therefore be cooler than the rest of the population.
We now contrast the above results with a sample born after the last massive merger event. The right column of Fig. \[fig:rdsigz\] is the same as the left one, but for stars born after $t=2.5$ Gyr (or 8.7 Gyr ago). Minimal effect from migration is seen on the disk vertical velocity dispersion during the last 2 Gyr of (mostly quiescent) evolution, despite the strong migration efficiency seen in the top right panel.
In accordance with the above results, a small (and mostly negative) contribution of radial migration to the age-velocity relation was also found by [@martig14b], who studied a suite of seven simulations similar to the one considered in this work. Additionally, studying the high-resolution isolated disk simulation of [@donghia13], the recent work by [@vera-ciro14] also found minimal effect of migration on the disk thickening.
![ [**Left column:**]{} Variation with Galactic radius of the fraction of stars arriving from the inner disk, from the outer disk, and those native to a given radius. [**Right column:**]{} Peak shift and width of the total distributions (black curves). The peak shift of stars born before (red) and after (blue) the last massive mergers are also shown. From top to bottom, different rows show the above quantities estimated for the final guiding radius ($r_{g,final}$), the guiding birth radius ($r_{g,0}$), and the actual birth radius ($r_{0}$), as in Fig. \[fig:r0\]. The peak shift is defined as the difference between the median final radius and the peak of the total distributions shown in Fig. \[fig:r0\]. The width is estimated as the standard deviation of the total distribution in each bin. For this figure we consider 30 overlapping radial bins instead of the five used in Fig. \[fig:r0\], in order to better exhibit the radial variations. []{data-label="fig:r0a"}](peak_r.eps){width="8.6cm"}
Contamination from migration at different distances from the Galactic center
----------------------------------------------------------------------------
We would now like to quantify the radial migration as a function of galactic radius in our simulation. For this purpose, in Fig. \[fig:r0\] we study the origin of stars ending up in five radial bins of width 2 kpc, centered on $r=3$, 5, 7, 9, and 11 kpc. For each annulus (green vertical strip), six different age-groups are shown, color-coded in the bottom-leftmost panel. The solid-black line shows the total distribution in each panel. The bar’s CR and OLR are shown by the dotted-red and solid-blue verticals. The middle row corresponds to the solar vicinity (the bottom-middle panel is the same as the left panel of Fig. 3 in paper I) with the bar’s OLR at 7.2 kpc. Guiding radii for this figure are estimated from equations \[rg0\] and \[rg1\] as in Fig. \[fig:rirf\].
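Since equations \[rg0\] and \[rg1\] are given in paper I rather than reproduced here, the sketch below uses the simplest common approximation instead: a flat rotation curve, for which $r_g = |L_z|/v_c$. The circular speed and the function itself are our assumptions, not the paper's prescription:

```python
import numpy as np

def guiding_radius(x, y, vx, vy, v_circ=220.0):
    """Guiding radius from the z-component of the angular momentum,
    assuming a flat rotation curve with circular speed v_circ (km/s):
    r_g = |L_z| / v_c.  A stand-in for equations [rg0]-[rg1] of paper I,
    which are not reproduced in this section."""
    lz = x * vy - y * vx           # kpc km/s
    return np.abs(lz) / v_circ     # kpc

# a star on a circular orbit at 8 kpc recovers r_g = 8 kpc
r_g = guiding_radius(8.0, 0.0, 0.0, 220.0)
```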
In the top row of Fig. \[fig:r0\] we show the [*final*]{} guiding radii, $r_{g,final}$, of stellar particles ending up in the five annuli considered at the final simulation time. This shows that there is a substantial fraction of stars with angular momentum lower (to the left of the bin) or higher (to the right of the bin) than appropriate for the given annulus, but populating that bin close to their apocenters or pericenters, respectively. The predominance of stars with lower angular momenta is related to the well-known asymmetric drift effect seen in solar neighborhood stars (mean rotational velocity distribution offset to negative values), which is found here to increase strongly with increasing galactic radius.
In the second row of Fig. \[fig:r0\] we show the [*birth*]{} guiding radii, $r_{g,0}$, of stars currently in the five bins considered. The distributions widen strongly compared to those shown in the first row, indicating significant changes in angular momentum since their birth. The drastic differences found between the distributions in the first and second rows speak of the importance of angular momentum redistribution within the disk, i.e., radial migration. We emphasize the strong changes in angular momentum seen even for the oldest ages (red histograms), although the migration efficiency for these stars is expected to be reduced due to their larger velocity dispersions (see Fig. \[fig:rirf\]).
Finally, in the bottom row of Fig. \[fig:r0\] we show the actual birth radii, $r_0$, (not guiding radii) of stars ending up in the final five bins considered, where we see the effect of both the changes in angular momentum and heating. Distributions are similar to those of the initial guiding radii, $r_{g,0}$, shown in the second row.
While Fig. \[fig:r0\] can be interpreted as giving an account for (1) stars appearing in a given radial bin close to their apo- and pericenters (first row) and for (2) stars which have migrated there (middle row), the situation is more complicated than that. Even stars with guiding radii outside the given radial bin have experienced radial migration. Therefore, those cannot be treated simply as non-migrators contaminating a given radius because of large eccentricities, as is frequently done in the current literature. Evidence that even the oldest and, thus, the hottest stars have migrated substantially is given by the differences seen in the red histogram in the top and middle rows of Fig. \[fig:r0\], where the initial guiding radii cover a much larger radial extent than the final guiding radii.
![image](chem1.eps){width="18cm"}
To quantify better the amount of migration and heating seen in Fig. \[fig:r0\], in Fig. \[fig:r0a\] we plot the radial variation of the fraction of stars arriving from the inner disk, from the outer disk, and those native to a given radius (left column), as well as the peak shift and width of the distributions (right column). The peak shift is defined as the difference between the median final radius and the peak of the total distribution shown in Fig. \[fig:r0\]. The width is estimated as the standard deviation of the total distribution at each annulus.
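These two statistics can be computed directly from a histogram of the birth radii. The sketch below mirrors the definitions just given, on toy Gaussian data; the binning choice is ours:

```python
import numpy as np

def peak_shift_and_width(r_birth, median_r_final, bins=60):
    """Peak shift and width as defined in the text: the shift is the
    median final radius minus the peak of the birth-radius distribution;
    the width is that distribution's standard deviation."""
    counts, edges = np.histogram(r_birth, bins=bins)
    i = int(np.argmax(counts))
    peak = 0.5 * (edges[i] + edges[i + 1])
    return median_r_final - peak, float(np.std(r_birth))

rng = np.random.default_rng(1)
# toy annulus at 8 kpc fed mostly by stars born further in
r_birth = rng.normal(6.5, 1.5, 20000)
shift, width = peak_shift_and_width(r_birth, 8.0)
```

For this mock sample the peak sits near 6.5 kpc, giving a shift of roughly 1.5 kpc, comparable in magnitude to the shifts discussed below.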
As in Fig. \[fig:r0\], the three rows of Fig. \[fig:r0a\], from top to bottom, correspond to $r_{g,final}$, $r_{g,0}$, and $r_0$. To see better the radial variation of the above quantities, we have considered 30 overlapping radial bins in the radial range $4<r<12$ kpc, instead of the five bins shown in Fig. \[fig:r0\].
Focusing on the top-left panel of Fig. \[fig:r0a\], we find an increase with increasing distance from the Galactic center, in the fraction of stars with [*final guiding radii*]{} ($r_{g,final}$) inside a given annulus (red-dashed curve), from $\sim$35% to $\sim$60%. Only about 10% of stars have guiding radii higher than those appropriate for a given annulus (blue-solid curve). The fraction of stars with angular momenta just right for a given annulus decreases with Galactic radius from $\sim$55% to $\sim$35% (black-dotted curve). Note that at the solar radius (8 kpc) the fraction of stars belonging to the inner disk is similar to that of stars with angular momenta typical for that annulus. It should be kept in mind that the above numbers would change if we were to use an annulus width different from the 2 kpc used for Figures \[fig:r0\] and \[fig:r0a\].
While we find that a large contribution can come from stars with angular momenta lower than that appropriate for a given radial bin (e.g., $\sim50\%$ at the solar radius), it is also important to know how far these stars come from. Therefore, in the top-right panel of Fig. \[fig:r0a\] we show the radial variation of the distributions’ peak shift (solid curve) and width (dashed curve). An increase in both of these is found with radius, indicating stronger contamination in the outer disk.
In the second row of Fig. \[fig:r0a\] we show that the fractions of stars with [*birth guiding radii*]{} ($r_{g,0}$) coming from the inner or outer disk are appreciably larger than the fraction of stars arriving at a given radial bin simply due to their eccentric orbits (as in the top row). For example, at the solar radius about 60% of stars were born in the inner disk, only about 30% were born in situ ($7<r<9$ kpc), and about 15% migrated from the outer disk. However, in the second-row, right panel, we see that the peak shift at the solar radius more than doubles and the width of the distribution is also considerably larger than that seen in the final guiding radii. The nonlinear increase with radius of the peak shift is related to the variation in migration efficiency with Galactic radius, as seen in the bottom row of Fig. 1 of paper I (see discussion in paper I). The rate of increase in contamination with radius is higher for the birth guiding radii than for the final guiding radii by about a factor of two (compare slopes in black-solid curves in the top and middle panels in the right column of Fig. \[fig:r0a\]).
Finally, the bottom row of Fig. \[fig:r0a\] shows the fraction of stellar actual birth radii ($r_{0}$) contributing to different radial bins. This can be seen as the effect of both migration and heating, but note that it is not simply the addition of the top two rows since, for example, some stars with guiding radii appropriate for a given radial bin can be found outside that annulus at a given time. The bottom row of Fig. \[fig:r0a\] is similar to the middle row, with a somewhat higher fraction of stars arriving from the inner disk (red-dashed curve in left panel) and an even steeper increase in the peak shift (black-solid curve in right panel).
As mentioned earlier and seen in Fig. \[fig:r0\], the oldest stars are the ones most affected by radial migration processes. To illustrate this and contrast the effect of migration and heating on young and old stars, in the right column of Fig. \[fig:r0a\] we over-plot the peak shift variation with radius of stars born before the last massive merger (ages$>9$ Gyr, solid-red curve), stars born after the last massive merger (ages$<9$ Gyr, solid-green curve), and stars with age$<5$ Gyr (solid-blue curve). It is notable that (i) the increase of the peak shift with radius for the younger stellar populations is also quite strong and (ii) the effect is significantly stronger when migration and heating act together – a maximum of $\sim1.6$ kpc shift is found for $r_0$ (bottom panel), while the effect of heating only, indicated by $r_{g,final}$, reaches $\sim$0.85 kpc.
The predicted increase in contamination from migration and heating with radius is due to the exponential drop in disk surface density, where more stars are available to migrate outwards than inwards. Note that inward migration is still very important both for the kinematics and chemistry, in that stars arriving from the outer disk balance the effect of stars coming from the inner disk at intermediate radii (as discussed in Sections \[sec:amr\] and \[sec:gas\]).
As may be expected at this point, this difference in contamination from radial migration and heating at different galactic radii should have a strong effect on the final disk chemistry. We anticipate that these effects are now testable with the large body of data coming from current Galactic spectroscopic surveys. We show the implications of this in the following sections.
Radial variations of chemodynamical relations
=============================================
Age-Metallicity Relation at different radii {#sec:amr}
-------------------------------------------
The first row of Fig. \[fig:chem\] plots age-\[Fe/H\] stellar density contours for the same disk annuli as in Fig. \[fig:r0\]. Contour levels are indicated in the color-bar attached to the rightmost panel. The middle panel ($7<r<9$ kpc) corresponds to the solar neighborhood and is the same as the plot shown in Fig. 4 of paper I. The input chemistry, native to each bin, is shown by the green-solid curve. The pink dashed curves indicate the mean \[Fe/H\] in each panel. Measurement uncertainties of $\pm0.1$ dex drawn from a uniform distribution are convolved with our simulated \[Fe/H\].
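The convolution with measurement errors amounts to adding uniform noise to each simulated abundance. A minimal sketch (the function name is ours):

```python
import numpy as np

def add_obs_uncertainty(abundance, err=0.1, seed=None):
    """Convolve simulated abundances with mock measurement errors drawn
    uniformly from [-err, +err] dex, as done for [Fe/H] in the figure."""
    rng = np.random.default_rng(seed)
    return abundance + rng.uniform(-err, err, size=np.shape(abundance))

feh = np.zeros(100_000)                    # toy sample, all at [Fe/H] = 0
feh_obs = add_obs_uncertainty(feh, err=0.1, seed=2)
```

The perturbed sample keeps its mean but acquires a spread bounded by $\pm0.1$ dex around each true value.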
While a scatter in the age-metallicity relation (AMR) is seen at all radii, the mean (pink-dashed curves) is very close to the local evolution curve in the inner three bins. Only outside the solar radius do we find significant deviation, with the strongest flattening in the AMR at the outermost bin ($11<r<13$ kpc). This results from the accumulation of outward migrators in the outer disk, as seen in Figures \[fig:r0\] and \[fig:r0a\]. Conversely, at radii close to and smaller than the solar radius the contribution from metal-rich stars arriving from the inner disk is mostly balanced by metal-poor stars arriving from the outer disk. This is analogous to the contribution to the velocity dispersion from inward and outward migrators discussed by [@minchev12b], where the overall effect on the disk heating (and thus thickening) was found to be minimal at radii less than about four disk scale-lengths. A similar effect results for the gas (as discussed in Sec. \[sec:gas\], see Fig. \[fig:gas\]), rendering recycled gas flows unimportant in the range $4<r<12$ kpc.
The left panel of Fig. \[fig:chem1a\] shows the mean \[Fe/H\] variation with age at different Galactic radii as color-coded. These curves are identical to the pink-dashed curves in the top row of Fig. \[fig:chem\].
![ [**Left:**]{} The mean \[Fe/H\] variation with age at different Galactic radii as color-coded. Identical to the pink-dashed curves in the top row of Fig. \[fig:chem\]. [**Right:**]{} The mean \[Fe/H\]-\[Mg/Fe\] relation for different radii. Same as pink-dashed curves in the second row of Fig. \[fig:chem\]. []{data-label="fig:chem1a"}](chem1a.eps){width="8.6cm"}
Another important property of the AMR predicted by our model is that the maximum metallicity achieved in each bin decreases from \[Fe/H\]$\sim$0.5-0.6 to about $\sim$0.2, as one moves from the innermost towards the outermost radial bins. The exact fraction of metal-rich stars predicted by our model is sensitive to the initial chemical model assigned to the inner regions of the disk. As discussed in paper I, we extrapolated our thin-disk chemical evolution model to the Galactic center in the innermost 2 kpc, not assigning a specific bulge chemistry to particles born in that region. This is here justified by the fact that we first want to study the effect of migration in a pure disk. The impact of considering bulge chemistry will be studied in a forthcoming paper. We anticipate that this would mostly affect the predicted fraction of super-metal-rich stars in the radial bins internal to the solar vicinity.
The fraction of stars with metallicities above $\sim$0.2-0.3 dex can be used as a powerful constraint on our models. Observationally, it is still difficult to quantify this fraction, not only due to observational biases induced by color-selected spectroscopic samples, but also because current abundance pipelines have difficulty with metallicities above solar. This situation will certainly improve in the future as spectral libraries and stellar isochrones are extended beyond solar metallicity.
![ [**Left:**]{} Overlaid normalized metallicity distributions at different distances from the Galactic center (all solid-black histograms in bottom row of Fig. \[fig:chem\]). The peak of the total distribution for each radial bin is always centered on \[Fe/H\]$\approx-0.15\pm0.06$ dex and the metal-poor wings are nearly identical, extending to \[Fe/H\]$\approx-1.3$ dex. In contrast, the metal-rich tail of the distribution decreases with increasing radius, giving rise to the expected decrease in mean metallicity with radius. [**Right:**]{} As on the left but for \[Mg/Fe\]. Similarly to \[Fe/H\], the peak does not vary significantly as a function of Galactic radius, being situated at \[Mg/Fe\]$\approx0.15\pm0.08$ dex. Unlike the metallicity distribution, the $\alpha$-poor tail is lost as radius increases, while the $\alpha$-rich tail always ends at \[Mg/Fe\]$\approx 0.45$ dex. However, the correspondence between \[Fe/H\] and \[Mg/Fe\] is not symmetric: for \[Mg/Fe\] both wings of the distribution are affected and the width decreases with increasing radius. See Fig. \[fig:chem4\] for variation with distance from the disk plane. []{data-label="fig:chem3"}](chem3.eps){width="8.5cm"}
![image](chem2.eps){width="18cm"}
\[Fe/H\]-\[Mg/Fe\] relation at different radii
----------------------------------------------
The second row of Fig. \[fig:chem\] shows \[Fe/H\]-\[Mg/Fe\] density contours. Uncertainties of $\pm$0.1 and $\pm$0.05 dex (typical of high-resolution observations in the literature) drawn from a uniform distribution are convolved with our simulated \[Fe/H\] and \[Mg/H\] abundances, respectively[^1].
An overall contraction in \[Mg/Fe\] is found with increasing galactocentric distance, due to the decrease in both high-metallicity and high-\[Mg/Fe\] stars in the outer bins.
In the model, most of the super-metal-rich stars originating in the innermost disk regions (see gold contours) are predicted to have sub-solar \[Mg/Fe\] ratios. The exact size of this effect (i.e., the absolute values of \[Mg/Fe\] at different \[Fe/H\]) is strongly dependent on the adopted stellar yields of Mg and Fe and the SNIa rates (here we adopt the same stellar yields as in @francois04). How stellar yields behave in above-solar metallicity regimes is still very uncertain. Improvements to stellar evolution models at very high metallicities are under way by several groups, and will soon be implemented in our models as well.
As a summary of the second row of Fig. \[fig:chem\], the right panel of Fig. \[fig:chem1a\] shows the variation of the mean \[Fe/H\]-\[Mg/Fe\] relation with Galactic radius as color-coded. These curves are identical to the pink-dashed curves in the second row of Fig. \[fig:chem\]. Only small variations are found, mostly at super-solar values of \[Mg/Fe\].
We do not find a bimodality in the \[Fe/H\]-\[Mg/Fe\] plane at any distance from the Galactic center. We showed in paper I that the distribution at the solar radius can become bimodal when selection criteria used in high-resolution surveys (e.g., @bensby03) were applied (see paper I, Fig. 12). It should be noted that the gap in \[$\alpha$/Fe\] (usually $\sim0.2$ dex) has now been seen in a number of different observational samples (e.g., SEGUE - @lee11, APOGEE - @anders14, HARPS - @adibekyan13, GES - @recio-blanco14) indicating that it may be a real feature and not just a selection bias.
Several studies of the \[Fe/H\]-\[$\alpha$/Fe\] plane (e.g., @bensby11 [@cheng12b; @anders14]) have shown that the number of high-\[$\alpha$/Fe\] metal-poor stars decreases strongly in the outer disk. A work in preparation is dedicated to the proper comparison with observations, where survey biases will be considered when comparing to our model. However, a decline in the maximum value of \[Mg/Fe\] is already clearly seen in the second row of Fig. \[fig:chem\] - a shift downward in \[Mg/Fe\] of $\sim0.2$ dex is found in the three densest contours, when comparing the innermost to the outermost radial bins. Introducing a gap at $\sim0.2$ dex (as discussed above), which would result naturally from a gap in the star formation at high redshift (as in the two-infall model of @chiappini97), may improve the comparison between our model and the observations.
\[Fe/H\] and \[Mg/Fe\] distributions at different radii
-------------------------------------------------------
The third row of Fig. \[fig:chem\] shows the metallicity distributions at different distances from the Galactic center. In each panel the solid-black histogram shows the total sample, while various colors correspond to groups of common birth radii, as indicated on the left. The thick histogram in each panel shows stars born in that given radial bin, e.g., green corresponds to the solar vicinity.
In all final radial bins the metal-rich tail of the distribution results from stars originating in the inner disk (compare the local metal-rich tail to that of the total distribution). Note that the largest contribution in the range $7<r<11$ kpc (the solar bin and the neighboring two bins) to the metal-rich tail comes from the bar CR region (blue curve). Therefore, the existence of a metal-rich tail throughout most of the disk can be linked to the effect of the MW bar. This was already noted in paper I for the solar vicinity (see bottom row of Fig. 1 of that work).
For better comparison of the \[Fe/H\] distributions at different distances from the Galactic center (solid black histograms in the bottom row of Fig. \[fig:chem\]), in the left panel of Fig. \[fig:chem3\] we show these overlaid and color-coded, as indicated in the right panel. We find that the peak of the total distribution for each radial bin is always centered on \[Fe/H\]$\approx-0.15\pm0.06$ dex and the metal-poor wings are nearly identical, extending to \[Fe/H\]$\approx-1.3$ dex. In contrast, the metal-rich tail of the distribution decreases with increasing radius, giving rise to the expected decrease in mean metallicity with radius. This is a property of the chemical evolution model used, as evident from examining the locally born stars in each annulus in the bottom panel of Fig. \[fig:chem\]. However, we can also see clearly that for each annulus the metal-rich tail is extended (in each case by about 0.2-0.3 dex) because of stars migrating from the inner disk. As shown later in Fig. \[fig:chem4\], this results from stars close to the disk midplane - further evidence that outward migrators do not populate a thick disk.
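The comparison of MDF peaks across annuli can be mimicked with normalized histograms. The toy sketch below uses our own mock samples, not the simulation data, to reproduce the qualitative behavior described: a stable peak near $-0.15$ dex with a metal-rich tail that depends on the inward-migrated fraction.

```python
import numpy as np

def mdf_peak(feh, bins=np.arange(-1.5, 1.0, 0.05)):
    """Location (bin center) of the peak of a normalized metallicity
    distribution function, as compared across annuli in the figure."""
    counts, edges = np.histogram(feh, bins=bins, density=True)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

rng = np.random.default_rng(3)
# two mock annuli: same underlying peak, but the inner one gains a
# metal-rich tail from stars born further in
inner = np.concatenate([rng.normal(-0.15, 0.2, 9000),
                        rng.uniform(0.2, 0.5, 1000)])
outer = rng.normal(-0.15, 0.2, 10000)
```

Both mock samples peak near $-0.15$ dex, while only the inner one shows an appreciable fraction of stars above $+0.2$ dex.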
The right panel of Fig. \[fig:chem3\] is like the left one but for \[Mg/Fe\]. Similarly to \[Fe/H\], the peak does not vary significantly as a function of Galactic radius, being situated at \[Mg/Fe\]$\approx0.15\pm0.08$ dex. In this case the $\alpha$-poor tail is lost as radius increases, while the $\alpha$-rich tail always ends at \[Mg/Fe\]$\approx 0.45$ dex. However, the correspondence between \[Fe/H\] and \[Mg/Fe\] is not symmetric: for \[Mg/Fe\] both wings of the distribution are affected. In both cases the distributions get broader with increasing radius, which is a testable prediction with current Galactic spectroscopic surveys.
To see which ages comprise different regions of the \[Fe/H\] and \[Mg/Fe\] distributions, in Fig. \[fig:chem2\] we decompose them into six different age groups, as indicated in the bottom left panel. It is remarkable that despite the strong decrease in the fraction of old stars (red and orange lines) with increasing radius, the metal-poor tail remains practically identical for samples at different radii (as seen in Fig. \[fig:chem3\], left). While in the central regions it is dominated by the oldest stars, in the outer disk it is a mixture of all ages.
In the bottom row of Fig. \[fig:chem2\] we see that the decrease in the fraction of old stars with increasing radius has a significant effect on the $\alpha$-rich wing of the distribution (see Fig. \[fig:chem3\], right). On the other hand, the $\alpha$-poor tail becomes more prominent in the inner radial bins due to the contribution of the youngest stars (with ages below 3-4 Gyr, born inside the solar circle; see also Fig. \[fig:chem2\]).
The strong decrease of the fraction of old/\[Mg/Fe\]-rich stars with Galactic radius seen in Fig. \[fig:chem2\] indicates that in our model the age/chemically defined thick disk has a shorter scale-length than the thin disk. This is in agreement with observations in the Milky Way (e.g., @bensby11 [@anders14; @bovy12a]) and will be discussed further in Sections \[sec:hd\] and \[sec:highmg\].
![ [**Left column:**]{} Overlaid normalized metallicity distributions for different distances from the Galactic center for stars with $|z|<0.5$ kpc (top) and $0.5<|z|<3$ kpc (bottom). This shows that variations in the metal-rich end seen in Fig. \[fig:chem3\] come mostly from stars close to the disk midplane. [**Right column:**]{} As on the left but for \[Mg/Fe\]. Similarly to \[Fe/H\], strong variations with radius are seen mostly for stars close to the disk midplane. The reason for this is that migrating stars retain cool kinematics (both in the radial and vertical directions), i.e., do not contribute to thick disk formation. []{data-label="fig:chem4"}](chem4.eps){width="8.5cm"}
![image](mgfe.eps){width="18cm"}
Variations with distance from the disk midplane
===============================================
In the previous sections we showed general chemodynamical properties of our Galactic disk model by analyzing all particles confined to 3 kpc vertical distance from the plane (where most of the disk particles lie and which current surveys cover). In this section we consider different cuts in the vertical direction.
\[Fe/H\] and \[Mg/Fe\] distributions at different heights
---------------------------------------------------------
We showed in Fig. \[fig:chem3\] how the \[Fe/H\] and \[Mg/Fe\] distributions vary with distance from the Galactic center. Because the high-\[Fe/H\] and low-\[Mg/Fe\] tails come from stars migrating from the inner disk, it is interesting to find out how this is reflected in samples at different distances from the disk plane.
Similarly to the left panel of Fig. \[fig:chem3\], in the left column of Fig. \[fig:chem4\] we show normalized metallicity distributions for different distances from the Galactic center, but for stars with $|z|<0.5$ kpc (top) and $0.5<|z|<3$ kpc (bottom). We find that variations in the high-\[Fe/H\] tail seen in Fig. \[fig:chem3\] come mostly from stars close to the disk midplane. The right column of Fig. \[fig:chem4\] is the same as the left one, but for \[Mg/Fe\]. Similarly to \[Fe/H\], strong variations with radius are seen mostly for stars close to the disk midplane. The reason is that stars close to the disk plane are not just vertically cool, but also have low eccentricities. Stars with low eccentricities are also the youngest (on average), and thus the most metal-rich population (on average). In contrast, the older/hotter population reaching large distances away from the plane is metal-poor. Note that if stars heated the outer disk as they migrated outwards (i.e., if they populated regions high above the disk midplane) there would be no difference between the samples close to and away from the disk plane, and thus no negative vertical metallicity gradient, as seen in a number of observations (e.g., @carraro98 [@chen11; @kordopatis11]).
The age-\[Fe/H\] and \[Fe/H\]-\[Mg/Fe\] relations, as well as the corresponding distributions for stars with $|z|<0.5$ and $0.5<|z|<3$ kpc are shown in Figures \[fig:chemz1\] and \[fig:chemz2\] in appendix \[sec:a1\].
Chemical gradients at different heights above the plane
-------------------------------------------------------
Various studies have found different metallicity and \[$\alpha$/Fe\] gradients in the MW, as discussed in the Introduction. Fig. \[fig:grad\] illustrates that different mixtures of stellar ages (e.g., because of different slices in $z$) can give rise to a range of different \[Fe/H\] and \[$\alpha$/Fe\] gradients.
Thick black curves in the top row show the azimuthally averaged metallicity variation with galactic radius for stellar samples at different distances from the disk midplane, as marked in each panel. Different colors correspond to different age groups as indicated in the bottom-left panel. The height of rectangular symbols reflects the density of each bin. The bottom row of Fig. \[fig:grad\] shows the same information as above but for \[Mg/Fe\].
: Radial gradients of \[Fe/H\] and \[Mg/Fe\] (dex/kpc) for different age groups and distances from the disk midplane. \[tab:fr\]
$|z|<3$ kpc $|z|<0.25$ kpc $0.25<|z|<0.5$ kpc $0.5<|z|<1.0$ kpc $1.0<|z|<3.0$ kpc
------------------- --------------------- --------------------- --------------------- --------------------- ---------------------
\[Fe/H\], \[Mg/Fe\] \[Fe/H\], \[Mg/Fe\] \[Fe/H\], \[Mg/Fe\] \[Fe/H\], \[Mg/Fe\] \[Fe/H\], \[Mg/Fe\]
${\rm age}<2$ Gyr $-0.058, 0.028$ $-0.057, 0.027$ $-0.059, 0.028$ $-0.064, 0.030$ $-0.077, 0.038$
$2<{\rm age}<4$ $-0.048, 0.021$ $-0.047, 0.021$ $-0.048, 0.021$ $-0.049, 0.021$ $-0.070, 0.033$
$4<{\rm age}<6$ $-0.040, 0.015$ $-0.040, 0.015$ $-0.038, 0.014$ $-0.041, 0.015$ $-0.058, 0.023$
$6<{\rm age}<8$ $-0.039, 0.012$ $-0.038, 0.012$ $-0.037, 0.011$ $-0.038, 0.011$ $-0.050, 0.016$
$8<{\rm age}<10$ $-0.031, 0.007$ $-0.032, 0.008$ $-0.028, 0.007$ $-0.030, 0.007$ $-0.032, 0.007$
${\rm age}>10$ $-0.022, 0.002$ $-0.025, 0.004$ $-0.012, 0.001$ $-0.020, 0.002$ $-0.020, 0.001$
All ages $-0.016, 0.003 $ $-0.027, 0.009$ $-0.012, 0.001 $ $-0.004, -0.004$ $-0.006, -0.003$
The negative radial metallicity gradient seen in the total population (leftmost upper panel) is strongly flattened with increasing $|z|$, and even reversed at $r<10$ kpc for the range $0.5<|z|<1.0$ kpc. By examining the variation in density of different age subsamples, we see that the change in slope with increasing $|z|$ is caused by the strong decrease of stars with ages $<6$ Gyr at $|z|>0.5$ kpc for $r\lesssim12$ kpc.
Focusing on \[Mg/Fe\] we find that the weak positive gradient for the total population (leftmost bottom panel in Fig. \[fig:grad\]) turns negative as distance from the disk plane increases, although the gradient of each individual age-bin is positive.
In contrast to the strong variation of chemical gradients with distance from the disk midplane for stars of all ages, the gradients of individual age groups do not vary significantly with distance from the plane (see Table 1). Gradients in the total population can vary strongly with $|z|$ because of the interplay between (i) the predominance of young stars close to the disk plane and old stars away from it (as illustrated by the rectangular symbols in Fig. \[fig:grad\]), (ii) the more concentrated older stellar component (as seen in Fig. \[fig:den\] below), and (iii) the flaring of mono-age disk populations (see @martig14a, Fig. 5).
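The gradients in Table 1 are slopes of linear fits of abundance against radius for each age and $|z|$ slice. A sketch with mock data follows; the $5<r<13$ kpc fitting range matches the interval quoted for the youngest bin, while the sample itself and the built-in slope are our own toy construction:

```python
import numpy as np

def radial_gradient(r, abundance, r_range=(5.0, 13.0)):
    """Gradient (dex/kpc) as the slope of a linear fit of abundance
    versus galactocentric radius over the fitted range, analogous to
    the values listed in Table 1."""
    sel = (r >= r_range[0]) & (r <= r_range[1])
    slope, _ = np.polyfit(r[sel], abundance[sel], 1)
    return slope

rng = np.random.default_rng(4)
r = rng.uniform(4.0, 14.0, 20000)
# mock young-population [Fe/H] with a built-in -0.06 dex/kpc gradient
feh = -0.06 * r + 0.3 + rng.normal(0.0, 0.1, 20000)
slope = radial_gradient(r, feh)
```

Applied per age group, this recovers the built-in slope; for the mixed-age population the recovered gradient would instead reflect the interplay of the three effects listed above.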
![image](den.eps){width="14cm"}
The above discussion suggests that the results of different Galactic surveys should be compared with care, taking into account the spatial and magnitude coverage of observational samples. Indeed, before large spectroscopic surveys were in place, most of the MW abundance gradients reported in the literature were obtained by using rather young populations, such as Cepheids (e.g., @andrievsky02 [@pedicelli09; @luck11; @lemasle13]), HII regions (e.g., @daflon04 [@stasinska12] and references therein) and the more numerous young open clusters (e.g., @jacobson11 [@yong12; @frinchaboy13]). The advantage of these young population tracers is that they cover a large radial disk extent and are mostly located in the disk midplane, i.e., lie in a very low $z$-range. It is clear from Fig. \[fig:grad\] that such young tracers will yield steeper metallicity gradients than a mixed population.
The situation changes when using field stars of mixed ages. In addition to the very local GCS, populations of mixed ages were covered in larger regions around the solar neighborhood thanks to RAVE and SEGUE. Using data from the latter two surveys, it became possible to infer abundance gradients of field stars at large distances from the plane (although most of the region near the Galactic plane, i.e., $|z|<$ 0.2-0.4 kpc, is not sampled in this case; see @cheng12a [@boeche13b; @boeche14]). Recently, abundance gradients for field stars (all ages), covering vertical distances from zero to beyond 3 kpc, are being estimated thanks to the SDSS-APOGEE survey [@hayden14; @anders14] – an infrared high-resolution survey that can observe stellar populations very close to the Galactic plane, filling the gap left by SEGUE and RAVE. Note that in GCS, RAVE, SEGUE, and APOGEE, the age mix is not only a strong function of the distance from the plane and from the Galactic center, but also dependent on the selection biases of each sample.
Given the above discussion, we do not show in the present paper a direct comparison of the magnitude of our predicted gradients with observations. The main focus here is to understand what drives the general shape of the abundance gradients, given the different mix of ages of the tracer populations at different heights above the plane. However, we can say that the \[Fe/H\] gradients predicted for our youngest population bin ($<2$ Gyr), in the range $5<r<13$ kpc (see Table 1), are in good agreement with the values of around $-0.06$ dex/kpc reported in the literature for Cepheids (e.g., @lemasle13) and young open clusters [@frinchaboy13]. For \[Mg/Fe\] we predict a positive gradient of $\sim0.03$ dex/kpc for stars with age $<2$ Gyr, which could be in slight tension with observations of young populations showing almost flat \[Mg/Fe\] gradients (e.g., @jacobson11). Our values for "all ages" also compare well with the recent values reported by [@boeche13b] and [@anders14], based on RAVE and APOGEE data, respectively, both for iron and \[$\alpha$/Fe\] gradients. A more detailed comparison with RAVE and APOGEE data is deferred to future work, where we will properly take into account the spatial and magnitude coverages (along with the expected sample biases).
It should also be kept in mind that the variations of \[Fe/H\] and \[Mg/Fe\] with radius are rarely well fitted by a single line in both observations and our model. Therefore, non-negligible variations in the estimated gradients should be expected with a change in the radial range used for fitting.
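To illustrate this sensitivity, the short Python sketch below fits a straight line to \[Fe/H\] versus radius over two different radial ranges. The mock data are purely hypothetical: the input gradient of $-0.06$ dex/kpc and the 0.2 dex scatter are assumptions for illustration, not output of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mock sample: galactocentric radii and [Fe/H] values with
# Gaussian scatter around an assumed underlying gradient of -0.06 dex/kpc.
r = rng.uniform(5.0, 16.0, 5000)
feh = -0.06 * (r - 8.0) + rng.normal(0.0, 0.2, r.size)

def gradient(r, feh, r_min, r_max):
    """Slope (dex/kpc) of a straight-line fit to [Fe/H] vs. r in [r_min, r_max]."""
    m = (r >= r_min) & (r <= r_max)
    slope, _ = np.polyfit(r[m], feh[m], 1)
    return slope

# The recovered gradient depends on the radial range chosen for the fit.
print(gradient(r, feh, 5.0, 13.0))  # close to the input -0.06
print(gradient(r, feh, 5.0, 10.0))  # also close here, by construction
```

With a genuinely non-linear \[Fe/H\]$(r)$ profile (as in both the model and the data), the two calls would return noticeably different slopes, which is the point made in the text.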
![image](gas_flow2.eps){width="17.cm"}
Disk scale-lengths at different heights above the plane {#sec:hd}
-------------------------------------------------------
We now examine the variation of disk scale-length with distance from the disk plane for populations grouped by common ages or chemistry.
The first row of Fig. \[fig:den\] shows stellar surface density as a function of galactic radius for stars with distance from the midplane $|z|<3$ kpc. In addition to the total mass (pink symbols), stars grouped by six bins of age (left panel), \[Mg/Fe\] (middle panel), and \[Fe/H\] (right panel) are shown by different colors. The corresponding bin values are indicated above each panel. The second and third rows of Fig. \[fig:den\] are the same as the first one, but for stars with $|z|<0.5$ and $0.5<|z|<3$ kpc, respectively. The color-coded values in the left column, $r_d$, indicate single exponential fits in the range $5<r<16.5$ kpc, shown by the dotted lines (dashed pink line for the total population).
The total density has $r_d=2.43$ kpc at $|z|<3$ kpc; however, depending on the age, values can range from $\sim1.7$ to $\sim3.1$ kpc. This smooth increase in scale-length with decreasing age is a manifestation of the disk's inside-out growth and is in agreement with the results of [@bovy12a], if we assume that mono-age populations correspond to mono-abundance populations. Note that despite the significant migration throughout the disk evolution (see Figures \[fig:rirf\] - \[fig:r0\]), older disks do not increase their scale-lengths fast enough to compete with the naturally larger scale-lengths of younger populations, which result from the changes in the SFR as a function of radius. Therefore, while migration [*does flatten*]{} surface density profiles (e.g., @foyle08 [@debattista06; @minchev12a]), this cannot overtake the effect of inside-out disk growth. Deviations from this rule for some intermediate-age populations were reported by [@martig14a], possibly related to satellite-disk interactions.
A decrease in scale-length with increasing age is seen for all three distances from the disk plane (Fig. \[fig:den\], left column). The youngest population shows the largest scale-length at $0.5<|z|<3$ kpc ($r_d=5.5$ kpc) and the smallest at $|z|<0.5$ kpc ($r_d=2.9$ kpc), which indicates that the fraction of young stars at larger radii increases with distance from the disk plane. This suggests that coeval younger populations should flare, i.e., increase their scale-height with radius. Indeed, this was shown by [@martig14a] for the same simulation we study here.
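The single-exponential fits quoted above can be sketched in a few lines of Python. The mock profile, its assumed scale-length of 2.5 kpc, and the noise level are illustrative choices, not values from the simulation; the trick is that an exponential profile is a straight line in log space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mock disk: surface density Sigma(r) ~ exp(-r/r_d), with an
# assumed true scale-length of 2.5 kpc, sampled in 0.5-kpc radial bins and
# perturbed by 2% multiplicative noise.
r_d_true = 2.5
r = np.arange(5.0, 16.5, 0.5)
sigma = 100.0 * np.exp(-r / r_d_true) * rng.normal(1.0, 0.02, r.size)

# A single-exponential fit is a straight line in log space:
# ln Sigma = const - r / r_d, so r_d = -1 / slope.
slope, intercept = np.polyfit(r, np.log(sigma), 1)
r_d_fit = -1.0 / slope
print(round(r_d_fit, 2))  # close to the input 2.5 kpc
```

When the true profile is not a single exponential (as for the lowest \[Mg/Fe\] bins discussed below), the recovered $r_d$ depends strongly on the radial range used for the fit.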
When stars are binned by \[Mg/Fe\] (top row, middle panel), we see a break in the trends we have found so far. While for narrow bins of \[Mg/Fe\] in the range 0.45-0.05 single exponentials can still be fit well at $r>5$ kpc, this is no longer true for lower \[Mg/Fe\] values. The lowest two bins (blue and black squares) deviate from the trend of increasing scale-length with decreasing \[Mg/Fe\]. A downturn at $\sim10$ and $\sim5$ kpc is found for the blue and black squares, respectively. For the sake of comparison, for these two bins we still fit single exponentials in the range $5<r<10$ kpc.
Similar difficulties with fitting single exponentials to the highest \[Fe/H\] bins are found in the right column of Fig. \[fig:den\]. The situation is very similar for stars with $|z|>0.5$ kpc (Fig. \[fig:den\], bottom row, middle panel).
As we move away from the disk plane (as in the SEGUE G-dwarf sample, for example), we find that samples binned by both \[Mg/Fe\] and \[Fe/H\] can be fit reasonably well by single exponentials. However, the correspondence between age and chemistry is no longer valid for high-\[Fe/H\], low-\[Mg/Fe\] stars, which do not show scale-lengths as large as those of the youngest stellar age groups.
Gas flows {#sec:gas}
---------
We now investigate how radial gas flows may affect our results. In our simulation stellar mass loss is accounted for by converting stellar particles to gas throughout the simulation (see @martig12). This can be used to estimate the effect of neglecting gas flows in our chemodynamical model, by studying the radial motion of this "enriched" gas. We considered the gas converted from stars at each time step and assigned chemistry to it as a function of time and disk radius, similarly to what we did for the stars in paper I.
Panel (a) in the top row of Fig. \[fig:gas\] shows the fraction of net gas flow per unit time into seven disk annuli of width 2 kpc, as a function of time. We estimate this as the difference between the mass of gas coming from the inner disk and that coming from the outer disk; therefore, negative values correspond to net inflows. Also shown in the top row are the fractions of gas migrating into each bin from inside (i.e., outward migrators, panel b), from outside (i.e., inward migrators, panel c), and the gas which does not leave the bins (panel d).
Typically, about 60-70% of the gas in each annulus does not migrate, and the contributions from the inner and outer disks are about 20-30% each, with a generally slightly larger fraction of gas migrating outward. Exceptions to this rule are found for the innermost and outermost annuli. Consequently, the net flows into the annuli considered are close to zero (panel a), except for the innermost and outermost disk regions. A larger fraction of outward-migrating gas is seen during the first couple of Gyr of disk formation, i.e., during the merger epoch.
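The bookkeeping behind these fractions can be sketched as follows. The gas masses below are hypothetical numbers, chosen only to illustrate the definitions of the net-flow and non-migrating fractions for a single annulus at one time step.

```python
# Hypothetical gas masses (arbitrary units) for one 2-kpc annulus:
m_from_inner = 20.0  # gas that migrated in from smaller radii (outward migrators)
m_from_outer = 15.0  # gas that migrated in from larger radii (inward migrators)
m_stayed = 65.0      # gas that never left the annulus

m_total = m_from_inner + m_from_outer + m_stayed

# Net flow into the annulus: inner-disk arrivals minus outer-disk arrivals;
# negative values would correspond to a net inflow toward the center.
net_flow_fraction = (m_from_inner - m_from_outer) / m_total
non_migrating_fraction = m_stayed / m_total

print(net_flow_fraction)        # 0.05 -- close to zero, as in panel (a)
print(non_migrating_fraction)   # 0.65 -- within the quoted 60-70%
```

The near-cancellation of the two migrating contributions is what keeps the net flow, and hence its chemical effect, small over most of the disk.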
In the bottom row of Fig. \[fig:gas\] we now show the mean metallicity estimated from the total gas mass (panel e), the gas arriving from inside the given bin (panel f), from outside the given bin (panel g), and from the gas which stays in the bin (panel h). The overlaid dashed curves in panels (e), (f), and (g) show the non-migrating gas (same as panel h).
As expected, strong deviations from the input metallicity (by more than 0.5 dex) are found for outward-only or inward-only flows. However, the net effect is drastically reduced, as seen in panel (e), where in the range $4<r<12$ kpc (purple to orange curves) the deviations from the input curves are less than $0.1$ dex during most of the time evolution. This is because the effects of inward and outward flows mostly cancel in the radial range $4<r<12$ kpc.
From the above discussion we conclude that our results for intermediate radial distance from the Galactic center (including for the solar neighborhood presented in paper I) are not affected to any significant level by the neglect of gas flows. However, gas flows may be affecting our results for the innermost and outermost radial bins we study in this paper.
We note that inflows of pristine (metal-poor) gas from cosmic web filaments may contribute at the disk outskirts by decreasing the metallicity there, as also reasoned by [@kubryk13]. This can counteract the effect of the radial-migration-induced gas flows to the disk's outer boundary seen in Fig. \[fig:gas\]. On the other hand, an increase in density in the inner 2 kpc, due to inward migration of both gas and stars, can be expected to enhance the star formation in that region. This in turn can result in higher chemical enrichment (higher metallicity), counteracting the flattening of metallicity gradients induced by migration (see @cavichia13). It can be argued, therefore, that the neglect of gas flows does not have a strong impact on our results. Further work is needed to investigate the above ideas.
Fraction and velocity dispersions of old/high-\[Mg/Fe\] stars at different radius and distance from the disk midplane {#sec:highmg}
=====================================================================================================================
We here consider the stars in our model with thick-disk-like chemistry. Similarly to [@fuhrmann11] (see his Fig. 15), we defined the thick disk by \[Mg/Fe\]$>0.2$ dex (he used \[$\alpha$/Fe\]) and considered the volume defined by $7.9<r<8.1$ kpc, $|z|<0.1$ kpc. This resulted in a local fraction of thick to total disk mass of 14%, which is somewhat lower than the value of 20% estimated by [@fuhrmann11] for his volume-complete local sample. As the vertical extent of the sample around the solar radius increases from $|z|<0.1$ kpc to $|z|<3$ kpc, the thick-disk mass fraction increases from 14% to 27%. These numbers are shown in Table 2.
We next divided the disk into inner ($4<r<8$ kpc) and outer parts ($8<r<15$ kpc), and considered four different maximum distances from the disk midplane, $|z|$. The thick to total disk mass fractions for different $(r,|z|)$ ranges are displayed in Table 2, where it can be seen that (1) close to the midplane there is hardly any variation, (2) as $|z|$ increases the fraction of high-\[Mg/Fe\] stars always increases, and (3) at $|z|>0.5$ kpc the inner disk contains more stars with thick-disk-like chemistry compared to the outer disk. The overall decrease of high-\[Mg/Fe\] stars in the outer disk is consistent with the results of recent observational studies (e.g., @bensby11 [@anders14; @bovy12a]), which have shown that the chemically defined Milky Way thick disk is more centrally concentrated than the thin disk. In our model this results naturally due to the smaller disk scale-lengths of older populations (and given the correspondence between age and \[Mg/Fe\]), which we discussed in Sec. \[sec:hd\].
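A minimal sketch of this chemistry-based selection is given below, using a hypothetical mock catalog: the distributions of $|z|$ and \[Mg/Fe\] are assumptions for illustration, not taken from the simulation, and equal-mass particles are assumed so that mass fractions reduce to number fractions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mock catalog: radius, |z|, and [Mg/Fe] of disk star particles,
# built so that [Mg/Fe] is on average higher far from the plane.
n = 100_000
r = rng.uniform(4.0, 15.0, n)
z = np.abs(rng.normal(0.0, 0.6, n))
mgfe = np.clip(rng.normal(0.12 + 0.1 * (z > 0.8), 0.1, n), -0.1, 0.5)

def thick_fraction(r, z, mgfe, r_min, r_max, z_max, cut=0.2):
    """Fraction of chemically 'thick' ([Mg/Fe] > cut) stars in an (r, |z|) box,
    assuming equal-mass particles."""
    box = (r > r_min) & (r < r_max) & (z < z_max)
    return np.count_nonzero(mgfe[box] > cut) / np.count_nonzero(box)

# The thick-disk-like fraction grows with the |z| extent of the sample,
# as in Table 2.
f_close = thick_fraction(r, z, mgfe, 4.0, 8.0, 0.1)
f_far = thick_fraction(r, z, mgfe, 4.0, 8.0, 3.0)
print(f_close < f_far)  # True
```

The same function evaluated over inner ($4<r<8$ kpc) and outer ($8<r<15$ kpc) boxes reproduces the kind of $(r,|z|)$ grid shown in Table 2.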
To see the difference between a chemistry- and an age-defined thick disk, in Table 2 we also list the fraction of stars with age $>9$ Gyr with respect to the total disk mass. We pick this age cut because it marks the last massive merger, as discussed in paper I and in Sec. \[sec:paper1\]. While at the solar radius separation by \[Mg/Fe\] and by age gives virtually the same thick to total disk mass fraction as a function of $|z|$, in the inner disk the age definition results in a $\sim30\%$ more massive thick disk, and in the outer disk in one about 20% less massive. This is to say that in our model the age-defined thick disk is more centrally concentrated than the chemically defined one. In both cases the thin disk is more extended.
We also estimated the vertical velocity dispersions of the chemistry- and age-defined thin and thick disks in each of the spatial regions defined above. These numbers are also given in Table 2. An increase in velocity dispersion is seen with decreasing Galactic radius, as expected. However, only small variations are found with vertical distance from the disk midplane (for a given radius), which is consistent with expectations for the Milky Way [@bovy12c].
While here we used a purely chemical or age definition of the thick disk, it must be kept in mind that the situation is more complicated than that, especially outside the solar neighborhood. It is still unclear how the centrally concentrated chemically/age defined thick disk in the Milky Way reconciles with the extended kinematically defined thick disks in observations of external galaxies.
\[tab:fr\]
--------------- -------------- ---------------------------------------------------------------------------------------------------------------------- --------------
Inner disk Near the solar radius Outer disk
$4<r<8$ kpc $7.9<r<8.1$ kpc $8<r<15$ kpc
Chemistry-defined thick disk: fr$_{\rm [Mg/Fe]>0.2}$, $\sigma_{\rm z,thin}$ \[km/s\], $\sigma_{\rm z,thick}$ \[km/s\]
$|z|<0.1$ kpc 0.14, 25, 53 0.14, 20, 36 0.13, 16, 33
$|z|<0.5$ kpc 0.20, 30, 54 0.16, 22, 43 0.16, 17, 34
$|z|<1.0$ kpc 0.26, 32, 56 0.22, 23, 42 0.19, 19, 35
$|z|<3.0$ kpc 0.30, 35, 58 0.27, 25, 48 0.25, 20, 40
Age-defined thick disk: fr$_{\rm age>9}$, $\sigma_{\rm z,thin}$ \[km/s\], $\sigma_{\rm z,thick}$ \[km/s\]
$|z|<0.1$ kpc 0.18, 22, 54 0.14, 19, 38 0.09, 16, 40
$|z|<0.5$ kpc 0.27, 26, 55 0.16, 20, 46 0.11, 17, 41
$|z|<1.0$ kpc 0.34, 28, 56 0.22, 21, 45 0.14, 18, 42
$|z|<3.0$ kpc 0.41, 29, 58 0.28, 22, 50 0.20, 19, 45
--------------- -------------- ---------------------------------------------------------------------------------------------------------------------- --------------
Conclusions
===========
In this work we investigated how chemo-kinematic relations change with position in the Galactic disk, by analyzing the chemo-dynamical model first introduced in [@mcm13] (paper I). The results of paper I were extended beyond the solar vicinity, considering both variations with Galactic radius and distance above the disk midplane. Our main results are as follows:
$\bullet$ We demonstrated that during mergers stars migrating outwards arrive significantly colder than the in-situ population (Fig. \[fig:rdsigz\]), as first suggested in paper I. This also has the important effect of working against disk flaring. Our result that stars migrating outwards cool the locally born population is in stark contrast to the suggestion that radial migration can form a thick disk. The need for massive mergers at high redshift, or for stars born hot in a turbulent phase (i.e., during the formation of the thick disk), has been indicated in a number of recent observational studies as well (e.g., @liu12 [@kordopatis13a; @minchev14a]).
$\bullet$ We investigated the effect of recycled gas flows and found that in the region $4<r<12$ kpc the introduced errors in \[Fe/H\] are less than 0.05-0.1 dex, related to the fact that inward and outward flows mostly cancel in that radial range.
$\bullet$ We show that radial migration cannot compete with the inside-out formation of the disk, which can be observed as the more centrally concentrated older disk populations (Fig. \[fig:den\]). This is in agreement with recent results from simulations of spiral galaxies (e.g., @brook12 [@stinson13; @bird13]) and observations in the MW [@bensby11; @bovy12a].
$\bullet$ Because contamination by radial migration and heating becomes more evident with increasing distance from the galactic center, significant flattening (with respect to the in-situ chemical evolution) in the age-metallicity relation is found outside the solar radius. However, at $r<9$ kpc the slope of the locally evolving population is mostly preserved, due to the opposite contribution of inward and outward migrators.
$\bullet$ We predict that the metallicity distributions of (unbiased) samples at different distances from the Galactic center peak at approximately the same value, \[Fe/H\] $\approx-0.15$ dex, and have similar metal-poor tails extending to \[Fe/H\] $\approx-1.3$ dex (Fig. \[fig:chem3\]). In contrast, the metal-rich tail decreases with increasing radius, thus giving rise to the expected decline of mean metallicity with radius. This effect results predominantly from stars close to the plane (Fig. \[fig:chem4\]).
$\bullet$ Similarly to \[Fe/H\], the \[Mg/Fe\] distribution always peaks at $\approx0.15$ dex, but its low-end tail is lost as radius increases, while the high-end tails off at \[Mg/Fe\] $\approx0.45$ dex.
$\bullet$ The radial metallicity and \[Mg/Fe\] gradients in our model show significant variations with height above the plane due to a different mixture of stellar ages (Fig. \[fig:grad\]). We find an inversion in the metallicity gradient from negative to weakly positive (at $r<10$ kpc), and from positive to negative for the \[Mg/Fe\] gradient, with increasing distance from the disk plane. We relate this to the disk inside-out formation, where older stellar populations are more centrally concentrated. This indicates the importance of considering the same spatial region for meaningful comparison between different studies, as well as observations and simulations.
$\bullet$ In contrast to the strong variation of chemical gradients with distance from the disk midplane for stars of all ages, the gradients of individual age groups do not vary significantly with distance from the plane (see Table 1). Gradients in the total population can vary strongly with $|z|$. We relate this to (i) the predominance of young stars close to the disk plane and old stars away from it, (ii) the more concentrated older stellar component, and (iii) the flaring of mono-age disk populations.
$\bullet$ The \[Fe/H\] distributions shift peaks from $\approx-0.15$ dex to $\approx-0.3$ dex when samples move from $|z|<0.5$ to $0.5<|z|<3$ kpc (Fig. \[fig:chem4\]). We showed that the strong effect on the high-\[Fe/H\] tail as a function of Galactic radius comes predominantly from the stars closer to the plane, related to both the chemical model used and the radial migration efficiency.
$\bullet$ Similarly to \[Fe/H\], the \[Mg/Fe\] distributions are also affected when varying the sample distance from the disk midplane. When changing $|z|<0.5$ to $0.5<|z|<3$ kpc, the \[Mg/Fe\] peaks shift from $\approx0.1$ dex to $\approx0.15-0.25$ dex, depending on the sample’s distance from the Galactic center.
Finally, we would like to emphasize an important dynamical effect that leads to a better understanding of what shapes chemo-kinematic relations. We showed in paper I and in this work that migration is significant in our simulation for both stars and gas, which in principle should prevent us from recovering the disk's chemical history. However, an interesting effect occurs away from the disk's inner and outer boundaries, in the region $4<r<12$ kpc ($\sim$1.5-4 scale-lengths). In this radial range material is exchanged across any given radius in such a way as to approximately preserve the kinematical and chemical properties of the stars and gas that do not migrate, such as stellar velocity dispersions [@minchev12b] and mean age-\[Fe/H\] relations (see Figures \[fig:chem\] and \[fig:gas\]). Evidence of migration can still be found as scatter in the stellar age-\[Fe/H\] relation or the presence of low-velocity-dispersion, \[$\alpha$/Fe\]-enhanced stars in the solar neighborhood [@minchev14a].
In contrast, at the disk boundaries, stars migrating inwards accumulate in the inner disk (innermost one scale-length) and those migrating outwards accumulate in the disk outskirts (outside 3-4 scale-lengths). These effects should be possible to identify in observations, for example hot populations in disk outskirts if extended disk profiles are caused by radial migration [@minchev12b] or flattening in metallicity gradients in the disk outskirts (Sec. \[sec:gas\]).
The results presented in this work provide tests for surveys such as RAVE, SEGUE, HERMES, GES, LAMOST, APOGEE, Gaia, WEAVE, and 4MOST. However, once again, we emphasize that direct comparison between the chemo-kinematic relations presented here and observational data may result in erroneous conclusions. For a proper comparison it is imperative to correct data for observational biases, as done for the SEGUE G-dwarf sample [@bovy12a]. Conversely, using the selection function of a given survey, mock observations can be extracted from our models and compared directly with the data, which is our chosen way of tackling this problem (Piffl et al., in preparation).
We thank the anonymous referee for helpful suggestions that have greatly improved the manuscript. We also thank G. Cescutti, B. Gibson, B. Famaey, M. Steinmetz, and R. de Jong for fruitful discussions.
, V. Z., [Figueira]{}, P., [Santos]{}, N. C., et al.: 2013, , A44
, C., [Majewski]{}, S. R., [Schiavon]{}, R., et al.: 2008, , 1018
, F., [Chiappini]{}, C., [Santiago]{}, B. X., et al.: 2014, , A115
, S. M., [Kovtyukh]{}, V. V., [Luck]{}, R. E., [L[é]{}pine]{}, J. R. D., [Maciel]{}, W. J., and [Beletsky]{}, Y. V.: 2002, , 491
, T., [Alves-Brito]{}, A., [Oey]{}, M. S., [Yong]{}, D., and [Mel[é]{}ndez]{}, J.: 2011, , L46
, T., [Feltzing]{}, S., and [Lundstr[ö]{}m]{}, I.: 2003, , 527
, J.: 2013, , 29
, J. C., [Kazantzidis]{}, S., [Weinberg]{}, D. H., [Guedes]{}, J., [Callegari]{}, S., [Mayer]{}, L., and [Madau]{}, P.: 2013, , 43
, C., [Chiappini]{}, C., [Minchev]{}, I., et al.: 2013a, , A19
, C., [Siebert]{}, A., [Piffl]{}, T., et al.: 2014,
, C., [Siebert]{}, A., [Piffl]{}, T., et al.: 2013b, , A59
, J., [Rix]{}, H.-W., and [Hogg]{}, D. W.: 2012a, , 131
, J., [Rix]{}, H.-W., [Liu]{}, C., [Hogg]{}, D. W., [Beers]{}, T. C., and [Lee]{}, Y. S.: 2012b, , 148
Bovy, J., Rix, H.-W., Hogg, D. W., et al. 2012c, , 755, 115
, C. B., [Stinson]{}, G. S., [Gibson]{}, B., et al.: 2012, , 690
, M., [Chiappini]{}, C., and [Pfenniger]{}, D.: 2011, , A75
, G., [Ng]{}, Y. K., and [Portinari]{}, L.: 1998, , 1045
, O., [Moll[á]{}]{}, M., [Costa]{}, R. D. D., and [Maciel]{}, W. J.: 2014, , 3688
, Y. Q., [Zhao]{}, G., [Carrell]{}, K., and [Zhao]{}, J. K.: 2011, , 184
, J. Y., [Rockosi]{}, C. M., [Morrison]{}, H. L., et al.: 2012a, , 51
, J. Y., [Rockosi]{}, C. M., [Morrison]{}, H. L., et al.: 2012b, , 149
, C., [Matteucci]{}, F., and [Gratton]{}, R.: 1997, , 765
, F., [Debbasch]{}, F., [Friedli]{}, D., and [Pfenniger]{}, D.: 1990, , 82
, S. and [Cunha]{}, K.: 2004, , 1115
, R. S., [Bellido-Tirado]{}, O., and [Chiappini]{}, C. e. a.: 2012, 8446
, V. P., [Mayer]{}, L., [Carollo]{}, C. M., [Moore]{}, B., [Wadsley]{}, J., and [Quinn]{}, T.: 2006, , 209
, W.: 2000, , 800
, P., [Haywood]{}, M., [Combes]{}, F., [Semelin]{}, B., and [Snaith]{}, O. N.: 2013, , A102
, E., [Vogelsberger]{}, M., and [Hernquist]{}, L.: 2013, , 34
, K., [Courteau]{}, S., and [Thacker]{}, R. J.: 2008, , 1821
, P., [Matteucci]{}, F., [Cayrel]{}, R., [Spite]{}, M., [Spite]{}, F., and [Chiappini]{}, C.: 2004, , 613
, K. and [Bland-Hawthorn]{}, J.: 2002, , 487
, K., [Bland-Hawthorn]{}, J., and [Barden]{}, S.: 2010, AAO Newsletter (February), in press
, P. M., [Thompson]{}, B., [Jackson]{}, K. M., et al.: 2013, , L1
, K.: 2011, , 2893
, F. A., [Minchev]{}, I., [O’Shea]{}, B. W., [Beers]{}, T. C., [Bullock]{}, J. S., and [Purcell]{}, C. W.: 2013, , 159
, F. A., [Minchev]{}, I., [O’Shea]{}, B. W., et al.: 2012a, , 3727
, F. A., [Minchev]{}, I., [Villalobos]{}, [Á]{}., [O’Shea]{}, B. W., and [Williams]{}, M. E. K.: 2012b, , 2163
, M. R., [Holtzman]{}, J. A., [Bovy]{}, J., et al.: 2014, , 116
, H. R., [Pilachowski]{}, C. A., and [Friel]{}, E. D.: 2011, , 59
, G., [Gilmore]{}, G., [Wyse]{}, R. F. G., [Steinmetz]{}, M., [Siebert]{}, A., [Bienaym[é]{}]{}, O., [McMillan]{}, P. J., [Minchev]{}, I., [Zwitter]{}, T., [Gibson]{}, B. K., [Seabroke]{}, G., [Grebel]{}, E. K., [Bland-Hawthorn]{}, J., [Boeche]{}, C., [Freeman]{}, K. C., [Munari]{}, U., [Navarro]{}, J. F., [Parker]{}, Q., [Reid]{}, W. A., and [Siviero]{}, A.: 2013, , 3231
, G., [Recio-Blanco]{}, A., [de Laverny]{}, P., [Gilmore]{}, G., [Hill]{}, V., [Wyse]{}, R. F. G., [Helmi]{}, A., [Bijaoui]{}, A., [Zoccali]{}, M., and [Bienaym[é]{}]{}, O.: 2011, , A107
, M., [Prantzos]{}, N., and [Athanassoula]{}, E.: 2013, , 1479
, Y. S., [Beers]{}, T. C., [Allende Prieto]{}, C., [Lai]{}, D. K., [Rockosi]{}, C. M., [Morrison]{}, H. L., [Johnson]{}, J. A., [An]{}, D., [Sivarani]{}, T., and [Yanny]{}, B.: 2011, , 90
, B., [Fran[ç]{}ois]{}, P., [Genovali]{}, K., et al.: 2013, , A31
, C. and [van de Ven]{}, G.: 2012, , 2144
, R. E. and [Lambert]{}, D. L.: 2011, , 136
, M., [Bournaud]{}, F., [Croton]{}, D. J., [Dekel]{}, A., and [Teyssier]{}, R.: 2012, , 26
, M., [Bournaud]{}, F., [Teyssier]{}, R., and [Dekel]{}, A.: 2009, , 250
, M., [Minchev]{}, I., and [Flynn]{}, C.: 2014a, , 2474
, M., [Minchev]{}, I., and [Flynn]{}, C.: 2014b,
, F. and [Tagger]{}, M.: 1997, , 442
, F.: 2012,
, I., [Boily]{}, C., [Siebert]{}, A., and [Bienayme]{}, O.: 2010, , 2122
, I., [Chiappini]{}, C., and [Martig]{}, M.: 2013, , A9
, I., [Chiappini]{}, C., [Martig]{}, M., [Steinmetz]{}, M., [de Jong]{}, R. S., [Boeche]{}, C., [Scannapieco]{}, C., [Zwitter]{}, T., [Wyse]{}, R. F. G., [Binney]{}, J. J., [Bland-Hawthorn]{}, J., [Bienayme]{}, O., [Famaey]{}, B., [Gibson]{}, B. K., [Grebel]{}, E. K., [Gilmore]{}, G., [Helmi]{}, A., [Kordopatis]{}, G., [Lee]{}, Y. S., [Munari]{}, U., [Navarro]{}, J. F., [Parker]{}, Q. A., [Quillen]{}, A. C., [Reid]{}, W. A., [Siebert]{}, A., [Siviero]{}, A., [Seabroke]{}, G., [Watson]{}, F., and [Williams]{}, M.: 2014, , L20
, I. and [Famaey]{}, B.: 2010, , 112
, I., [Famaey]{}, B., [Combes]{}, F., [Di Matteo]{}, P., [Mouhcine]{}, M., and [Wozniak]{}, H.: 2011, , 147
, I., [Famaey]{}, B., [Quillen]{}, A. C., [Dehnen]{}, W., [Martig]{}, M., and [Siebert]{}, A.: 2012a, , A127
, I., [Famaey]{}, B., [Quillen]{}, A. C., [Di Matteo]{}, P., [Combes]{}, F., [Vlaji[ć]{}]{}, M., [Erwin]{}, P., and [Bland-Hawthorn]{}, J.: 2012b, , A126
, I., [Nordhaus]{}, J., and [Quillen]{}, A. C.: 2007, , L31
, I., [Quillen]{}, A. C., [Williams]{}, M., [Freeman]{}, K. C., [Nordhaus]{}, J., [Siebert]{}, A., and [Bienaym[é]{}]{}, O.: 2009, , L56
, S., [Bono]{}, G., [Lemasle]{}, B., et al.: 2009, , 81
, M. A. C., [de Boer]{}, K. S., [Gilmore]{}, G., et al.: 2001, , 339
, A. C., [Dougherty]{}, J., [Bagley]{}, M. B., [Minchev]{}, I., and [Comparetta]{}, J.: 2011, , 762
, A. C., [Minchev]{}, I., [Sharma]{}, S., [Qin]{}, Y.-J., and [Di Matteo]{}, P.: 2014, , 1284
, P., [Reddy]{}, B. E., and [Lambert]{}, D. L.: 2012, , 3188
, A., [de Laverny]{}, P., [Kordopatis]{}, G., et al.: 2014, , A5
, H.-W. and [Bovy]{}, J.: 2013, , 61
, R., [Debattista]{}, V. P., [Quinn]{}, T. R., [Stinson]{}, G. S., and [Wadsley]{}, J.: 2008, , L79
, J. A. and [Binney]{}, J. J.: 2002, , 785
, G., [Prantzos]{}, N., [Meynet]{}, G., et al.: 2012, in [*EAS Publications Series*]{}, Vol. 54 of [*EAS Publications Series*]{}, pp 255–317
, M.: 2012, , 523
, G. S., [Bovy]{}, J., [Rix]{}, H.-W., et al.: 2013,
, C., [D’Onghia]{}, E., [Navarro]{}, J., and [Abadi]{}, M.: 2014,
, L. M., [Gardner]{}, S., [Yanny]{}, B., [Dodelson]{}, S., and [Chen]{}, H.-Y.: 2012, , L41
, B. and [Rockosi]{}, C., N. H. J. e. a.: 2009, , 4377
, D., [Carney]{}, B. W., and [Friel]{}, E. D.: 2012, , 95
Comparison between the chemical model and simulation SFHs
=========================================================
In Fig. \[fig:sfh\] we compare the SFH (as a function of cosmic time) of the simulation with that used for the chemical evolution model. Different colors correspond to different disk radii, as indicated. Good overall agreement is found for most radii. Some discrepancy can be seen at 2 and 4 kpc, approximately at $t>3$ and $t>5$ Gyr, respectively, where the simulation SFH is somewhat stronger. On the other hand, at early times the simulation SFH for $r\ge4$ kpc is lower than the chemical model SFH. This means that the disk's inside-out growth is somewhat delayed in the simulation. As discussed in paper I, we have assumed that these differences between the simulation and the chemical model are not crucial for the dynamics. However, we weight the stars in the simulation to satisfy the SFH of the chemical model, which is important for obtaining the correct mixture of populations corresponding to the chemistry.
![ Comparison between the SFH (as a function of cosmic time) of the simulation and the chemical evolution model. Different colors correspond to different disk radii, as indicated. Good overall agreement is found for most radii. []{data-label="fig:sfh"}](sfh2.eps){width="8cm"}
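The reweighting step described above can be sketched as follows. The particle birth times and the target SFH below are hypothetical stand-ins for the simulation output and the chemical-model SFH; the per-bin weight is simply the target mass fraction divided by the simulated one.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical birth times of simulation star particles (Gyr of cosmic time)
# and an assumed target SFH from the chemical model, tabulated in 1-Gyr bins.
t_birth = rng.uniform(0.0, 12.0, 50_000)
bins = np.arange(0.0, 13.0, 1.0)
sfh_model = np.exp(-bins[:-1] / 6.0)  # assumed target SFH shape
sfh_model /= sfh_model.sum()

# Per-bin weights: target mass fraction divided by simulated mass fraction.
counts, _ = np.histogram(t_birth, bins)
sfh_sim = counts / counts.sum()
w_bin = sfh_model / sfh_sim
weights = w_bin[np.digitize(t_birth, bins) - 1]

# After weighting, the simulated SFH reproduces the chemical-model SFH.
sfh_check, _ = np.histogram(t_birth, bins, weights=weights)
print(np.allclose(sfh_check / sfh_check.sum(), sfh_model))  # True
```

The same weights are then carried along when building any population statistic, so that the mixture of ages entering each chemical bin matches the chemical model.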
Variations in chemo-kinematic properties with radius and distance from the disk midplane {#sec:a1}
========================================================================================
Figures \[fig:chemz1\] and \[fig:chemz2\] present similar information to Figures \[fig:chem\] and \[fig:chem2\], but for two different distances from the disk midplane, as indicated. For each 2-kpc annulus, a sample close to the plane ($|z|<0.5$ kpc) and one farther from the plane ($0.5<|z|<3$ kpc) are plotted. The dashed-pink lines in the first two columns show the mean. The decrease in age with Galactic radius can be seen in the densest contour levels (especially at $0.5<|z|<3$ kpc), which do not extend to young ages for the inner radial bins. For the samples away from the plane, shifts in the distributions toward lower birth radii, lower metallicities, and higher \[Mg/Fe\] are seen, as expected for older populations.
We note that the \[Fe/H\]-\[Mg/Fe\] plane does not show a discontinuity, as frequently seen in observations. We demonstrated in paper I that kinematically biased samples can give rise to this division between the thin and thick disks in the solar vicinity. However, as currently debated in the literature [@fuhrmann11; @anders14; @bovy12b], this gap may not be an artifact, but a real feature reflecting variations in the SFH of the MW disk. If this is confirmed in observations, it will serve as an additional constraint for our model, requiring the modification of its SFH and early chemistry. Except for the chemical discontinuity, no significant changes to the results presented in paper I and in the current work are expected.
![image](chem_z1.eps){width="18cm"}
![image](chem_z2.eps){width="18cm"}
[^1]: These uncertainties are the same as those used for \[Fe/H\] and \[O/H\] in paper I, although there it was erroneously stated that $\pm0.05$ dex was the uncertainty in \[O/Fe\].
---
abstract: 'Oil and gas drilling is based, increasingly, on operational technology, whose cybersecurity is complicated by several challenges. We propose a graphical model for cybersecurity risk assessment based on Adversarial Risk Analysis to face those challenges. We also provide an example of the model in the context of an offshore drilling rig. The proposed model provides a more formal and comprehensive analysis of risks, still using the standard business language based on decisions, risks, and value.'
author:
- Aitor Couce Vieira
- Siv Hilde Houmb
- David Rios Insua
bibliography:
- 'ARAOGgramsec.bib'
title: A Graphical Adversarial Risk Analysis Model for Oil and Gas Drilling Cybersecurity
---
Introduction
============
Operational technology (OT) refers to hardware and software that detects or causes a change through the direct monitoring and/or control of physical devices, processes, and events in the enterprise [@gartner]. It includes technologies such as SCADA systems. Implementing OT and information technology (IT) typically leads to considerable improvements in industrial and business activities, by facilitating the mechanization, automation, and relocation of activities to remote control centers. These changes usually improve the safety of personnel, and both the cost-efficiency and overall effectiveness of operations.
The oil and gas industry (O&G) is increasingly adopting OT solutions, in particular in offshore drilling, through drilling control systems (drilling CS) and automation, which have been key innovations over the last few years. The potential of OT is particularly relevant for these activities: centralizing decision-making and supervisory activities at safer places with more and better information; substituting manual mechanical activities with automation; improving data through better and near real-time sensors; and optimizing drilling processes. In turn, these changes will reduce rig crews and dangerous operations, and improve operational efficiency, reducing operating costs (typically about \$300,000 per day).
Since much of the OT employed in O&G is computerized, it has become a major potential target for cyber attacks [@Shauk2013c], given its economic relevance and the large stakes at play. Indeed, we may face the loss of large oil reserves because of delayed maneuvers, the death of platform personnel, or large spills with major, potentially catastrophic, environmental impact. Moreover, it is expected that security attacks will soon target several production installations simultaneously with the purpose of sabotaging production, possibly taking advantage of extreme weather events, as well as attacks oriented towards manipulating or obtaining data or information. Cybersecurity poses several challenges, which are heightened in the context of operational technology. We sketch these challenges in the following section.
Cybersecurity Challenges in Operational Technology
--------------------------------------------------
Technical vulnerabilities in operational technology encompass most of those related to IT vulnerabilities [@byres2004myths], complex software [@DoDDefSci2013], and integration with external networks [@giani2009viking]. There are also specific OT vulnerabilities [@zhu2011taxonomy; @brenner2013eyes]. However, OT also has strengths in comparison with typical IT systems, such as simpler network dynamics.
Sound organizational cybersecurity is even more important with OT, given the risks that these systems bring in. Uncertainties are considerable, in both the economic and the technical sense [@anderson2010security]. Better data about intrusion attempts are therefore required to improve cybersecurity [@pfleeger2008cybersecurity], although gathering them is difficult, since organizations are reluctant to disclose such information [@ten2008vulnerability].
More formal approaches to controls and measures are needed to deal with advanced threat agents, such as assessing their attack patterns and behavior [@hutchins2011intelligence] or implementing intelligent sensor and control algorithms [@cardenas2008research]. An additional problem is that the metrics used in technical cybersecurity to evaluate risks usually tell little to those evaluating or making decisions at the organizational cybersecurity level. Understanding the consequences of a cyber attack on an OT system is difficult. They could include production losses or the inability to control a plant, multimillion financial losses, and even impacts on stock prices [@byres2004myths]. One of the key problems in understanding such consequences is that OT systems are also cyber-physical systems (CPS), encompassing both computational and complex physical elements [@thomas2013bad].
Risk management is also difficult in this context [@mulligan2011doctrine]. Even risk standards differ on how to interpret risk: some of them assess the probability of risk, while others focus on the vulnerability component [@hutchins2011intelligence]. Standards also tend to present oversimplifications that might alter the optimal decision or a proper understanding of the problem, such as the well-known shortcomings of the widely employed risk matrices [@cox2008matrix].
Cyber attacks are the continuation of physical attacks by digital means. They are less risky, cheaper, easier to replicate and coordinate, unconstrained by distance [@cardenas2009challenges], and they can be oriented towards causing high-impact consequences [@DoDDefSci2013]. It is also difficult to measure data related to attacks, such as their rate and severity, or the cost of recovery [@anderson2010security]. Examples include Stuxnet [@brenner2013eyes], Shamoon [@brenner2013eyes], and others [@cardenas2008research]. Non-targeted attacks can also be a problem.
Several kinds of highly skilled threat agents of different natures (e.g., military, hacktivists, criminal organizations, insiders, or even malware agents) can be found in the cyber environment [@DoDDefSci2013], all of them motivated and aware of the possibilities offered by OT [@byres2004myths]. Indeed, the concept of the Advanced Persistent Threat (APT) has arisen to name some of these threats [@Ltd2011]. The diversity of threats can be classified according to their attitude, skill, and time constraints [@dantu2007classification], or by their ability to exploit, discover, or even create vulnerabilities in the system [@DoDDefSci2013]. Consequently, a sound way to face them is to profile [@atzeni2011here] and treat [@li2009botnet] them as adversarial actors.
Related Work Addressing the Complexities of Cybersecurity Challenges
--------------------------------------------------------------------
Several approaches have been proposed to model attackers and attacks, including stochastic modelling [@muehrcke2010behavior; @sallhammar2007stochastic]; attack graph models [@kotenko2006attack] and attack trees [@mauw2006foundations]; models of directed and intelligent attacks [@ten2008vulnerability]; models based on the kill chain attack phases [@hutchins2011intelligence]; models of APT attack phases [@Ltd2011]; and frameworks incorporating some aspects of intentionality or a more comprehensive approach to risk, such as CORAS [@lund2011model] or ADVISE [@Advise2013].
Game theory has provided insights concerning the behavior of several types of attackers, such as cyber criminal APTs, and how to deal with them. The concept of incentives can unify a large variety of agent intents, whereas the concept of utility can integrate incentives and costs in such a way that agent objectives can be modeled in practice [@liu2005incentive]. Important insights from game theory are that the defender with the lowest protection level tends to be a target for rational attackers [@Johnson2011], that defenders tend to under-invest in cybersecurity [@amin2011interdependence], and that attacker target selection is costly and hard, and thus needs to be carefully carried out [@florencio2013all]. In addition to such general findings, some game-theoretic models exist for cybersecurity or are applicable to it, modelling static and dynamic games in all information contexts [@roy2010survey]. However, game-theoretic models have their limitations [@hamilton2002challenges; @roy2010survey], such as limited data, the difficulty of identifying the end goal of the attacker, the existence of a dynamic and continuous context, and the fact that they do not scale to the complexity of the real cybersecurity problems under consideration. Moreover, from the conceptual point of view, they require common knowledge assumptions that are not tenable in this type of application.
Additionally, several Bayesian models have been proposed for cybersecurity risk management, such as a model for network security risk analysis [@xie2010using]; a model representing nodes as events and arcs as successful attacks [@dantu2007classification]; a dynamic Bayesian model for continuously measuring network security [@frigault2008measuring]; a model for security risk management incorporating attacker capabilities and behavior [@dantu2009network]; or models for intrusion detection systems (IDS) [@balchanos2012probabilistic]. However, these models require forecasts of attacker behavior, which are hard to come by.
Adversarial Risk Analysis (ARA) [@rios2009adversarial] combines ideas from risk analysis, decision analysis, game theory, and Bayesian networks to help characterize the motivations and decisions of attackers. ARA is emerging as a main methodological development in this area [@merrick2011comparative], providing a powerful framework to model risk analysis situations with adversaries ready to act against us. Applications in physical security may be seen in [@sevillano2012adversarial].
Our Proposal
------------
The challenges facing OT, cybersecurity, and the O&G sector create a need for a practical, yet rigorous, approach to dealing with them. Work related to these challenges provides interesting insights and tools for specific issues. However, more formal but still understandable tools are needed to deal with such problems from a general point of view, without oversimplifying the underlying complexity of the problem. We propose a model for cybersecurity risk decisions based on ARA, taking into account attacker behavior. Additionally, an application of the model to drilling cybersecurity is presented, tailored to decision problems that may arise in offshore rigs employing drilling CS.
Model
=====
Introduction to Adversarial Risk Analysis
-----------------------------------------
ARA aims at providing one-sided prescriptive support to one of the intervening agents, the Defender (she), based on a subjective expected utility model, treating the decisions of the Attacker (he) as uncertainties. In order to predict the Attacker's actions, the Defender models her decision problem and tries to assess not only her own probabilities and utilities but also those of the Attacker, assuming that the adversary is an expected utility maximizer. Since she typically has uncertainty about those, she models it through random probabilities and utilities. She propagates such uncertainty to obtain the Attacker's optimal random attack, which she then uses to find her optimal defense.
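This propagation step can be sketched in a few lines of code. The following toy example (all distributions, ranges, and figures are our own illustrative assumptions, not values from this paper) simulates the Attacker's random optimal decision under the Defender's uncertainty about his probabilities and utilities, and then selects the defense that maximizes the Defender's expected utility:

```python
import random

# Toy ARA propagation (hypothetical numbers throughout): the Defender
# samples random Attacker probabilities/utilities, obtains his random
# optimal attack decision, and then picks her best defense.

def attacker_attack_probability(defense, n_samples=10_000, seed=0):
    """Defender's Monte Carlo forecast of P(attack | defense)."""
    rng = random.Random(seed)
    success_range = {"protect": (0.1, 0.4), "no_protect": (0.5, 0.9)}[defense]
    attacks = 0
    for _ in range(n_samples):
        p_success = rng.uniform(*success_range)  # random attack-success chance
        gain = rng.uniform(0.5, 1.0)             # Attacker's utility if successful
        cost = rng.uniform(0.1, 0.3)             # Attacker's utility cost of attacking
        if p_success * gain - cost > 0:          # attack beats "do nothing" (EU = 0)
            attacks += 1
    return attacks / n_samples

def defender_best_defense():
    """Defender maximizes her expected utility given the attack forecast."""
    protection_cost = {"protect": 0.15, "no_protect": 0.0}  # in utility units
    expected_loss = {"protect": 0.3, "no_protect": 0.8}     # loss given an attack
    return max(protection_cost,
               key=lambda d: -(protection_cost[d]
                               + attacker_attack_probability(d) * expected_loss[d]))
```

With these (made-up) numbers, leaving the system unprotected makes attacking worthwhile in almost every sampled scenario, so the forecast attack probability drives the Defender toward paying for protection.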
ARA enriches risk analysis in several ways. While traditional approaches provide information about risk to decision-making, ARA integrates decision-making within risk analysis. ARA assesses intentionality thoroughly, enabling the anticipation and even the manipulation of the Attacker's decisions. ARA incorporates stronger statistical and mathematical tools into risk analysis, permitting a more formal treatment of the other elements involved. It improves utility treatment and evaluation. Finally, an ARA graphical model improves the understandability of complex cases by visualizing the causal relations between nodes.
The main structuring and graphical tool for decision problems is the Multi-Agent Influence Diagram (MAID), a generalization of Bayesian networks. ARA is a decision methodology derived from influence diagrams, and it can be structured with the following basic elements:
- *Decisions or Actions*. Set of alternatives which can be implemented by the decision makers. They represent what one can do. They are characterized as decision nodes (rectangles).
- *Uncertain States*. Set of uncontrollable scenarios. They represent what could happen. They are characterized as uncertainty nodes (ovals).
- *Utility and Value*. Set of preferences over the consequences. They represent how the previous elements would affect the agents. They are characterized as value nodes (rhombi).
- *Agents*. Set of people involved in the decision problem: decision makers, experts and affected people. In this context, there are several agents with opposed interests. They are represented through different colors.
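These elements reduce to labeled nodes plus directed arcs, which can be encoded compactly in code. The sketch below is a small illustrative fragment (node names anticipate the template of the Graphical Model section; the structure shown is ours and is not the full diagram):

```python
# A MAID encoded as labeled nodes plus directed arcs. Each node carries a
# kind (decision/uncertainty/deterministic/value/utility) and an owner agent.

NODE_KINDS = {"decision", "uncertainty", "deterministic", "value", "utility"}

maid = {
    "nodes": {
        "DP": ("decision", "Defender"),       # Protect
        "AP": ("decision", "Attacker"),       # Perpetrate
        "UA": ("uncertainty", None),          # Attack
        "DC": ("deterministic", "Defender"),  # Defender Cost
        "DU": ("utility", "Defender"),        # Defender Utility
    },
    "arcs": [("DP", "UA"), ("AP", "UA"), ("UA", "DC"), ("DC", "DU")],
}

def parents(diagram, node):
    """Direct predecessors of a node, i.e., what it depends on."""
    return [src for src, dst in diagram["arcs"] if dst == node]
```

For example, `parents(maid, "UA")` returns `["DP", "AP"]`, mirroring the statement below that the Attack node depends on the Perpetrate and Protect decision nodes.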
We describe now the basic MAID that may serve as a template for cybersecurity problems in O&G drilling CS, developed using GeNIe [@genie].
Graphical Model
---------------
Our model captures the Defender's main cybersecurity decisions prior to an attack perpetrated by an APT that is strongly business-oriented. The behavior of such a cyber criminal organization suits utility-maximizing analysis, as it pursues monetary gains. A sabotage could also be performed by this type of agent, which could be hired to do the dirty work for a foreign power or rival company. We make several assumptions in the Model, to keep it concise:
- We assume one Defender. The Attacker nodes do not represent a specific attacker, but a generalization of potential criminal organizations representing business-oriented APTs, guided mostly by monetary incentives.
- We assume an atomic attack (the attacker makes one action), with several consequences, as well as several residual consequences once the risk treatment strategy is selected.
- The Defender and Attacker costs are deterministic nodes.
- We avoid detection-related activities or uncertainties to simplify the Model. Thus, the attack is always detected and the Defender is always able to respond to it.
- The scope of the Model is an assessment activity prior to any attack, as a risk assessment exercise to support incident handling planning.
- The agents are expected utility maximizers.
- The Model is discrete.
By adapting the proposed template in Figure 1, we may generalize most of the above assumptions to the cases at hand.
![image](pasted1)
**Figure 1**. MAID of the ARA Model for O&G drilling cybersecurity.
### Defender Decision and Utility Nodes
The Defender nodes, in white, are:
- *Protect* (*DP*) decision node. The Defender selects among security measures portfolios to increase protection against an Attack, e.g., access control, encryption, secure design, firewalls, or personal training and awareness.
- *Forensic System* (*DF*) decision node. The Defender selects among different security measures portfolios that may harm the Attacker, e.g., forensic activities that enable prosecution of the Attacker.
- *Residual Risk Treatment* (*DT*) decision node. This node models Defender actions after the assessment of the other decisions made by the Defender and the Attacker. They are based on the main risk treatment strategies excluding risk mitigation, as mitigation is carried out through the Protect and the Respond and Recovery nodes: avoiding, sharing, or accepting risk. This node must be preceded by the Protect decision node, and it must precede the Attack uncertainty node (the residual risk assessment is made in advance).
- *Respond and Recovery* (*DR*) decision node. The Defender selects between different response and recovery actions after the materialization of the attack, trying to mitigate the attack consequences. This will depend on the attack uncertainty node.
- *Defender Cost* (*DC*) deterministic node. The costs of the decisions made by the Defender are deterministic, as well as the monetary consequences of the attack (the uncertainty about such consequences is solved in the Monetary Consequences node). In a more sophisticated model, most of the costs could be modeled as uncertain nodes. This node depends on all decision nodes of the Defender and the Monetary Consequences uncertainty node.
- *Value Nodes* (*DCV* and *DHV*). The Defender evaluates the consequences and costs, taking into account her risk attitude. They depend on the particular nodes evaluated at each Value node.
- *Utility Node* (*DU*). This node merges the Value nodes of the Defender. It depends on the Defender's Value nodes.
The decision nodes are adapted to the typical risk management steps, incorporating ways of evaluating and managing a sound organizational cybersecurity strategy that takes into account the business implications of security controls, and preparing the evaluation of risk consequences. Related work (Section 1.2) on security costs and investments could incorporate further complexities underlying the above nodes.
### Attacker Decision and Utility Nodes
The Attacker nodes, in black, are:
- *Perpetrate* (*AP*) decision node. The [\[]{}generic[\]]{} Attacker decides whether he attacks or not. It could be useful to have a set of options for the same type of attack (e.g., preparing a quick and cheap attack, or a more elaborate one with a higher probability of success). It should be preceded by the Protect and Residual Risk Treatment decision nodes, and might be preceded by the Contextual Threat node (in case the Attacker observes it).
- *Attacker Cost* (*AC*) deterministic node. Cost of the Attacker decisions. Preceded by the Perpetrate decision node.
- *Value Nodes* (*AMV* and *ACV*). The Attacker evaluates the different consequences and costs, taking into account his risk attitude. They depend on the deterministic or uncertainty nodes evaluated at each Value node.
- *Utility Node* (*AU*). It merges the Value nodes of the Attacker into a final set of values. It must depend on the Attacker's Value nodes.
These nodes help in characterizing the Attacker, avoiding the oversimplification of other approaches. Additionally, the Defender has uncertainty about the Attacker probabilities and utilities. This is propagated over their nodes, affecting the Attacker expected utility and optimal alternatives, which are random. Such distribution over optimal alternatives is our forecast for the Attacker’s actions.
### Uncertainty Nodes
The uncertainty nodes in grey are:
- *Contextual Threats* (*UC*) uncertainty node. Threats (materialized or not) present during the attack, which the Attacker may exploit opportunistically (e.g., hurricanes or a critical moment during drilling).
- *Attack* (*UA*) uncertainty node. It represents the likelihood of the attack event, given its conditioning nodes. It depends on the Perpetrate decision node, and on the Protect decision node.
- *Consequences* (*UM* and *UH*) uncertainty nodes. They represent the likelihood of the different consequence levels that a successful attack may lead to. They depend on the Attack and Contextual Threat uncertainty nodes, and on the Respond and Recovery decision node.
- *Residual Consequences* (*URH*) uncertainty node. It represents the likelihood of different consequence levels after applying residual risk treatment actions. They depend on the Consequence node modelling the same type of impact (e.g., human, environmental, or reputation).
- *Counter-Attack* (*UCA*) uncertainty node. Possibility, enabled by a forensic system, to counter-attack and cause harm to the Attacker. Most of the impacts may be monetized. It depends on the Forensic System decision node.
Dealing with the uncertainties and complexities involved and obtaining a probability distribution for these nodes can be hard. Some of the methodologies and findings presented in Sections 1.1 and 1.2 are tailored to deal with some of these complexities. Using them, the Model proposed in this paper could help limit the uncertainties in cybersecurity elements such as vulnerabilities, controls, consequences, attacks, attacker behavior, and risks. This will enable simplification, through the proposed Model, without limiting the understanding of the complexities involved, and a sounder organizational cybersecurity.
Example
=======
We present a numerical example of the previous Model, tailored to a generic decision problem prototypical of a cybersecurity case that may arise in an O&G offshore rig using drilling CS. The model specifies a case in which the driller makes decisions to prevent and respond to a cyber attack perpetrated by a criminal organization with APT capabilities, in the context of offshore drilling and drilling CS. The data employed in this example are just plausible figures, helpful to provide an overview of the problems that drilling cybersecurity faces. Carrying out the assessment that the Model enables may be helpful for feeding a threat knowledge base, incident management procedures, or incident detection systems.
The context is that of an offshore drilling rig, a floating platform with equipment to drill a well through the seafloor in order to reach a hydrocarbon reservoir. Drilling operations are dangerous, and several incidents may happen in the few months (usually between 2 and 4) that the entire operation may last. As OT, drilling CS may face most of the challenges presented in Section 1.1 (including being connected to enterprise networks, an entry path for attackers) in the context of the high-risk incidents that occur in offshore drilling.
Agent Decisions
---------------
### Defender Decisions
The Defender has to make three decisions in advance of the potential attack. In the Protect decision node (DP), the Defender must decide whether she invests in additional protection: if the Defender implements additional protective measures, the system will be less vulnerable to attacks. In the Forensic System decision node (DF), the Defender must decide whether she implements a forensic system or not. Implementing it enables the option of identifying the Attacker and pursuing legal or counter-hacking actions against him. The Residual Risk Treatment decision node (DT) represents additional risk treatment strategies that the Defender is able to implement: avoiding (aborting the entire drilling operation to elude the attack), sharing (buying insurance to cover the monetary losses of the attack), and accepting the risk (inheriting all the consequences of the attack, conditional on the mitigation decisions of DP, DF, and DR).
Additionally, the Respond and Recovery decision node (DR) represents the Defender's decision between continuing and stopping the drilling operations as a reaction to the attack. Continuing the drilling may worsen the consequences of the attack, whereas stopping the drilling will incur higher costs due to holding operations. This is a major issue for drilling CS. In general, critical equipment should not be stopped, since core operations or even the safety of the equipment or the crew may be compromised.
### Attacker Decisions
For simplicity, in the Perpetrate decision node (AP) the Attacker decides whether he perpetrates the attack or not, although further attack options could be added. In this example, the attack aims at manipulating the devices directly controlling physical systems, with the purpose of compromising drilling operations or harming equipment, the well, the reservoir, or even people.
Threat Outcomes and Uncertainty
-------------------------------
### Outcomes and Uncertainty during the Incident
The Contextual Threats uncertainty node (UC) represents the existence of riskier conditions in the drilling operations (e.g., bad weather or one of the usual incidents during drilling), which can clearly worsen the consequences of the attack. In this scenario, the Attacker is able to know, to some extent, these contextual threats (e.g., through a weather forecast, or a previous hack of the drilling CS that permits the Attacker to read what is going on in the rig).
The Attack uncertainty node (UA) represents the chances of the Attacker causing the incident. If the Attacker decides not to execute his action, no attack event will happen. However, in case of perpetration, the chances of a successful attack will be lower if the Defender invests in protective measures (DP node). An additional uncertainty arises in case of materialization of the attack: the possibility of identifying and counter-attacking the Attacker, represented by the Counter-Attack uncertainty node (UCA).
If the attack happens, the Defender will have to deal with different consequence scenarios. The Monetary (UM) and Human Consequences (UH) nodes represent the chances of the different consequence or impact levels that the Defender may face. The monetary consequences refer to all impacts that can be measured as monetary losses, whereas the human consequences represent casualties that may occur during an incident or normal operations. However, the Defender has the option to react to the attack by deciding whether she continues or stops the drilling (DR node). If the Defender decides to stop, there will be lower chances of casualties and lower chances of the worst monetary consequences (e.g., loss of assets or compensation for injuries or deaths), but she will have to assume the costs of keeping the rig held (one day in our example) to deal with the cyber threat.
### Outcomes and Uncertainty in Risk Management Process
The previous uncertainties appear after the Attacker's decision to attack or not. The Defender faces additional relevant uncertainties. She must make a decision between avoiding, sharing, or accepting the risk (DT node). This decision will determine the final, or residual, consequences. The final monetary consequences are modeled through the Defender Cost deterministic node (DC), whose outcome represents the cost of the different Defender decisions (nodes DP, DF, DT, and DR). In case of accepting or sharing the risk, the outcome of the DC node will also inherit the monetary consequences of the attack (UM node). Similarly, the outcome of the Residual Human Consequences uncertainty node (URH) is conditioned by the risk treatment decision (DT node) and, in case of accepting or sharing the risk, it will inherit the human consequences of the attack (UH node). If the Defender decides to avoid the risk, she will assume the cost of aborting the entire drilling operation, and the crew will face a regular death risk rather than the higher death risk of offshore operations. If the Defender shares the risk, she will assume the same casualties as in UH and a fixed insurance payment, but she will avoid paying high monetary consequences. Finally, in case the Defender accepts the risk, she will inherit the consequences from the UM and UH nodes.
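A deterministic node such as DC is simply a function of its parents. The sketch below encodes the cost logic just described; all monetary figures, and the small retained loss under risk sharing, are invented placeholders rather than values from the annex:

```python
# Sketch of the Defender Cost (DC) deterministic node: the monetary outcome
# is a function of the Defender's decisions (DP, DF, DT, DR) plus, under
# "share" or "accept", consequences inherited from the UM node.
# All monetary figures below are illustrative placeholders.

def defender_cost(protect, forensic, treatment, stop_drilling, um_loss):
    """Deterministic cost of the Defender's decisions, in dollars."""
    cost = 0.0
    if protect:
        cost += 500_000            # DP: additional protection portfolio
    if forensic:
        cost += 200_000            # DF: forensic system
    if stop_drilling:
        cost += 300_000            # DR: one day of held rig operations
    if treatment == "avoid":
        cost += 5_000_000          # DT: abort the entire drilling operation
    elif treatment == "share":
        cost += 400_000            # DT: fixed insurance premium...
        cost += min(um_loss, 100_000)  # ...plus an assumed retained loss (deductible)
    elif treatment == "accept":
        cost += um_loss            # DT: inherit the monetary consequences (UM)
    else:
        raise ValueError("treatment must be 'avoid', 'share' or 'accept'")
    return cost
```

Under "accept", the node passes the UM loss through unchanged; under "share", the insurance premium replaces all but a small retained part of it, matching the inheritance rules above.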
The Attacker Cost deterministic node (AC) provides the costs (non-uncertain by assumption) of the decision made by the Attacker. Since he only has two options (perpetrate or not), the node has only two outcomes: cost or no cost. This node could be eliminated, but we keep it to preserve the business semantics within the graphical model.
Agent Preferences
-----------------
The Defender aims at maximizing her expected utility, with an additive utility function, through the Defender Utility node (DU). The Defender's key objective is minimizing casualties, but she also considers minimizing her costs (in this example we assume she is risk-neutral). Each objective has its own weight in the utility function.
The objective of the Attacker is to maximize his expected utility, represented by an additive utility function, through the Attacker Utility node (AU). The Attacker's key objective is maximizing the monetary consequences for the Defender. We assume that he is risk-averse towards this monetary impact (he prefers ensuring a lower impact to risking the operation in the attempt to obtain a higher impact). He also considers minimizing his costs (i.e., the costs of perpetrating the attack and of being identified). Each of these objectives has its own weight in the utility function, and its own value function. The Attacker does not care about eventual victims.
Uncertainty about the Opponent Decisions
----------------------------------------
The Attacker is able to know, to some extent, the protective decisions of the Defender (DP node), gathering information while he tries to gain access to the drilling CS. While knowing whether the Defender avoided the risk (aborting all drilling operations) is easy, knowing whether the Defender chose to share or accept the risk is difficult. The most important factor, the decision between continuing or stopping drilling in case of an attack, can be assessed by observing industry or company practices. The Defender may also be able to assess how frequent similar attacks are, or how attractive the drilling rig is for this kind of attacker. In ARA, and from the Defender's perspective, the AP node would be an uncertainty node whose values should be provided by assessing the probabilities of the different attack actions, through analyzing the decision problem from the Attacker's perspective and obtaining his random optimal alternative.
Example Values
--------------
An annex provides the probability tables of the different uncertainty nodes employed to simulate the example in GeNIe (Tables 1 to 7). It also provides the different parameters employed in the utility and value functions (Tables 8 to 10). Additionally, the risk-averse values for AMV are obtained with $AMV=\sqrt[3]{\frac{DC}{10^{7}}}$; the risk-neutral values for DCV are obtained with $DCV=1-\frac{DC}{10^{7}}$; and the values for DHV are 0 in case of victims and 1 in case of no victims.
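The three value functions can be transcribed directly into code (the function names are ours; $DC$ is the Defender cost in dollars, assumed here to range over $[0, 10^{7}]$):

```python
# The value functions from the text: a risk-averse cube-root value for the
# Attacker (AMV), a risk-neutral linear value for the Defender cost (DCV),
# and a binary human-consequences value (DHV).

def amv(dc):
    """Risk-averse Attacker value of the Defender's monetary loss."""
    return (dc / 1e7) ** (1.0 / 3.0)

def dcv(dc):
    """Risk-neutral Defender value of her own cost."""
    return 1.0 - dc / 1e7

def dhv(victims):
    """Defender human-consequences value: 0 with victims, 1 without."""
    return 0.0 if victims else 1.0
```

The cube root gives the Attacker diminishing returns in the inflicted loss: halving the value (from 1 to 0.5) requires cutting the Defender's loss by a factor of eight, which is exactly the risk-averse shape described in Section 3.3.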
Evaluation of Decisions
-----------------------
Based on the solution of the example, we may say that the Attacker should not perpetrate his action in case he believes the Defender will avoid or share the risk. However, the Attacker may be interested in perpetrating his action in case he believes that the Defender is accepting the risk. Additionally, the fewer preventive measures the Defender implements (DP and DT nodes), the more motivated the Attacker would be (if he thinks the Defender is sharing the risk). The Attacker's expected utilities are listed in Table 11 in the Annex. In this example, the Defender will choose not to implement additional protection (DP node) and not to deploy a forensic system (DF node). If the Defender believes that she is going to be attacked, she would prefer to share the risk (DT node) and stop drilling after the incident (DR node). In case she believes that there will be no attack, she should accept the risk and continue drilling. The Defender's expected utilities are listed in Table 12 in the Annex.
Thus, the Defender's optimal decisions create a situation in which the Attacker is more interested in perpetrating the attack. Therefore, to affect the Attacker's behavior, the Defender should convey the image that her organization is concerned with safety, and especially that it is going to share risks. On the other hand, if the Attacker perceives that the Defender pays no attention to safety or that she is going to accept the risk, he will try to carry out his attack. The ARA solution for the Defender is the following:
1. Assess the problem from the point of view of the Attacker. The DT and DR nodes become uncertainty nodes, since the Defender's decisions are uncertain to the Attacker. The Defender must model such nodes in the way she thinks the Attacker models these uncertainties. In general, perpetrating an attack is more attractive when the Attacker strongly believes that the Defender is going to accept the risk or is going to continue drilling.
2. Once the Attacker's decision has been forecast, the Defender should choose between sharing and accepting the risk. Accepting the risk is better than sharing it in case of no attack, but worse in case of attack.
Thus, the key factor in optimizing the Defender's decision is her estimate of the uncertainty nodes that represent the DT and DR nodes for the Attacker. These nodes determine the Attacker's best decision, and that decision in turn determines the Defender's best decision.
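To make step 1 concrete, the following minimal sketch (with purely hypothetical utilities and probabilities, not taken from the example data) computes the Attacker's expected utility of attacking when the Defender's DT and DR choices are modeled as uncertainty nodes from the Attacker's viewpoint:

```python
# Minimal sketch of the ARA forecasting step: the Defender treats her own
# choices (DT: share/accept the risk, DR: continue/stop drilling) as random
# from the Attacker's viewpoint and computes the Attacker's expected
# utility of attacking. All numerical values below are hypothetical.

# Attacker's utility u_A(attack, dt, dr); illustrative values only.
u_attack = {("share", "continue"): 2.0, ("share", "stop"): 5.0,
            ("accept", "continue"): 8.0, ("accept", "stop"): 6.0}
u_no_attack = 1.0  # baseline utility of not attacking

# Defender's assessment of how the Attacker perceives her behaviour.
p_dt = {"share": 0.7, "accept": 0.3}   # beliefs over the DT node
p_dr = {"continue": 0.4, "stop": 0.6}  # beliefs over the DR node

eu_attack = sum(p_dt[dt] * p_dr[dr] * u_attack[(dt, dr)]
                for dt in p_dt for dr in p_dr)

# The Defender forecasts an attack iff attacking beats not attacking.
attack_forecast = eu_attack > u_no_attack
print(eu_attack, attack_forecast)
```

With these hypothetical figures the Attacker's expected utility of attacking exceeds the baseline, so the Defender would forecast an attack and plan her DT and DR choices accordingly.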
Conclusions and Further Work
============================
We have presented the real problem, and the extreme consequences, that OT cybersecurity in general, and drilling cybersecurity in particular, are facing. We have also explained some of the issues that complicate cybersecurity, especially in OT systems. The proposed graphical model provides a more comprehensive, formal and rigorous risk analysis for cybersecurity. It is also a suitable tool, capable of being fed by, or made compatible with, other more specific models such as those described in Section 1.
Multi-Agent Influence Diagrams provide a formal and understandable way of dealing with complex interactive issues. In particular, they have high value as business tools, since their nodes translate the problem directly into business language: decisions, risks and value. Typical tools employed in widely used risk standards, such as risk matrices, oversimplify the problem and limit understanding. The proposed ARA-based model provides a business-friendly interpretation of the risk management process without oversimplifying its underlying complexity.
The ARA approach allows us to incorporate some of the findings of game theory applied to cybersecurity, and also to obtain new ones. The model makes the problem easier to understand while remaining formal, since causes and consequences are clearly represented and the common-knowledge assumptions of game theory are avoided.
Our model offers a richer approach to assessing risk than risk matrices, while retaining the language of security and risk management. In addition, it is more interactive and modular: nodes can be split into more specific ones. The proposed model may still seem quite formal to business users. However, data can be characterized using ordinal values (e.g., when we only know that one item is more likely or more valuable than another), using methods taken from traditional risk management, employing expert opinion, or using worst-case figures considered realistic. The analysis would be poorer, but much more operational.
Using the nodes of the proposed model as building blocks, the model could gain in comprehensiveness by adding more attackers or attacks, more specific decision nodes, more uncertainty nodes, or additional consequence nodes, such as environmental impact or reputation. Other operations with a clear business interpretation can also be performed, such as sensitivity analysis (how much decision-makers should trust a figure) or strength-of-influence analysis (which elements are key).
Its application is not free of difficulties and uncertainties, but the same is true of other approaches. Further work is needed to verify and validate the model and its procedures (in a similar way to the validation of other ARA-based models [@RiosInsua2013]), and to identify the applicability and usability issues that may arise. The model could gain usability by presenting only the information relevant to decision-makers (roughly, decisions and consequences) rather than the entire model.
[^1]
Appendix: Tables with Example Data {#appendix-tables-with-example-data .unnumbered}
==================================
[^1]: **Acknowledgments**\
- Work supported by the EU’s FP7 Seconomics project 285223\
- David Rios Insua is grateful for the support of the MINECO Riesgos project and the Riesgos-CM program
---
abstract: 'Evolution algebras are non-associative algebras inspired from biological phenomena, with applications to or connections with different mathematical fields. There are two natural ways to define an evolution algebra associated to a given graph. While one takes into account only the adjacencies of the graph, the other includes probabilities related to the symmetric random walk on the same graph. In this work we state new properties related to the relation between these algebras, which is one of the open problems in the interplay between evolution algebras and graphs. On the one hand, we show that for any graph both algebras are strongly isotopic. On the other hand, we provide conditions under which these algebras are or are not isomorphic. For the case of [finite]{} non-singular graphs we provide a complete description of the problem, while for the case of [finite]{} singular graphs we state a conjecture supported by examples and partial results. [The case of graphs with an infinite number of vertices is also discussed.]{} As a sideline [of our work]{}, we revisit a result existing in the literature about the identification of the automorphism group of an evolution algebra, and we give an improved version of it.'
address: ' Paula Cadavid, Pablo M. Rodriguez Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo Caixa Postal 668, 13560-970 São Carlos, SP, Brazil e-mails: pacadavid@usp.br, pablor@icmc.usp.br Mary Luz Rodiño Montoya Instituto de Matemáticas - Universidad de Antioquia Calle 67 N$^{\circ}$ 53-108, Medellín, Colombia e-mail: mary.rodino@udea.edu.co '
author:
- 'Paula Cadavid, Mary Luz Rodiño Montoya and Pablo M. Rodríguez'
title: On the isomorphisms between evolution algebras of graphs and random walks
---
Introduction
============
In this paper we study evolution algebras, a new type of non-associative algebras. These algebras were introduced around ten years ago by Tian [@tian] and were motivated by the evolution laws of genetics. With this application in mind, if one thinks of alleles as generators of algebras, then reproduction in genetics is represented by multiplication in the algebra. The best general reference on the subject is [@tian], where the reader can find a review of preliminary definitions and properties, connections with other fields of mathematics, and a list of interesting open problems, some of which remain unsolved so far. We refer the reader also to [@tian2] for an update of open problems in the Theory of Evolution Algebras, and to [@PMP]-[@Falcon/Falcon/Nunez/2017] and references therein for an overview of recent results on this topic. Formally, an evolution algebra is defined as follows.
\[def:evolalg\] Let $\A:=(\A,\cdot\,)$ be an algebra over a field $\mathbb{K}$. We say that $\A$ is an evolution algebra if it admits a countable basis $S:=\{e_1,e_2,\ldots , e_n,\ldots\}$, such that
$$\label{eq:ea}
\begin{array}{ll}
e_i \cdot e_i =\displaystyle \sum_{k} c_{ik} e_k,&\text{for any }i,\\[.3cm]
e_i \cdot e_j =0,&\text{if }i\neq j.
\end{array}$$
The scalars $c_{ik}\in \mathbb{K}$ are called the structure constants of $\mathcal{A}$ relative to $S$.
A basis $S$ satisfying the relations above is called a natural basis of $\mathcal{A}$. $\mathcal{A}$ is real if $\mathbb{K}=\mathbb{R}$, and it is nonnegative if it is real and the structure constants $c_{ik}$ are nonnegative. In what follows, we always assume that $\mathcal{A}$ is real. In addition, if $0\leq c_{ik}\leq 1$, and
$$\sum_{k=1}^{\infty}c_{ik}=1,$$
for any $i$, then $\A$ is called a Markov evolution algebra. In this case there is an interesting correspondence between the algebra $\A$ and a discrete-time Markov chain $(X_n)_{n\geq 0}$ with state space $\{x_1,x_2,\ldots,x_n,\ldots\}$ and transition probabilities given by: $$\label{eq:tranprob}
\nonumber c_{ik}:=\mathbb{P}(X_{n+1}=x_k|X_{n}=x_i),$$ for $i,k\in \mathbb{N}^*$ and any $n\in \mathbb{N}$, where $\mathbb{N}^*:=\mathbb{N}\setminus \{0\}$. For the sake of completeness we remind the reader that a discrete-time Markov chain is a sequence of random variables $X_0, X_1, X_2, \ldots, X_n, \ldots$, defined on the same probability space $(\Omega,\mathcal{B},\mathbb{P})$, taking values in the same set $\mathcal{X}$, and satisfying the Markov property, i.e., for any set of values $\{i_0, \ldots, i_{n-1},x_i, x_{k}\} \subset \mathcal{X}$ and any $n\in \mathbb{N}$, it holds that $$\mathbb{P}(X_{n+1}=x_k|X_0 = i_0, \ldots, X_{n-1}=i_{n-1}, X_{n}=x_i)=\mathbb{P}(X_{n+1}=x_k|X_{n}=x_i).$$ In this correspondence between the evolution algebra $\A$ and the Markov chain $(X_n)_{n\geq 0}$, each state of $\mathcal{X}$ is identified with a generator of $S$. For more details about the formulation and properties of Markov chains we refer the reader to [@karlin/taylor; @ross]. In addition, for a review of results on the connection between Markov chains and evolution algebras we suggest [@tian Chapter 4].
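This correspondence is easy to make concrete. The sketch below uses a toy three-state chain with hypothetical transition probabilities (not taken from the paper): the row-stochastic matrix is the table of structure constants, and the evolution product then makes $e_i\cdot e_i$ return the $i$th row of transition probabilities.

```python
# Sketch: in a Markov evolution algebra the structure constants are the
# one-step transition probabilities c_ik = P(X_{n+1}=x_k | X_n=x_i).
# Toy 3-state chain with hypothetical probabilities.
P = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.2, 0.3, 0.5]]

def evol_product(u, v, C):
    """(sum u_i e_i)·(sum v_j e_j) = sum_i u_i v_i e_i^2, e_i^2 = sum_k C[i][k] e_k."""
    n = len(C)
    return [sum(u[i] * v[i] * C[i][k] for i in range(n)) for k in range(n)]

n = len(P)
e = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]

# e_i · e_i reproduces row i of P; e_i · e_j = 0 for i != j.
assert evol_product(e[0], e[0], P) == P[0]
assert evol_product(e[0], e[1], P) == [0.0, 0.0, 0.0]
# Markov condition: each row of structure constants sums to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```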
In this work we are interested in studying evolution algebras related to graphs, in a sense to be specified later. This interplay between evolution algebras and graphs has attracted the attention of many researchers in recent years. For a review of recent results see, for instance, [@PMP; @PMP2; @camacho/gomez/omirov/turdibaev/2013; @Elduque/Labra/2015; @nunez/2014; @nunez/2013], and references therein. The rest of the section is subdivided into two parts. In the first one we review some standard notation of Graph Theory, while in the second we define two different evolution algebras associated to a given graph. One of the open questions in the Theory of Evolution Algebras is to understand the relation between these two induced algebras; the purpose of this paper is to advance on this question.
Basic notation of Graph Theory
------------------------------
A graph $G$ with $n$ vertices is a pair $(V,E)$ where $V:=\{1,\ldots,n\}$ is the set of vertices and $E:=\{(i,j)\in V\times V:i\leq j\}$ is the set of edges. If $(i,j)\in E$ or $(j,i)\in E$ we say that $i$ and $j$ are neighbors; we denote the set of neighbors of vertex $i$ by $\mathcal{N}(i)$ and the cardinality of this set by $\deg(i)$. Our definitions, as well as our results, except when indicated, also hold for graphs with an infinite number of vertices, i.e. when $V$ is a countable set and $|V|=\infty$. In that case we additionally assume that the graph is locally finite, i.e. $\deg(i)< \infty$ for any $i\in V$. In general, if $U\subseteq V$, we denote $\mathcal{N}(U) :=\{ j \in V : j\in \mathcal{N}(i) \text{ for some } i\in U\}$. We say that $G$ is a $d$-regular graph if $\deg(i) = d$ for any $i\in V$ and some positive integer $d$. We say that $G$ is a bipartite graph if its vertices can be divided into two disjoint sets, $V_1$ and $V_2$, such that every edge connects a vertex in $V_1$ to one in $V_2$. If $V_1$ has $m$ vertices, $V_2$ has $n$ vertices, and every possible edge connecting vertices in different subsets is part of the graph, we call $G$ a complete bipartite graph and denote it by $K_{m,n}$. Moreover, we say that $G$ is a biregular graph if it is a bipartite graph $G=(V_1,V_2,E)$ in which every two vertices on the same side of the given bipartition have the same degree. In this case, if the degree of the vertices in $V_1$ is $d_1$ and the degree of the vertices in $V_2$ is $d_2$, then we say that the graph is $(d_{1},d_2)$-biregular (see Fig. \[fig:bipartite\]). We notice that the family of biregular graphs includes any finite graph which may be seen as a bipartite graph with partitions $V_1$ and $V_2$ of sizes $m$ and $n$ respectively, for $m,n\geq 1$, such that $\deg(i) =d_1$ if $i\in V_1$ and $\deg(i) =d_2$ if $i\in V_2$, where $d_1,d_2 \in \mathbb{N}$ satisfy $m\, d_1 = n\, d_2$; see Fig. \[fig:bipartite\](b).
In addition, the class of biregular graphs includes some families of infinite graphs like $2$-periodic trees (see Fig. \[fig:tree\](a)) and $\mathbb{Z}^2$-periodic graphs with hexagonal lattice (see Fig. \[fig:tree\](b)).
The adjacency matrix of a given graph $G$, denoted by $A:=A(G)$, is an $n\times n$ symmetric matrix $(a_{ij})$ such that $a_{ij}=1$ if $i$ and $j$ are neighbors and $0$, otherwise. Then, we can write $\mathcal{N}(k)=\{\ell \in V: a_{k\ell}=1\},$ for any $k$. Note that the adjacency matrix for infinite graphs is well defined. A graph is said to be singular if its adjacency matrix $A$ is a singular matrix ($\det A =0$), otherwise the graph is said to be non-singular. All the graphs we consider are connected, i.e. for any $i,j\in V$ there exists a positive integer $n$ and a sequence of vertices $\gamma=(i_0,i_1,i_2,\ldots,i_n)$ such that $i_0=i$, $i_n=j$ and $(i_k,i_{k+1})\in E$ for all $k\in\{0,1,\ldots,n-1\}$. The sequence $\gamma$ is called a path connecting $i$ to $j$ with size $n$. The distance between two vertices $i$ and $j$, denoted by $d(i,j)$, is the size, i.e. number of edges, in the shortest path connecting them. For simplicity, we consider only graphs which are simple, i.e. without multiple edges or loops.
The evolution algebras associated to a graph
--------------------------------------------
The evolution algebra induced by a graph $G$ is defined in [@tian Section 6.1] as follows.
\[def:eagraph\] Let $G=(V,E)$ be a graph with adjacency matrix given by $A=(a_{ij})$. The evolution algebra associated to $G$ is the algebra $\A(G)$ with natural basis $S=\{e_i: i\in V\}$, and relations
$$\begin{array}{ll}\displaystyle
e_i \cdot e_i = \sum_{k\in V} a_{ik} e_k,&\text{for }i \in V,\\[.3cm]
\end{array}$$ and $e_i \cdot e_j =0,\text{if }i\neq j.$
Another way of stating the relation for $e_i \cdot e_i$, for $i\in V$, is to say $e_i^2 = \sum_{k\in \mathcal{N}(i)} e_k$.
Let $G$ be the $(2,3)$-biregular graph with $10$ vertices of Fig. \[fig:bipartite\]. Then $\A(G)$ has natural basis $S=\{e_1,\ldots,e_{10}\}$, and relations
$$\begin{array}{c}
\mathcal{A}(G): \left\{
\begin{array}{lllll}
e_1^2= e_5 + e_8 +e_{10}, & e_2^2=e_5 +e_6 + e_9,&e_3^2= e_6+ e_7+ e_9, & e_4^2=e_7 + e_8 +e_{10},\\[.2cm]
e_5^2=e_1 + e_2, & e_6^2=e_2+e_3,& e_7^2 = e_3 + e_4, & e_8^2=e_1+ e_4,\\[.2cm]
e_9^2=e_2+ e_3,& e_{10}^2=e_1+ e_4, & e_i \cdot e_j =0, i\neq j . \\[.2cm]
\end{array}\right.
\end{array}$$
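For illustration, the relations above can be generated and checked programmatically. In the sketch below the neighborhood sets are read off the squares $e_i^2$ listed above; the code verifies that the adjacency is symmetric and that the graph is indeed $(2,3)$-biregular.

```python
# Sketch: reconstructing the relations of A(G) for the (2,3)-biregular
# graph of the example. The neighborhood sets are read off the squares
# e_i^2 listed above, e.g. e_1^2 = e_5 + e_8 + e_10 gives N(1) = {5, 8, 10}.
neighbors = {1: {5, 8, 10}, 2: {5, 6, 9}, 3: {6, 7, 9}, 4: {7, 8, 10},
             5: {1, 2}, 6: {2, 3}, 7: {3, 4}, 8: {1, 4},
             9: {2, 3}, 10: {1, 4}}

# Adjacency must be symmetric: j in N(i) iff i in N(j).
assert all(i in neighbors[j] for i, ns in neighbors.items() for j in ns)

# (2,3)-biregular: vertices 1..4 have degree 3, vertices 5..10 degree 2.
degrees = {i: len(ns) for i, ns in neighbors.items()}
assert all(degrees[i] == 3 for i in range(1, 5))
assert all(degrees[i] == 2 for i in range(5, 11))

# e_i^2 = sum over k in N(i) of e_k; for instance e_1^2 = e_5 + e_8 + e_10.
print(sorted(neighbors[1]))  # → [5, 8, 10]
```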
There is a second natural way to define an evolution algebra associated to $G=(V,E)$: the one induced by the symmetric random walk (SRW) on $G$. The SRW is a discrete-time Markov chain $(X_n)_{n\geq 0}$ with state space $V$ and transition probabilities given by $$\mathbb{P}(X_{n+1}=k|X_{n}=i)=\frac{a_{ik}}{\deg(i)},$$ where $i,k\in V$, $n\in \mathbb{N}$ and, as defined before, $\deg(i)=\sum_{k\in V} a_{ik}.$ Roughly speaking, the sequence of random variables $(X_n)_{n\geq 0}$ records the positions of a particle walking on the vertices of $G$; at each discrete time step the next position is chosen uniformly at random from the neighbors of the current one. Since the SRW is a discrete-time Markov chain, we may define its associated Markov evolution algebra.
Let $G=(V,E)$ be a graph with adjacency matrix given by $A=(a_{ij})$. We define the evolution algebra associated to the SRW on $G$ as the algebra $\A_{RW}(G)$ with natural basis $S=\{e_i: i\in V\}$, and relations given by $$\begin{array}{ll}\displaystyle
e_i \cdot e_i = \sum_{k\in V}\left( \frac{a_{ik}}{\deg(i)}\right)e_k,&\text{for }i \in V,
\end{array}$$ and $e_i \cdot e_j =0, \text{ if } i\neq j.$
Consider again $G$ as being the $(2,3)$-biregular graph with $10$ vertices of Fig. \[fig:bipartite\]. Then $\A_{RW}(G)$ has natural basis $S=\{e_1,\ldots,e_{10}\}$, and relations
$$\begin{array}{c}
\mathcal{A}_{RW}(G): \left\{
\begin{array}{lllll}
e_1^2= \frac{1}{3}(e_5 + e_8 +e_{10}), & e_2^2= \frac{1}{3}(e_5 +e_6 + e_9),&e_3^2= \frac{1}{3}(e_6+ e_7+ e_9), & \\[.3cm]
e_4^2= \frac{1}{3}(e_7 + e_8 +e_{10}),& e_5^2=\frac{1}{2} (e_1 + e_2), & e_6^2=\frac{1}{2}(e_2+e_3),\\[.3cm]
e_7^2 =\frac{1}{2}( e_3 + e_4), & e_8^2= \frac{1}{2}(e_1+ e_4),& e_9^2= \frac{1}{2}(e_2+ e_3),& \\[.3cm]
e_{10}^2=\frac{1}{2}(e_1+ e_4), & e_i \cdot e_j =0, i\neq j . \\[.3cm]
\end{array}\right.
\end{array}$$
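The structure constants of $\A_{RW}(G)$ can likewise be generated from the adjacency structure, dividing each row by the corresponding degree. A short sketch, using the $(2,3)$-biregular graph of the example above (neighborhoods read off the listed squares):

```python
# Sketch: the structure constants of A_RW(G) are a_ik / deg(i).
# Neighborhood sets of the (2,3)-biregular graph from the example.
neighbors = {1: {5, 8, 10}, 2: {5, 6, 9}, 3: {6, 7, 9}, 4: {7, 8, 10},
             5: {1, 2}, 6: {2, 3}, 7: {3, 4}, 8: {1, 4},
             9: {2, 3}, 10: {1, 4}}
V = sorted(neighbors)

C = {i: {k: (1.0 / len(neighbors[i]) if k in neighbors[i] else 0.0)
         for k in V}
     for i in V}

# Each row is a probability distribution: the SRW transition probabilities.
assert all(abs(sum(C[i].values()) - 1.0) < 1e-12 for i in V)
# e_5^2 = (1/2)(e_1 + e_2), matching the relations listed above.
assert C[5][1] == 0.5 and C[5][2] == 0.5 and C[5][3] == 0.0
```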
The aim of this paper is to contribute to the discussion of the relation between the algebras $\A_{RW}(G)$ and $\A(G)$ for a given graph $G$. We emphasize that this is one of the open problems stated in [@tian; @tian2], which has been partially addressed in [@PMP]. Our approach is to state conditions under which we can guarantee the existence, or not, of isomorphisms between these evolution algebras.
Isomorphisms
============
Main results
------------
Before addressing the existence of isomorphisms between $\A_{RW}(G)$ and $\A(G)$ for a given graph $G$, we start with a more general concept, the isotopism of algebras, introduced by Albert [@albert] as a generalization of isomorphism. This notion has recently been applied in [@Falcon/Falcon/Nunez/2017] to the study of two-dimensional evolution algebras.
\[def:isoto\] [@Falcon/Falcon/Nunez/2017 Section 2.1] Let $\mathcal{A}$ and $\mathcal{B}$ be two evolution algebras over a field $\mathbb{K}$, and let $S=\{e_i: i\in V\}$ be a natural basis for $\mathcal{A}$. We say that a triple $(f,g,h)$, where $f,g,h$ are three non-singular $\mathbb{K}$-linear transformations from $\mathcal{A}$ into $\mathcal{B}$ is an isotopism if $$f(u)\cdot g(v) = h(u\cdot v),\;\;\; \text{ for all }u, v \in\mathcal{A}.$$ In this case we say that $\mathcal{A}$ and $\mathcal{B}$ are isotopic. In addition, the triple is called
1. a strong isotopism if $f=g$ and we say that the algebras are strongly isotopic;
2. an isomorphism if $f=g=h$ and we say that the algebras are isomorphic.
In the case of an isomorphism we write $f$ instead of $(f, f, f)$. Being isotopic, strongly isotopic or isomorphic are equivalence relations among algebras, and we denote these three relations, respectively, by $\sim$, $\simeq$ and $\cong$. The concept of isotopism allows a first formal connection between $\A_{RW}(G)$ and $\A(G)$.
For any graph $G$, $\A(G)\simeq \A_{RW}(G)$.
Consider two $\mathbb{K}$-linear maps, $f$ and $h$, from $\A(G)$ to $\A_{RW}(G)$ defined by $$f(e_i)=\sqrt{\deg(i)}\,e_i,\;\;\;\text{ and }\;\;\;h(e_i)=e_i,\;\;\;\text{ for all }i\in V.$$ Then, for $i\neq j$, $f(e_i)\cdot f(e_j)=\sqrt{\deg(i) \deg(j)} \left(e_i\cdot e_j \right)=0 = h(e_i\cdot e_j).$ On the other hand, for any $i\in V$, we have $$f(e_i)\cdot f(e_i)=\deg(i) \, e_i^2 =\deg(i) \sum_{k\in V}\left( \frac{a_{ik}}{\deg(i)}\right)e_k =\sum_{k\in V} a_{ik} \, e_k,$$ while $$h(e_i^2)=h\left(\sum_{k\in V} a_{ik}\,e_k\right)=\sum_{k\in V} a_{ik} \,e_k,$$ and the proof is completed.
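The strong isotopism $(f,f,h)$ above can be verified numerically on a small example. The sketch below (using the path graph on three vertices, an assumed toy example not taken from the paper) encodes the squares $e_i^2$ in both algebras and checks that $f(e_i)\cdot f(e_i)$, computed in $\A_{RW}(G)$, coincides with $h(e_i^2)$ computed from $\A(G)$.

```python
# Numerical check of the strong isotopism (f, f, h): f(e_i) = sqrt(deg(i)) e_i
# and h(e_i) = e_i, so f(e_i)·f(e_i) = deg(i) e_i^2 evaluated in A_RW(G)
# should equal h(e_i^2) read off A(G). Path graph 0 - 1 - 2.
neighbors = {0: {1}, 1: {0, 2}, 2: {1}}
V = sorted(neighbors)
deg = {i: len(neighbors[i]) for i in V}

def square_AG(i):       # e_i^2 in A(G), as a coefficient vector over V
    return [1.0 if k in neighbors[i] else 0.0 for k in V]

def square_ARW(i):      # e_i^2 in A_RW(G)
    return [(1.0 / deg[i]) if k in neighbors[i] else 0.0 for k in V]

for i in V:
    lhs = [deg[i] * c for c in square_ARW(i)]  # f(e_i)·f(e_i) in A_RW(G)
    rhs = square_AG(i)                         # h(e_i^2), h identity on basis
    assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```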
Our next step is to obtain conditions on $G$ under which isomorphisms between $\A_{RW}(G)$ and $\A(G)$ do or do not exist. This issue has been considered recently in [@PMP] for some well-known families of graphs. However, general results on this question are still needed. The main result of the present work is a complete characterization of the problem for the case of finite non-singular graphs.
\[theo:criterio\] Let $G$ be a finite non-singular graph. $\A_{RW}(G)\cong \A(G)$ if, and only if, $G$ is a regular or a biregular graph. Moreover, if $\A_{RW}(G)\ncong \A(G)$ then the only homomorphism between them is the null map.
We are restricting our attention to the existence or not of algebra isomorphisms in the sense of Definition \[def:isoto\](ii). We emphasize that our results can easily be adapted to deal with evolution homomorphisms or evolution isomorphisms. According to Tian [@tian], the concept of evolution homomorphism is that of algebra homomorphism with an additional condition. More precisely, if $ \mathcal{A}$ and $\mathcal{B}$ are two evolution algebras over a field $\mathbb{K}$ and $S=\{e_i: i\in V\}$ is a natural basis for $ \mathcal{A}$, then [@tian Definition 4] says that a linear transformation $g: \mathcal{A} \longrightarrow \mathcal{B}$ is an evolution homomorphism if $g(a\cdot b)=g(a)\cdot g(b)$ for all $a,b \in \mathcal{A} $ and $\{g(e_i):i\in V\}$ can be complemented to a natural basis for $\mathcal{B}$. Furthermore, an evolution homomorphism which is one-to-one and onto is an evolution isomorphism. Using the terminology of [@Cabrera/Siles/Velasco], we can rewrite Tian's definition by saying that an evolution homomorphism $g: \mathcal{A} \longrightarrow \mathcal{B}$ between evolution algebras $ \mathcal{A}$ and $\mathcal{B}$ is a homomorphism such that the evolution algebra $\Im(g)$ has the extension property.
The proof of Theorem \[theo:criterio\] relies on a mix of results which hold for general graphs, i.e. not necessarily finite and non-singular ones, together with a description of the isomorphisms in the case of finite non-singular graphs. For the sake of clarity we leave the proof to the next section. In what follows we discuss some examples.
Friendship graph $F_n$. Let us consider the friendship graph $F_n$, which is a finite graph with $2n+1$ vertices and $3n$ edges, constructed by joining $n$ copies of the triangle graph at a common vertex (see Figure \[fig:friendhsipproof\]). We shall see that $\rank (A)= 2n+1$, i.e. that $F_n$ is non-singular. Since the graph is neither regular nor biregular, Theorem \[theo:criterio\] then implies that if $f:\mathcal{A}(G)\longrightarrow \mathcal{A}_{RW}(G)$ is a homomorphism, then $f$ is the null map. This result was stated in [@PMP Proposition 3.4], and it is therefore a corollary of our Theorem \[theo:criterio\].
We assume the vertices of $F_n$ labelled as in Figure \[fig:friendhsipproof\], with the central vertex labelled by $2n+1$.
\[fig:friendhsipproof\]
*(Figure: the friendship graph $F_n$, with outer vertices $1,\ldots,2n$ forming $n$ triangles through the central vertex $2n+1$.)*
Then the adjacency matrix $A$ of the graph has elements $$a_{ij}=\left\{
\begin{array}{cl}
1,& \text{ if }i\text{ is odd (even) and }j=i+1\; (j=i-1),\\[.2cm]
1,& \text{ if } i=2n+1\text{ and }j\in\{1,\ldots, 2n\},\text{ or }j=2n+1\text{ and }i\in\{1,\ldots,2n\},\\[.2cm]
0,& \text{ otherwise.}
\end{array}\right.$$
That is, $A$ is given by
$$\begin{bmatrix}
0&1&0&0&\cdots&0&1\\[.2cm]
1&0&0&0&\cdots &0&1\\[.2cm]
0&0&0&1&\cdots &0&1\\[.2cm]
\vdots & \vdots& \vdots&\vdots &\ddots & \vdots&\vdots\\[.2cm]
0&0&0&0&\cdots &0&1\\[.2cm]
1&1&1&1&\cdots &1&0
\end{bmatrix}.$$
Denote by $C_i$ the $i$th column of the matrix $A$, for $i\in\{1,\ldots,2n+1\}$, and assume that $\sum_{i=1}^{2n+1} \alpha_i C_i = 0,$ where the $\alpha_i$ are scalars. Now it is not difficult to see that the following equations hold: $$\begin{aligned}
\alpha_k + \alpha_{2n+1} = 0,& \text{ for }k\in\{1,\ldots,2n\},\label{eq:frien1}\\[.2cm]
\sum_{i=1}^{2n}\alpha_i =0.\label{eq:frien2}&\end{aligned}$$ Thus, summing the first set of equations over $k$ we obtain $\sum_{i=1}^{2n}\alpha_i + 2n\, \alpha_{2n+1}=0$ which, together with the second equation, implies $\alpha_{2n+1}=0$. We can then conclude, again from the first set of equations, that $\alpha_k =0$ for $k\in\{1,\ldots,2n\}$ as well. This in turn implies that $\{C_1,\ldots,C_{2n+1}\}$ is a linearly independent set of vectors, and hence $\rank(A)=2n+1$; that is, $F_n$ is non-singular.
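As a quick numerical sanity check of this linear-independence argument, the following sketch (assuming `numpy` is available) builds the adjacency matrix of $F_n$ for small $n$ and verifies that it has full rank $2n+1$ and non-zero determinant.

```python
# Sketch: checking numerically that the friendship graph F_n is
# non-singular, in line with the linear-independence argument above.
import numpy as np

def friendship_adjacency(n):
    m = 2 * n + 1
    A = np.zeros((m, m), dtype=int)
    for t in range(n):                 # pair the outer vertices into triangles
        i, j = 2 * t, 2 * t + 1
        A[i, j] = A[j, i] = 1
    A[:-1, -1] = A[-1, :-1] = 1        # center joined to all outer vertices
    return A

for n in range(1, 6):
    A = friendship_adjacency(n)
    assert np.linalg.matrix_rank(A) == 2 * n + 1
    assert abs(np.linalg.det(A)) > 1e-9
```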
A natural question is whether the result stated in Theorem \[theo:criterio\] also holds for finite singular graphs. In the sequel we provide some examples suggesting a positive answer.
\[exa:regular\] Consider the $3$-regular graph $G$ represented as in Fig. 2.2.
*(Figure: a $3$-regular graph on ten vertices, two blocks of five vertices joined by the edge between vertices $1$ and $6$.)*
The evolution algebras induced by $G$, and by the random walk on $G$, respectively, have natural basis $\{e_1,\ldots,e_{10}\}$ and relations given by:

$$\begin{array}{cc}
\mathcal{A}(G): \left\{
\begin{array}{l}
e_1^2= e_2 + e_5 + e_6,\\[.2cm]
e_2^2=e_5^2= e_1 + e_3 + e_4,\\[.2cm]
e_3^2=e_2 + e_4+ e_5,\\[.2cm]
e_4^2=e_2 + e_3+ e_5,\\[.2cm]
e_6^2=e_1+e_7+e_{10},\\[.2cm]
e_7^2 = e_{10}^2=e_6 + e_8 + e_9,\\[.2cm]
e_8^2=e_7 + e_9 + e_{10},\\[.2cm]
e_9^2=e_7 + e_8 + e_{10},\\[.2cm]
e_i \cdot e_j =0, i\neq j,
\end{array}\right.
&
\mathcal{A}_{RW}(G): \left\{
\begin{array}{l}
e_1^2= \frac{1}{3}\,e_2 +\frac{1}{3}\, e_5 +\frac{1}{3}\, e_6,\\[.2cm]
e_2^2=e_5^2=\frac{1}{3}\, e_1 +\frac{1}{3}\, e_3 +\frac{1}{3}\, e_4,\\[.2cm]
e_3^2=\frac{1}{3}\,e_2 + \frac{1}{3}\,e_4+ \frac{1}{3}\,e_5,\\[.2cm]
e_4^2=\frac{1}{3}\,e_2 +\frac{1}{3}\, e_3+\frac{1}{3}\, e_5,\\[.2cm]
e_6^2=\frac{1}{3}\,e_1+\frac{1}{3}\,e_7+\frac{1}{3}\,e_{10},\\[.2cm]
e_7^2 = e_{10}^2=\frac{1}{3}\,e_6 + \frac{1}{3}\,e_8 +\frac{1}{3}\, e_9,\\[.2cm]
e_8^2=\frac{1}{3}\,e_7 + \frac{1}{3}\,e_9 + \frac{1}{3}\,e_{10},\\[.2cm]
e_9^2=\frac{1}{3}\,e_7 + \frac{1}{3}\,e_8 + \frac{1}{3}\,e_{10},\\[.2cm]
e_i \cdot e_j =0, i\neq j.
\end{array}\right.
\end{array}$$
Note that $\mathcal{N}(2)=\mathcal{N}(5)=\{1,3,4\}$ and $\mathcal{N}(7)=\mathcal{N}(10)=\{6,8,9\}$, which implies $\det A =0$. Moreover, as $G$ is a $3$-regular graph, we have by [@PMP Theorem 3.2(i)] that $\mathcal{A}_{RW}(G) \cong \mathcal{A}(G)$. It is not difficult to see that the map $f:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$ defined by $f(e_i) = (1/3)\, e_i$, for any $i\in V$, is an isomorphism.
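The claim about $f$ can be checked numerically. The sketch below encodes the neighborhood structure of the graph above and verifies $f(e_i)\cdot f(e_i)=f(e_i^2)$ coefficient by coefficient.

```python
# Numerical check (a sketch) that f(e_i) = (1/3) e_i is a homomorphism
# from A_RW(G) to A(G) for the 3-regular graph of this example.
neighbors = {1: {2, 5, 6}, 2: {1, 3, 4}, 3: {2, 4, 5}, 4: {2, 3, 5},
             5: {1, 3, 4}, 6: {1, 7, 10}, 7: {6, 8, 9}, 8: {7, 9, 10},
             9: {7, 8, 10}, 10: {6, 8, 9}}
V = sorted(neighbors)
c = 1.0 / 3.0

for i in V:
    # f(e_i)·f(e_i) in A(G): c^2 * sum_{k in N(i)} e_k
    lhs = {k: c * c for k in neighbors[i]}
    # f(e_i^2): the image of (1/3) sum_{k in N(i)} e_k is (1/3)c sum e_k
    rhs = {k: c / 3.0 for k in neighbors[i]}
    assert all(abs(lhs[k] - rhs[k]) < 1e-12 for k in neighbors[i])
```

The check works precisely because $c^2 = c/3$ forces $c = 1/3$, matching the argument in the text.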
\[exa:bipartite\] Consider the complete bipartite graph $K_{m,n}$ with partitions of sizes $m\geq 1$ and $n\geq 1$, respectively, and assume that the set of vertices is partitioned into the two subsets $V_1:=\{1,\ldots,m\}$ and $V_2:=\{m+1,\ldots,m+n\}$ (see Fig. 2.3).
\[FIG:bipartite\]
*(Figure: the complete bipartite graph $K_{6,3}$, with $V_1=\{1,\ldots,6\}$ and $V_2=\{7,8,9\}$.)*
It is not difficult to see that $\det A =0$ whenever $m+n\geq 3$, since $\rank(A)=2$. The associated evolution algebras $\mathcal{A}_{RW}(K_{m,n})$ and $\mathcal{A}(K_{m,n})$ are described in [@PMP]. Indeed, by [@PMP Theorem 3.2(ii)] we have that $\mathcal{A}_{RW}(K_{m,n}) \cong \mathcal{A}(K_{m,n})$. Moreover, let $f_{\pi}:\mathcal{A}_{RW}(K_{m,n})\longrightarrow \mathcal{A}(K_{m,n})$ be defined by $$f_{\pi}(e_i)=\left\{
\begin{array}{ll}
m^{-1/3}n^{-2/3}\, e_{\pi(i)}, &\text{for }i\in V_1;\\[.2cm]
m^{-2/3}n^{-1/3}\, e_{\pi(i)}, &\text{for }i\in V_2,
\end{array}\right.$$ where $\pi \in S_{m+n}$ is such that $\pi(i)\in V_1$ if, and only if, $i\in V_1$. Then $f_{\pi}$ is an isomorphism.
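One can check directly that these exponents solve the constraints imposed by multiplicativity. Writing $f_{\pi}(e_i)=c_1 e_{\pi(i)}$ on $V_1$ and $c_2 e_{\pi(i)}$ on $V_2$, and using that vertices of $V_1$ have degree $n$ and those of $V_2$ degree $m$, the homomorphism condition reduces to $c_1^2 = c_2/n$ and $c_2^2 = c_1/m$. A short numerical sketch:

```python
# Sketch: the scaling constants in f_pi for K_{m,n} must satisfy
# c1**2 == c2/n and c2**2 == c1/m; the stated exponents do.
for m in range(1, 6):
    for n in range(1, 6):
        c1 = m ** (-1.0 / 3.0) * n ** (-2.0 / 3.0)
        c2 = m ** (-2.0 / 3.0) * n ** (-1.0 / 3.0)
        assert abs(c1 ** 2 - c2 / n) < 1e-12
        assert abs(c2 ** 2 - c1 / m) < 1e-12
```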
\[exa:tree\] A tree $T$ with $m+n+2$ vertices and diameter $3$ may be represented as in Fig. 2.4. The set of vertices may be partitioned into $V_1:=\{1,\ldots,m\}$, $V_2:=\{m+1,\ldots,m+n\}$, and $\{m+n+1,m+n+2\}$, in such a way that $\mathcal{N}(i) =\{ n+m+1\}$ for any $i\in V_1$, $\mathcal{N}(i) = \{n+m+2\}$ for any $i\in V_2$, and $n+m+1$ and $n+m+2$ are neighbors. Then $\det A =0$ provided $m\geq 2$ or $n\geq 2$, since two leaves attached to the same vertex produce equal columns in $A$.
\[FIG:tree\]
*(Figure: the tree $T$ of diameter $3$: leaves $1,\ldots,m$ attached to $u$, leaves $m+1,\ldots,m+n$ attached to $v$, and the edge $\{u,v\}$.)*
We shall see that this is an example of a graph for which the only homomorphism is the null map. For the sake of simplicity we consider the case $m=n=2$; the general case can be checked following the same arguments, with some additional work. For $m=n=2$, the tree $T$ induces the following evolution algebras: take the natural basis $\{e_1,e_2,e_3,e_4,e_5,e_6\}$ and the relations given by
$$\begin{array}{cc}
\mathcal{A}(T): \left\{
\begin{array}{l}
e_1^2 = e_2^2 = e_5,\\[.2cm]
e_3^2 = e_4^2 = e_6,\\[.2cm]
e_5^2=e_1 + e_{2}+e_6,\\[.2cm]
e_6^2=e_3 + e_{4}+e_5,\\[.2cm]
e_i \cdot e_j =0, i\neq j,
\end{array}\right.
&
\mathcal{A}_{RW}(T): \left\{
\begin{array}{l}
e_1^2 = e_2^2 = e_5,\\[.2cm]
e_3^2 = e_4^2 = e_6,\\[.2cm]
e_5^2=\frac{1}{3}\,e_1 +\frac{1}{3}\, e_{2}+\frac{1}{3}\,e_6,\\[.2cm]
e_6^2=\frac{1}{3}\,e_3 +\frac{1}{3}\, e_{4}+\frac{1}{3}\, e_5,\\[.2cm]
e_i \cdot e_j =0, i\neq j.
\end{array}\right.
\end{array}$$
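These relations can be generated mechanically from the neighbourhoods, since $e_i^2=\sum_{\ell\in\mathcal{N}(i)}e_\ell$ in $\mathcal{A}(T)$ and $e_i^2=\deg(i)^{-1}\sum_{\ell\in\mathcal{N}(i)}e_\ell$ in $\mathcal{A}_{RW}(T)$. A minimal sketch (illustrative code, with the vertex labelling of the example):

```python
from fractions import Fraction

edges = [(1, 5), (2, 5), (3, 6), (4, 6), (5, 6)]
nbrs = {i: set() for i in range(1, 7)}
for u, v in edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

# Structure constants of e_i^2: weight 1 on each neighbor for A(T),
# weight 1/deg(i) for the random-walk algebra A_RW(T).
sq_A = {i: {l: Fraction(1) for l in nbrs[i]} for i in nbrs}
sq_RW = {i: {l: Fraction(1, len(nbrs[i])) for l in nbrs[i]} for i in nbrs}

assert sq_A[5] == {1: 1, 2: 1, 6: 1}            # e_5^2 = e_1 + e_2 + e_6
assert sq_RW[5] == {1: Fraction(1, 3), 2: Fraction(1, 3), 6: Fraction(1, 3)}
```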
Assume $f:\mathcal{A}_{RW}(T) \longrightarrow \mathcal{A}(T)$ is a homomorphism such that for any $i\in\{1,2,3,4,5,6\}$ $$f(e_i )= \sum _{k=1} ^{6}t_{ik}e_k,$$ where the $t_{ik}$’s are scalars. Thus, $$\begin{aligned}
f(e_1^2) = f(e_2^2) = f(e_5) &=& \sum_{k=1}^{6} t_{5k} e_k,\label{eq:pri12}\\
f(e_3^2) = f(e_4^2) = f(e_6) &=& \sum_{k=1}^{6} t_{6k} e_k,\label{eq:pri34}\\
f(e_5^2) = f\left(\frac{1}{3}(e_1 + e_2 + e_6)\right) &=& \frac{1}{3}\sum_{k=1}^{6}(t_{1k}+t_{2k}+t_{6k})e_k,\label{eq:pri5}\\
f(e_6^2) = f\left(\frac{1}{3}(e_3 + e_4 + e_5)\right) &=& \frac{1}{3}\sum_{k=1}^{6}(t_{3k}+t_{4k}+t_{5k})e_k,\label{eq:pri6}\\
f(e_i)\cdot f(e_j) &=&\sum_{k=1}^{6}\left(\sum_{\ell \in \mathcal{N}(k)}t_{i\ell}t_{j\ell}\right) e_k,\label{eq:priij}\end{aligned}$$ for any $i, j\in V$, which together with $$\label{eq:isopro}
f(e_i \cdot e_j)=f(e_i)\cdot f(e_j), \text{ for any }i,j\in V,$$ imply the following set of equations. If $i\neq j$, then $0=f(e_i\cdot e_j)$ and we obtain:
$$\begin{aligned}
t_{ik}t_{jk} &=&0, \text{ for }k\in\{5,6\},\label{eq:exT1}\\
t_{ik}t_{jk} + t_{i\,k+1}t_{j\,k+1}&=&0, \text{ for }k\in\{1,3\}.\label{eq:exT2}\end{aligned}$$
Taking $i=j$ with $i\in\{1,2,3,4\}$, we obtain the following: if $k=5$ and $\ell \in\{1,2\}$, or if $k=6$ and $\ell\in \{3,4\}$, it holds
$$\begin{aligned}
t_{k5} &=&t_{\ell \,1}^2 + t_{\ell \,2}^2 + t_{\ell \,6}^2,\label{eq:exT3}\\
t_{k6}&=&t_{\ell \,3}^2 + t_{\ell \,4}^2 + t_{\ell \,5}^2,\label{eq:exT4}\\
t_{k1}=t_{k2}&=&t_{\ell 5}^2,\label{eq:exT5}\\
t_{k3}=t_{k4}&=&t_{\ell 6}^2.\label{eq:exT6}\end{aligned}$$
On the other hand, taking $i=j=5$ we obtain: if $k=5$ and $\ell \in\{1,2\}$, or if $k=6$ and $\ell\in \{3,4\}$, we get
$$\begin{aligned}
3\, t_{5k}^2 &=& t_{1 \ell} + t_{2 \ell} + t_{6 \ell},\label{eq:exT7}\\
3(t_{51}^2 + t_{52}^2 + t_{56}^2) &=& t_{15} + t_{25} + t_{65},\label{eq:exT8}\\
3(t_{53}^2 + t_{54}^2 + t_{55}^2) &=& t_{16} + t_{26} + t_{66}.\label{eq:exT9}\end{aligned}$$
Finally, taking $i=j=6$: if $k=5$ and $\ell \in\{1,2\}$, or if $k=6$ and $\ell\in \{3,4\}$,
$$\begin{aligned}
3\, t_{6k}^2 &=& t_{3 \ell} + t_{4 \ell} + t_{5 \ell},\label{eq:exT10}\\
3(t_{61}^2 + t_{62}^2 + t_{66}^2) &=& t_{35} + t_{45} + t_{55},\label{eq:exT11}\\
3(t_{63}^2 + t_{64}^2 + t_{65}^2) &=& t_{36} + t_{46} + t_{56}.\label{eq:exT12}\end{aligned}$$
For $k\in\{5,6\}$, we have $t_{ik} =0$ for all $ i\in V$, or there exists at most one $i\in V$ such that $t_{ik} \not = 0$ and $t_{jk} =0$ for all $j \not = i$. This implies that $$\begin{aligned}
\label{eq:exT13}
t_{ki}=t_{i k}=0,& \text{ for }k\in\{5,6\}\text{ and }i\in \{1,2,3,4\}. \end{aligned}$$
From now on we shall consider two different cases, namely, $t_{55} = t_{65}=0$, or $t_{55} = 0$ and $t_{65}\neq0$; in principle there are three cases, but the case $t_{55} \neq 0$ and $t_{65}= 0$ is analogous to the second one. Note that we have already shown that $t_{i5}=0$ for $i\in\{1,2,3,4\}$.\
[**Case $1$:**]{} $t_{55} = t_{65}=0$. In this case we get that $$\begin{aligned}
\label{eq:exT14}
t_{i1}=t_{i2}=0,\text{ for }i\in\{1,2,3,4\}.\end{aligned}$$ In addition, we have $t_{56}=0$, and this in turn implies, for $k=5$, $$\begin{aligned}
\label{eq:exT15}
t_{i3}=t_{i4}=0,&\text{ for }i\in \{1,2\}. \end{aligned}$$ Analogously, we obtain $t_{66}=0$, which in turn implies, for $k=6$, $$\begin{aligned}
\label{eq:exT16}
t_{i3}=t_{i4}=0,& \text{ for }i\in\{3,4\}.\end{aligned}$$ Therefore, as $t_{i5}=0$ for any $i\in\{1,2,3,4,5,6\}$, $t_{56}=t_{66}=0$, and the previous relations hold, we conclude that $f$ is the null map.\
[**Case $2$:**]{} $t_{55} = 0$ and $t_{65}\neq 0$. As before, $t_{55} = 0$ implies, for $k=5$, $$\begin{aligned}
\label{eq:exT17}
t_{i1}=t_{i2}=0,&\text{ for }i\in\{1,2\},\end{aligned}$$ and combining this with the previous relations we have $t_{66}=0$. Now, observe that it must be that $t_{56}\neq0$; otherwise we would be led to $t_{65}=0$, which is a contradiction. So assume $t_{56}\neq0$. For $k=5$ we have $$2 t_{56} = t_{13}^2+t_{14}^2+t_{23}^2+t_{24}^2 = t_{13}^2+2t_{13}t_{23}+t_{23}^2+t_{14}^2+2t_{14}t_{24}+t_{24}^2,$$ where the last equality comes from the case $k=3$. In other words, we have $$2 t_{56} = (t_{13}+t_{23})^2+(t_{14}+ t_{24})^2,$$ which, for $k=6$, and using that $t_{63}=t_{64}=0$, leads us to $2 t_{56} = 9 t_{56}^4$. Then $t_{56}=\left(2/9\right)^{1/3}$. This implies $3 t_{56}^2 = t_{65}$, and then $t_{65}=(4/3)^{1/3}\approx 1.1$. On the other hand, we can determine the value of $t_{65}$ by following the same steps as for $t_{56}$. In that direction one gets $t_{65}=(2/243)^{1/6}\approx 0.45$, which is a contradiction.
Our analysis of Case $2$ leads us to conclude that the only possibility is that of Case $1$. Therefore, $f$ must be the null map.
Examples \[exa:regular\], \[exa:bipartite\] and \[exa:tree\] consider different singular graphs. Using different arguments and applying previous results, we have checked that either there exists an isomorphism between $\mathcal{A}_{RW}(G)$ and $\mathcal{A}(G)$, or the only homomorphism between these algebras is the null map. This leads us to think that Theorem \[theo:criterio\] also holds for [finite]{} singular graphs. However, further work needs to be carried out to establish whether or not this is true, so we state it as a conjecture for future research.
\[conjecture\] Let $G$ be a [finite]{} graph. $\A_{RW}(G)\cong \A(G)$ if, and only if, $G$ is a regular or a biregular graph. Moreover, if $\A_{RW}(G)\ncong \A(G)$ then the only homomorphism between them is the null map.
Some results for general graphs
-------------------------------
As stated in the previous section, the existence of isomorphisms between $\mathcal{A}(G)$ and $\mathcal{A}_{RW}(G)$ has been established in [@PMP] for the particular case of regular and complete bipartite graphs. As we show next, this result can be extended to biregular graphs.
\[theo:generalization\] Let $G=(V_1,V_2,E)$ be a biregular graph. Then $\A(G) \cong \A_{RW}(G)$.
Assume that $G=(V_1,V_2,E)$ is a $(d_1,d_2)$-biregular graph and consider the linear map $f:\mathcal{A}(G)\longrightarrow \mathcal{A}_{RW}(G)$ defined by $$\label{eq:iso}
f(e_i)=\left\{
\begin{array}{cl}
\left(d_1^2 d_2\right)^{1/3}\, e_i,& \text{ if }i\in V_1,\\[.2cm]
\left(d_1 d_2^2\right)^{1/3}\, e_i,& \text{ if }i\in V_2.
\end{array}\right.$$ A direct computation shows that $f$, thus defined, is an isomorphism between $\A(G)$ and $\A_{RW}(G)$.
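To illustrate the proposition on a concrete biregular graph, the sketch below (illustrative code, not from the paper) takes the complete bipartite graph $K_{2,3}$, which is $(3,2)$-biregular, and checks coefficient-wise that $f(e_i)\cdot f(e_i)=f(e_i^2)$ for every vertex $i$:

```python
# K_{2,3}: V1 = {0,1} with degree d1 = 3, V2 = {2,3,4} with degree d2 = 2.
V1, V2 = [0, 1], [2, 3, 4]
nbrs = {i: list(V2) for i in V1}
nbrs.update({j: list(V1) for j in V2})
d1, d2 = 3, 2

# Coefficients of the map f from the proposition.
c = {i: (d1 ** 2 * d2) ** (1 / 3) for i in V1}
c.update({j: (d1 * d2 ** 2) ** (1 / 3) for j in V2})

for i in nbrs:
    deg = len(nbrs[i])
    # Coefficient of e_l in f(e_i^2), where e_i^2 is computed in A(G).
    lhs = {l: c[l] for l in nbrs[i]}
    # Coefficient of e_l in f(e_i)·f(e_i), with the product taken in A_RW(G).
    rhs = {l: c[i] ** 2 / deg for l in nbrs[i]}
    assert all(abs(lhs[l] - rhs[l]) < 1e-9 for l in nbrs[i])
print("f is multiplicative on K_{2,3}")
```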
By [@PMP Theorem 3.2] and Proposition \[theo:generalization\] we have that $\mathcal{A}_{RW}(G) \cong \mathcal{A}(G)$ provided $G$ is either a regular or a biregular graph. At this point, the reader may ask whether the converse is true. The following result sheds some light on this question.
\[theo:sufficient\] Let $G=(V,E)$ be a graph. Assume that there exists an isomorphism $f:\mathcal{A}(G)\longrightarrow \mathcal{A}_{RW}(G)$ defined by $$\label{eq:isofunc}
f(e_i) = \alpha_i e_{\pi(i)},\;\;\;\text{ for all }i\in V,$$ where $\alpha_i \neq 0$ is a scalar, for $i\in V$, and $\pi$ is an element of the symmetric group [$S_{V}$]{}. Then $G$ is a biregular graph or a regular graph.
Assume that the map $f:\mathcal{A}(G)\longrightarrow \mathcal{A}_{RW}(G)$ is an isomorphism defined by $f(e_i) = \alpha_i e_{\pi(i)}$, where $\alpha_i \neq 0$, for $i\in V$, and $\pi \in {\color{black}S_{V}}$. Since $f$ is linear we have that $$f(e_{i}^2)= f \left( \sum_{\ell \in \mathcal{N}(i) }e_{\ell} \right) = \sum_{\ell \in \mathcal{N}(i) } f(e_{\ell}) = \sum_{\ell \in \mathcal{N}(i) } \alpha_{\ell} e_{\pi(\ell)}$$ for $i\in V$. On the other hand, since $f$ is a homomorphism we have, for any $i\in V$: $$f(e_{i}^2)=f(e_i)\cdot f(e_i) = \alpha_ i^2 e_{\pi(i)}^2= \alpha_ i^2 \sum_{\ell \in \mathcal{N}(\pi(i))} \frac{1}{\deg(\pi(i))} \, e_{\ell} = \frac{\alpha_ i^2}{\deg(\pi(i))} \sum_{\ell \in \mathcal{N}(\pi(i))} \,e_{\ell}.$$ Then $$\sum_{\ell \in \mathcal{N}(i) } \alpha_{\ell} e_{\pi(\ell)} = \frac{\alpha_ i^2}{\deg(\pi(i))} \sum_{\ell \in \mathcal{N}(\pi(i))} \,e_{\ell}, \text{ for all } i \in V.$$ Therefore $\pi(\mathcal{N}(i))= \mathcal{N}(\pi(i))$, i.e., $\deg(i)=\deg(\pi(i))$ for all $i\in V$. It follows that $$\label{eq:prova2a}
\alpha_\ell = \frac{\alpha_{i}^2}{\deg(i)}, \hspace{0.3cm} \text { for all }\, \ell \in \mathcal{N}(i).$$ This implies that $$\label{eq:prova2b}
\alpha_{\ell_1}=\alpha_{\ell_2}, \,\,\,\,\text{ for } \ell_1, \ell_2 \in \mathcal{N}(i).$$
Given $i\in V$, if $\ell \in \mathcal{N}(i)$ then $i \in \mathcal{N}(\ell)$, hence $$\label{eq:prova2c}
\alpha_i = \frac{\alpha_{\ell}^2}{\deg(\ell)}, \hspace{0.3cm} \text{ for all }\ell \in \mathcal{N}(i).$$ So, by the two previous relations, for $\ell_1, \ell_2 \in \mathcal{N}(i)$, $$\frac{\alpha_{\ell_{1}}^2}{\deg(\ell_{1})} = \alpha_i = \frac{\alpha_{\ell_{2}}^2}{\deg(\ell_{2})} .$$ As a consequence, we obtain the following condition on the degrees in the graph:
$$\label{eq:prova2}
\text{ for any }i\in V, \text{ if }\ell_1, \ell_2 \in \mathcal{N}(i) \text{ then }\deg(\ell_1)=\deg(\ell_2).$$
Now let us fix a vertex, say $1$, and note that we have $\deg(\ell)=\deg(1)$ for any $\ell \in V$ such that there is a path of even length from $1$ to $\ell$ (see Fig. \[fig:vertices\]).
[Fig. \[fig:vertices\]: along a path starting at a vertex $i$ with neighbor $j$, the vertices at even distance from $i$ have degree $\deg(i)$, while those at odd distance have degree $\deg(j)$.]
Analogously, we have $\deg(\ell_1)=\deg(\ell_2)$ for any $\ell_1,\ell_2 \in V$ such that there is a path of odd length from $1$ to $\ell_k$, $k\in\{1,2\}$. Now let us define $V_1:=\{j \in V: d(1,j) \text{ is } even \}$ and $V_2:=\{j \in V: d(1,j) \text{ is } odd\}$. Notice that by our previous comments our definition of $V_1$ and $V_2$ is enough to guarantee that $\deg(i)=\deg(j)$ for $i,j\in V_k$, and $k\in\{1,2\}$. If every edge of $G$ connects a vertex in $V_1$ to one in $V_2$, then $G$ is a biregular graph. In the opposite case, if there exist $i,j \in V_1$ such that $i \in \N(j)$, we claim that $G$ is a $\deg(1)$-regular graph. To see this, we fix these vertices $i,j$, let $U_1:= \N(i)$, and for $m\in \mathbb{N}, m>1,$ let $U_m := \N(U_{m-1})$. Since $G$ is a connected graph, if for every $n \in \mathbb{N}$ it is true that $$\bigcup_{i=1}^{n}U_i \subseteq V_1,$$ then $V_1 =V$. Otherwise, there exists $q \in \mathbb{N}$ such that $\bigcup_{i=1}^{q}U_i \nsubseteq V_1$. Let $\ell \in \left( \bigcup_{i=1}^{q} U_i \right) \cap V_2$. Then there is a $t \in \{1,\ldots,q\}$ such that $\ell \in U_t \cap V_2.$ Note that as $\ell \in U_t $ there is a path $\gamma=(i_{0},i_1, \ldots, i_{t-1},i_{t})$ of length $t$ connecting $i$ to $\ell$; i.e. $i_0=i$ and $i_t=\ell$. If $t$ is even, then $\deg(\ell)= \deg(i)$ and then $G$ is $\deg(1)$-regular. If $t$ is odd, we consider the path $\gamma_1=(j,i,i_1, \ldots, i_{t-1},i_{t})$ connecting $j$ to $\ell$, which has even length, so $\deg(j)=\deg(\ell)$; but $\deg(j)=\deg(1)$, and therefore $G$ is a $\deg(1)$-regular graph. The same argument holds assuming the existence of a pair of vertices $i,j \in V_2$ such that $i \in \N(j)$.
For the rest of the paper, we adopt the notation $f_{\pi}$ for a map between evolution algebras, with the same natural basis, of the form $f_{\pi}(e_i)=\alpha_i e_{\pi(i)}$. Even if $\mathcal{A}_{RW}(G) \cong \mathcal{A}(G)$, it is important to note that not every map of this form is an isomorphism, as we illustrate in the following example.
\[exa:cycle\] Let $C_5$ be the cycle graph, or circular graph, with $5$ vertices (see Fig. \[FIG:cycle\]).
\[FIG:cycle\]

[Fig. \[FIG:cycle\]: the cycle graph $C_5$ with vertices $1,\ldots,5$.]
Consider the evolution algebras induced by $C_5$, and by the random walk on $C_5$, respectively. That is, consider the evolution algebras whose natural basis is $\{e_1,e_2,e_3,e_4,e_5\}$ and relations are:
$$\begin{array}{cc}
\mathcal{A}(C_5): \left\{
\begin{array}{l}
e_1^2= e_2 + e_5,\\[.2cm]
e_i^2=e_{i-1}+e_{i+1}, i\in \{2,3,4\},\\[.2cm]
e_5^2=e_1 + e_{4}, \\[.2cm]
e_i \cdot e_j =0, i\neq j.
\end{array}\right.
&
\mathcal{A}_{RW}(C_5): \left\{
\begin{array}{l}
e_1^2= \frac{1}{2}\,e_2 +\frac{1}{2}\, e_5,\\[.2cm]
e_i^2=\frac{1}{2}\,e_{i-1}+\frac{1}{2}\,e_{i+1}, i\in \{2,3,4\},\\[.2cm]
e_5^2=\frac{1}{2}\,e_1 +\frac{1}{2}\, e_{4}, \\[.2cm]
e_i \cdot e_j =0, i\neq j.
\end{array}\right.
\end{array}$$
Note that $C_5$ is a $2$-regular graph, so by [@PMP Theorem 3.2(i)] $\mathcal{A}_{RW}(C_5) \cong \mathcal{A}(C_5)$ as evolution algebras. However, we shall see that not every map $f_{\pi}$, with $\pi \in S_5$, is an isomorphism. Indeed, let $\pi$ be given by
$$\label{eq:piexa}
\pi:= \left(
\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\[.2cm]
3& 2 & 1& 4 & 5
\end{array}\right).$$
We shall verify that $f_{\pi}:\mathcal{A}_{RW}(C_5)\longrightarrow \mathcal{A}(C_5)$ defined in this way is not an isomorphism. To this end, it is enough to note that
$$f_{\pi}(e_1 ^2) = \frac{1}{2}f_{\pi}\left(e_2 + e_5\right) = \frac{1}{2} \left(\alpha_2 e_{2} + \alpha_5 e_5\right),$$ while
$$f_{\pi}(e_1)\cdot f_{\pi}(e_1)= \alpha_1^2 e_3^2 = \alpha_1^2 \left(e_2 + e_4\right).$$
Therefore $f_{\pi}(e_1 ^2) \neq f_{\pi}(e_1)\cdot f_{\pi}(e_1)$.
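The failure is already visible at the level of supports: $f_{\pi}(e_1^2)$ is supported on $\{e_2,e_5\}$ while $f_{\pi}(e_1)\cdot f_{\pi}(e_1)$ is supported on $\{e_2,e_4\}$, so no choice of non-zero scalars $\alpha_i$ can repair it. A quick check (illustrative code, not from the paper):

```python
n = 5
pi = {1: 3, 2: 2, 3: 1, 4: 4, 5: 5}
# Neighbors on the cycle C_5 with vertices 1,...,5.
nbrs = {i: [(i - 2) % n + 1, i % n + 1] for i in range(1, n + 1)}

# Support of f_pi(e_1^2) = (1/2)(alpha_2 e_2 + alpha_5 e_5):
supp_lhs = set(nbrs[1])          # {2, 5}
# Support of f_pi(e_1)·f_pi(e_1) = alpha_1^2 e_{pi(1)}^2 = alpha_1^2 e_3^2:
supp_rhs = set(nbrs[pi[1]])      # {2, 4}

assert supp_lhs != supp_rhs      # the two sides can never agree
```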
\[prop:pi\] Let $G$ be a graph and let $A=(a_{ij})$ be its adjacency matrix. Assume $f_{\pi}:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$ is an isomorphism defined by $$f_{\pi}(e_i) = \alpha_i e_{\pi(i)},\;\;\;\text{ for all }i\in V,$$ where $\alpha_i \neq 0$, $i\in V$, are scalars and $\pi \in {\color{black}S_{V}}$. Then $\pi$ satisfies $$\label{eq:condpi}
a_{i\pi^{-1}(k)}\,\alpha _{\pi^{-1}(k)}=\deg(i)\,\alpha_i^2\,a_{\pi(i)k} , \text{ for } i,k \in V.$$
Since $f_{\pi}$ is a homomorphism we have that $$f_{\pi}(e_i^2)=f_{\pi}(e_i)\cdot f_{\pi}(e_i)=\alpha_{i}^2 e^{2}_{\pi(i)}= \alpha_{i}^{2} \sum_{k=1}^{|V|} a_{\pi(i)k} e_k , \text { for } i\in V.$$ On the other hand $$f_{\pi}(e_{i}^{2})=f_{\pi}\left(\sum_{k=1}^{|V|}\left(\frac{a_{ik}}{\deg(i)}\right)e_k\right)= \sum_{k=1}^{|V|} \left(\frac{a_{ik}}{\deg(i)}\right) \alpha_{k}e_{\pi(k)}.$$ Then $$a_{i\pi^{-1}(k)}\,\alpha _{\pi^{-1}(k)}=\deg(i)\,\alpha_i^2\,a_{\pi(i)k},$$ for any $i,k \in V$, where $\pi^{-1} \in {\color{black}S_{V}}$ denotes the inverse of $\pi$, i.e. $\pi^{-1}(j)=i$ if, and only if, $\pi(i)=j$, for any $i,j\in V$. We notice that even in the case $|V|=\infty$ the previous sums contain only finitely many non-zero terms. This is because we are considering locally finite graphs.
Let $C_5$ be the cycle graph considered in Example \[exa:cycle\], and let $f_{\pi}:\mathcal{A}_{RW}(C_5)\longrightarrow \mathcal{A}(C_5)$, where $\pi$ is the permutation of that example. Taking $i=1$ and $k=4$ we have on one hand $a_{\pi(1) 4}=a_{34}=1$, while, on the other hand, $a_{1 \pi^{-1}(4)}=a_{14}=0$. This is enough to see that there exists no sequence of non-zero scalars $(\alpha_i)_{i\in V}$ such that the condition of Proposition \[prop:pi\] holds. Therefore, by Proposition \[prop:pi\], $f_{\pi}$ is not an isomorphism. On the other hand, a straightforward calculation shows that the element of $S_5$ given by $$\sigma:= \left(
\begin{array}{ccccc}
1 & 2 & 3 & 4 & 5\\[.2cm]
2& 3 & 4& 5 & 1
\end{array}\right),$$ satisfies the condition of Proposition \[prop:pi\], provided $\alpha_i =1/2$ for any $i\in V$. [Moreover, it is possible to check that]{} $f_{\sigma}:\mathcal{A}_{RW}(C_5)\longrightarrow \mathcal{A}(C_5)$ defined for $i\in V$ by $f_{\sigma}(e_i) =(1/2) e_{\sigma(i)}$ is an isomorphism.
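Condition-checking of this kind is mechanical: one simply tests $a_{i\pi^{-1}(k)}\,\alpha_{\pi^{-1}(k)}=\deg(i)\,\alpha_i^2\,a_{\pi(i)k}$ over all pairs $i,k$. A sketch for $C_5$ (illustrative code, not from the paper, 0-indexed), confirming that $\sigma$ passes with $\alpha_i=1/2$ while the earlier $\pi$ fails:

```python
n = 5
# Adjacency matrix of C_5 on vertices 0,...,4.
A = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
deg = [sum(row) for row in A]            # every degree equals 2

def satisfies(pi, alpha):
    # Check the necessary condition of the proposition for all i, k.
    inv = {pi[i]: i for i in pi}
    return all(
        A[i][inv[k]] * alpha[inv[k]] == deg[i] * alpha[i] ** 2 * A[pi[i]][k]
        for i in range(n) for k in range(n)
    )

alpha = [0.5] * n
sigma = {0: 1, 1: 2, 2: 3, 3: 4, 4: 0}   # cyclic shift (0-indexed)
pi_bad = {0: 2, 1: 1, 2: 0, 3: 3, 4: 4}  # the permutation of the example

assert satisfies(sigma, alpha)
assert not satisfies(pi_bad, alpha)
```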
Isomorphisms for the case of [finite]{} non-singular graphs and proof of Theorem \[theo:criterio\]
--------------------------------------------------------------------------------------------------
\[theo:principal\] Let $G$ be a non-singular graph with $n$ vertices and let $A=(a_{ij})$ be its adjacency matrix. If $f:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$ is a homomorphism, then either $f$ is the null map or $f$ is an isomorphism defined by $$f(e_i) = \alpha_i e_{\pi(i)},\;\;\;\text{ for all }i\in V,$$ where $\alpha_i \neq 0$, $i\in V$, are scalars and $\pi$ is an element of the symmetric group $S_n$.
Let $f:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$ be a homomorphism such that $$\nonumber f(e_i)=\sum_{k=1}^{n} t_{ik}e_k, \,\,\, \text{for any } i\in V,$$ where the $t_{ik}$’s are scalars. Then $f(e_i)\cdot f(e_j)=0$ for any $i\neq j$, which implies $$0=\sum_{k\in V}t_{ik} t_{jk} e_k^2 =\sum_{k\in V} t_{ik} t_{jk} \left(\sum_{r\in V} a_{kr} e_r\right)=\sum_{r\in V} \left(\sum_{k\in V} t_{ik} t_{jk} a_{kr}\right) e_r.$$ This in turn implies, for any $r\in V$, $$\sum_{k\in V} t_{ik} t_{jk} a_{kr} =0.$$ In other words we have, for $i\neq j$, $A^T
\begin{bmatrix}
t_{i1}t_{j1} & t_{i2}t_{j2} & \cdots & t_{in}t_{jn}
\end{bmatrix}^{T} = \begin{bmatrix}
0 & 0 & \cdots & 0
\end{bmatrix}^{T}$, where $B^T$ denotes the transpose of the matrix $B$. As the adjacency matrix $A$ is non-singular then $$\label{eq:tij}
\nonumber t_{ik}t_{jk}=0, \text{ for any }i,j,k\in V \text{ with }i\neq j.$$ Thus for any fixed $k\in V$ we have $t_{ik} =0$ for all $ i\in V$, or there exists at most one $i:=i(k)\in V$ such that $t_{ik} \not = 0$ and $t_{jk} =0$ for all $j \not = i$. Then $$\label{eq:supp}
(\supp f(e_i) ) \cap (\supp f(e_j)) =\emptyset, \text{ for }i\neq j, \text{ and }\displaystyle \cup_{i\in V} (\supp f(e_i) ) = V,$$ where $\supp f(e_i)=\{j\in V: t_{ij}\neq 0\}$. In what follows we consider two cases.\
[**Case 1.**]{} For any $k\in V$ there exists $i\in V$ such that $t_{ik} \not = 0$ and $t_{jk} =0$ for all $j \not = i$. In this case the sequence $(t_{ij})_{i,j\in V}$ contains only $n$ scalars different from zero. Assume that there exists $i\in V$ such that $t_{i j_1}\neq 0$ and $t_{i j_2}\neq 0$ for some $j_1, j_2 \in V$ with $j_1\neq j_2$. This implies the existence of $m\in V$ such that $f(e_m) =0$, which in turn implies $f(e_m)\cdot f(e_m)=0$. On the other hand, as $f(e_m)\cdot f(e_m) = f(e_m^2)$, we have $$0=f(e_m^2) =f\left(\sum_{\ell \in V}\left(\frac{a_{m\ell}}{\deg(m)}\right) e_{\ell}\right)=\sum_{\ell \in V} \left(\frac{a_{m\ell}}{\deg(m)}\right) f(e_{\ell}).$$ We can then conclude that $f(e_{\ell})=0$ for any $\ell$ such that $a_{m\ell}=1$. In other words, for any $\ell \in \mathcal{N}(m)$ it holds that $f(e_\ell)=0$. This procedure may be repeated, now for any $\ell \in \mathcal{N}(m)$, i.e. we can prove for any $v\in\mathcal{N}({\ell})$ that $f(e_v)=0$. As we are dealing with a connected graph, this procedure may be repeated until all the vertices of $G$ are covered, and therefore we can conclude that $f(e_{i})=0$ for any $i\in V$, which is a contradiction. Therefore, for any $i\in V$ there exists only one $j:=j(i)$ such that $t_{ij}\neq 0$. Hence $f$ must be of the form $$f(e_i) = \alpha_i e_{\pi(i)},\;\;\;\text{ for all }i\in V,$$ where the $\alpha_i$’s are scalars, and $\pi$ is an element of the symmetric group $S_n$.\
[**Case 2.**]{} Assume that there exists $k\in V$ such that $t_{ik} =0$ for all $ i\in V$. Then the sequence $(t_{ij})_{i,j\in V}$ contains at most $n-1$ scalars different from zero, which implies the existence of $\ell \in V$ such that $f(e_{\ell})=0$. By applying arguments similar to those of Case 1 we conclude that $f(e_i)=0$ for any $i\in V$, and therefore $t_{ij}=0$ for any $i,j \in V$. Thus $f$ is the null map.
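The propagation step used in both cases is pure connectivity: once $f$ vanishes at a single vertex, the zeros spread along neighbourhoods until they cover $G$. A sketch of this spreading (illustrative code, on a hypothetical small path graph):

```python
from collections import deque

# A small connected path graph 0 - 1 - 2 - 3 (illustrative choice).
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def zero_spread(start):
    # Vertices forced to satisfy f(e_v) = 0 once f(e_start) = 0,
    # following the argument above: zeros propagate to all neighbors.
    zero, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in nbrs[v]:
            if w not in zero:
                zero.add(w)
                queue.append(w)
    return zero

# Connectivity forces f to vanish on every vertex.
assert zero_spread(2) == {0, 1, 2, 3}
```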
In Proposition \[theo:principal\] we assume that the adjacency matrix $A$ is non-singular. We point out that this hypothesis is equivalent to the non-singularity of the transition matrix, say $A_{RW}$, of the symmetric random walk on $G$. In fact, if we denote by $F_i$ the $i$-th row of $A$ then we can write $A^T=
\begin{bmatrix}
F_1 & F_2 & \cdots & F_n
\end{bmatrix}
$. Then $A_{RW}^T=
\begin{bmatrix}
(1/\deg(1)) F_1 & (1/\deg(2)) F_2 & \cdots &(1/\deg(n)) F_n \end{bmatrix}
$, and therefore $\det A_{RW} = \left(\deg(1)\times \deg(2) \times \cdots \times \deg(n)\right)^{-1} \, \det A$.
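This identity is easy to test on a small example: for the cycle $C_5$ one has $\det A=2$ and all degrees equal to $2$, so $\det A_{RW}=2/2^5=1/16$. A sketch (illustrative code, not from the paper):

```python
from fractions import Fraction

def det(M):
    # Exact determinant by Laplace expansion along the first row.
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

n = 5
A = [[Fraction(int((i - j) % n in (1, n - 1))) for j in range(n)]
     for i in range(n)]
deg = [sum(row) for row in A]
ARW = [[A[i][j] / deg[i] for j in range(n)] for i in range(n)]

prod_deg = 1
for d in deg:
    prod_deg *= d

assert det(A) == 2
assert det(ARW) == det(A) / prod_deg == Fraction(1, 16)
```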
### Proof of Theorem \[theo:criterio\]
Taken together, [@PMP Theorem 3.2(i)] and Propositions \[theo:generalization\], \[theo:sufficient\] and \[theo:principal\] provide a necessary and sufficient condition for the existence of isomorphisms in the case of non-singular graphs. Indeed, assume that $G$ is a non-singular graph and notice that $\A_{RW}(G)\cong \A(G)$ implies, by Proposition \[theo:principal\], that the isomorphisms between $\A_{RW}(G)$ and $\A(G)$ are given by maps of the form $f(e_i)=\alpha_i e_{\pi(i)}$. Then by Proposition \[theo:sufficient\] we conclude that $G$ is a regular or a biregular graph. For the converse, it is enough to apply Proposition \[theo:generalization\] and [@PMP Theorem 3.2(i)].
In Conjecture \[conjecture\] we claim that Theorem \[theo:criterio\] should be true for the case of finite singular graphs. Indeed, we believe that the conjecture should be true for infinite graphs too, since Propositions \[theo:generalization\] and \[theo:sufficient\] hold for infinite graphs. Therefore a generalization in this direction should focus on extending Proposition \[theo:principal\] to deal with infinite non-singular adjacency matrices.
Connection with the automorphisms of $\mathcal{A}(G)$
=====================================================
The purpose of this section is twofold. First, we show that the problem of finding the isomorphisms between $\mathcal{A}_{RW}(G)$ and $\mathcal{A}(G)$ is equivalent to that of finding the automorphisms of $\mathcal{A}(G)$, provided $G$ is a regular graph. Second, we use this correspondence to revisit a result obtained in [@camacho/gomez/omirov/turdibaev/2013], which exhibits the automorphism group of an evolution algebra, and we give a corrected presentation of that result.
As usual we use $\aut \mathcal{A}(G)$ to denote the automorphism group of $\mathcal{A}(G)$.
\[prop:auto\] Let $G$ be a $d$-regular graph. Then any isomorphism $f:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$ induces a $g\in \aut \mathcal{A}(G)$. Analogously, any $g\in \aut \mathcal{A}(G)$ induces an isomorphism $f:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$.
Assume that $f:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$ is an isomorphism and consider $g:\mathcal{A}(G)\longrightarrow \mathcal{A}(G)$ such that $g(e_i)=d\, f(e_i)$ for any $i\in V$. If $i\neq j$ then $g(e_i)\cdot g(e_j) = 0$. On the other hand, for $i\in V$ $$g(e_i ^2)=g\left(\sum_{j\in V}a_{ij} e_j \right) = \sum_{j\in V} a_{ij} d f(e_j),$$ while $$g(e_i)\cdot g(e_i) = d^2\,f(e_i) \cdot f(e_i) = d^2\, f(e_i^2) = d^2\, f\left(\sum_{j\in V} \left(\frac{a_{ij}}{d} \right)e_j\right) = \sum_{j\in V} a_{ij} d f(e_j).$$ Thus $g$ is an automorphism of $ \mathcal{A}(G)$. The other assertion may be proved in an analogous way by considering $f(e_i)=d^{-1}\, g(e_i)$ for any $i\in V$.
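For instance, for $C_5$ (where $d=2$) the isomorphism $f_{\sigma}(e_i)=(1/2)e_{\sigma(i)}$ of Example \[exa:cycle\] induces the rotation automorphism $g(e_i)=e_{\sigma(i)}$ of $\mathcal{A}(C_5)$, which can be checked coefficient-wise; a sketch (illustrative code, not from the paper, 0-indexed):

```python
n = 5
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # the cycle C_5
sigma = {i: (i + 1) % n for i in range(n)}                # rotation by one

def g(coeffs):
    # Image under g(e_i) = e_{sigma(i)} of a vector given by its coefficients.
    return {sigma[i]: c for i, c in coeffs.items()}

for i in range(n):
    sq = {l: 1 for l in nbrs[i]}           # e_i^2 = e_{i-1} + e_{i+1} in A(C_5)
    lhs = g(sq)                            # g(e_i^2)
    rhs = {l: 1 for l in nbrs[sigma[i]]}   # g(e_i)·g(e_i) = e_{sigma(i)}^2
    assert lhs == rhs                      # g is multiplicative on squares
```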
The correspondence described in the previous proposition allows us to state the following result.
\[prop:autograph\] Let $G$ be a non-singular regular graph with $n$ vertices, and let $\mathcal{A}(G)$ be its associated evolution algebra. Then $\aut \mathcal{A}(G) \subseteq \{g_{\pi}: \pi \in S_{n}\}$.
Let $g \in \aut \mathcal{A}(G)$. By the proof of Proposition \[prop:auto\], there exists an isomorphism $f:\mathcal{A}_{RW}(G)\longrightarrow \mathcal{A}(G)$ such that $f(e_i) := (1/d)\,g(e_i)$, for any $i\in V$. On the other hand, as $G$ is a non-singular graph we have by Proposition \[theo:principal\] that $f(e_i)=\alpha_i e_{\pi(i)}$, where the $\alpha_i$’s are scalars and $\pi$ is an element of the symmetric group $S_n$. Therefore $g=g_\pi$ and the proof is completed.
In [@camacho/gomez/omirov/turdibaev/2013 Proposition 3.1] it is stated that for any evolution algebra $E$ with a non-singular matrix of structural constants, $\aut E=\{g_\pi:\pi \in S_n\}$. Example \[exa:cycle\] shows that if $E:=\mathcal{A}(C_5)$ (so $\det A = 2$), then $\aut E\subsetneq \{g_\pi:\pi \in S_n\}$, which contradicts the stated equality. The mistake lies in the proof: although the authors correctly require that an automorphism $g$ verify $g(e_i\cdot e_j)=g(e_i)\cdot g(e_j)$, they only check this equality for $i\neq j$. When one also checks the equality for $i=j$, one obtains the condition that $\pi$ must satisfy in order to define an automorphism. This is the spirit behind our Proposition \[prop:pi\]. The same arguments as in our proof lead to the following version of [@camacho/gomez/omirov/turdibaev/2013 Proposition 3.1].
Let $E$ be an evolution algebra with natural basis $\{e_i:i\in V\}$, and a non-singular matrix of structural constants $C=(c_{ij})$. Then $$\aut E=\{g_{\pi}:\pi \in S_n \text{ and }c_{i\pi^{-1}(k)}\,\alpha _{\pi^{-1}(k)}=\alpha_i^2\,c_{\pi(i)k} , \text{ for any } i,k \in V\}.$$
Acknowledgments {#acknowledgments .unnumbered}
===============
P.C. was supported by CNPq (grant number 235081/2014-0). P.M.R was supported by FAPESP (grant numbers 2016/11648-0, 17/10555-0) and CNPq (grant number 304676/2016-0).
[99]{}
A. A. Albert, [*Non-associative algebras: I. Fundamental concepts and isotopy*]{}, Ann. of Math. [**2**]{} (1942), No. 43, 685–707.
P. Cadavid, M. L. Rodiño Montoya and P. M. Rodríguez, [*A connection between evolution algebras, random walks, and graphs*]{}. J. Algebra Appl. In Press.
P. Cadavid, M. L. Rodiño Montoya and P. M. Rodríguez, [*Characterization theorems for the space of derivations of evolution algebras associated to graphs*]{}, Linear Multilinear Algebra (2018).
Y. Cabrera, M. Siles, M. V. Velasco, [*Evolution algebras of arbitrary dimension and their decompositions*]{}, Linear Algebra Appl. [**495**]{}, 2016, 122-162.
Y. Cabrera, M. Siles, M. V. Velasco, [*Classification of three-dimensional evolution algebras*]{}, Linear Algebra Appl. [**524**]{}, 2017, 68-108.
L. M. Camacho, J. R. Gómez, B. A. Omirov and R. M. Turdibaev, [*Some properties of evolution algebras*]{}, Bull. Korean Math. Soc. [**50**]{} (2013), No. 5, 1481–1494.
A. Elduque and A. Labra, [*Evolution algebras and graphs*]{}, J. Algebra Appl. [**14**]{} (2015), 1550103.
A. Elduque and A. Labra, [*On nilpotent evolution algebras*]{}, Linear Algebra Appl. [**505**]{} (2016), 11-31.
O. J. Falcón, R. M. Falcón and J. Núñez, [*Classification of asexual diploid organisms by means of strongly isotopic evolution algebras defined over any field*]{}, J. Algebra [**472**]{} (2017), 573–593.
S. Karlin and H. M. Taylor, [*An Introduction to Stochastic Modeling*]{}, 3rd ed., Academic Press, 1998.
J. Núñez, M. L. Rodríguez-Arévalo and M. T. Villar, [*Certain particular families of graphicable algebras*]{}, Appl. Math. Comput. [**246**]{} (2014), 416–425.

J. Núñez, M. L. Rodríguez-Arévalo and M. T. Villar, [*Mathematical tools for the future: Graph Theory and graphicable algebras*]{}, Appl. Math. Comput. [**219**]{} (2013), 6113–6125.
S. M. Ross, [*Introduction to Probability Models*]{}, 10th Ed., Academic Press, 2010.
J. P. Tian, [*Evolution algebras and their applications*]{}, Springer-Verlag Berlin Heidelberg, 2008.
J. P. Tian, [*Invitation to research of new mathematics from biology: evolution algebras*]{}, Topics in functional analysis and algebra, Contemp. Math. 672, 257-272, Amer. Math. Soc., Providence, RI, 2016.
---
abstract: 'We investigate solutions of the Einstein field equations for the non-static spherically symmetric perfect fluid case using different equations of state. The properties of exact spherically symmetric perfect fluid solutions which contain shear are obtained. We obtain three different solutions; of these, one turns out to be an incoherent dust solution and the other two are stiff matter solutions.'
author:
- |
M. Sharif [^1] and T. Iqbal\
Department of Mathematics, University of the Punjab,\
Quaid-e-Azam Campus Lahore-54590, PAKISTAN.
title: '**Non-Static Spherically Symmetric Perfect Fluid Solutions**'
---
**INTRODUCTION**
================
There is no shortage of exact solutions of the Einstein field equations (EFEs). However, because General Relativity is highly non-linear, it is not always easy to understand what qualitative features solutions might possess. Several authors have investigated spherically symmetric perfect fluid solutions with shear \[1-5\]. Nearly all of these solutions have been obtained by imposing symmetry conditions. It is known that non-linear partial differential equations admit large classes of solutions, many of which are unphysical.
EFEs for a static spherically symmetric distribution of perfect fluid have been investigated by many authors using different approaches \[6\]. One approach is to prescribe an equation of state, $\rho = \rho(p)$, which relates the energy density $\rho$ and the isotropic pressure $p$. In this paper we extend this idea to solving the EFEs for non-static spherically symmetric spacetimes. We examine systematically the field equations for non-static spherically symmetric perfect fluid solutions and obtain three different solutions with non-zero shear. Since most of the models in the literature \[6\] for spherically symmetric spacetimes are shear-free, it is of interest to study non-static solutions which contain shear.
The plan of the paper is as follows. In section two we write down the field equations. In section three we attempt all the possible solutions in three classes using different equations of state. In section four we evaluate the kinematic quantities for the solutions obtained. Finally, in the last section, we conclude our discussion.
**Field Equations**
===================
The non-static spherically symmetric metric has the form \[5\]
$$ds^2=e^{2\nu (r,t)}dt^2-e^{2\lambda (r,t)}dr^2-R^2(r,t)d\Omega ^2,$$
where $d\Omega ^2=d\theta ^2+\sin ^2\theta \,d\varphi ^2$.
For a perfect fluid distribution, the energy-momentum tensor is given by $$T_{ab}=(\rho+p)u_a u_b -pg_{ab},\quad u_a u^a=1, \quad a,b=0,1,2,3,$$ where $\rho$ and $p$ are the energy density and the pressure of the fluid respectively. The four-velocity of the fluid has the form $u^a
=(e^{-\nu(r,t)},0,0,0)$. The field equations for the metric (1) can be written down \[6\] $$\kappa \rho =\frac{1}{R^2}-\frac{2}{R}e^{-2\lambda }\left(R''-R'\lambda'+\frac{R'^2}{2R}\right)+\frac{2}{R}e^{-2\nu }\left(\dot{R}\dot{\lambda}+\frac{\dot{R}^2}{2R}\right),$$ $$\kappa p=-\frac{1}{R^2}+\frac{2}{R}e^{-2\lambda }\left(R'\nu'+\frac{R'^2}{2R}\right)-\frac{2}{R}e^{-2\nu }\left(\ddot{R}-\dot{R}\dot{\nu}+\frac{\dot{R}^2}{2R}\right),$$ $$\kappa pR=e^{-2\lambda }\left((\nu''+\nu'^2-\nu'\lambda')R+R''+R'\nu'-R'\lambda'\right)-e^{-2\nu }\left((\ddot{\lambda}+\dot{\lambda}^2-\dot{\lambda}\dot{\nu})R+\ddot{R}+\dot{R}\dot{\lambda}-\dot{R}\dot{\nu}\right),$$

$$\dot{R}'-\dot{R}\nu'-R'\dot{\lambda}=0,$$
where the dot denotes partial derivative with respect to time ‘$t$’ and the prime indicates partial derivative with respect to the coordinate ‘$r$’. The spatial coordinate ‘$r$’ refers to the comoving radius and ‘$\kappa $’ is the gravitational constant. The consequences of the energy-momentum conservation $T_{;b}^{ab}=0$ are the relations $$p'=-(\rho +p)\nu',\quad\dot{\rho}=-(\rho +p)\left(\dot{\lambda}+2\frac{\dot{R}}{R}\right).$$
We now consider the equation of state $$p=(\gamma -1)\rho ,\quad\rho +p=\rho \gamma , \quad 1\leq \gamma \leq 2,$$ where $\gamma$ is a constant. (The limits on $\gamma$ result from the requirement that the stresses be pressures rather than tensions and that the speed of sound in the fluid be less than the speed of light in vacuum). For $\gamma =1$, the pressure vanishes, so that the equation of state is that of incoherent dust. For $\gamma =\frac 43$, the equation of state is that of a photon gas, or a gas of non-interacting relativistic particles. For $\gamma =2$, the equation of state reduces to the stiff matter case.
**Non-Static Spherically Symmetric Solutions**
==============================================
To simplify the field equations we solve them for some special cases:
**[THE CASE]{} R=R(t), $\lambda =\lambda$(t)**
----------------------------------------------
This class is identical with the well-known Kantowski–Sachs class of cosmological models \[7-9\]. We can choose $R=t$ without loss of generality, use the equation of state given by Eq.(8), and attempt to find all possible solutions.
[**When** $\gamma =1$]{}
This gives $p=0$, and EFEs (3)-(6) reduce to $$\kappa \rho =\frac 1{t^2}+\frac 2te^{-2\nu }(\stackrel{.}{\lambda }+\frac
1{2t}),$$ $$0=2t\stackrel{.}{\nu }e^{-2\nu }-1-e^{-2\nu },$$
$$0=-e^{-2\nu }[(\stackrel{..}{\lambda }+\stackrel{.}{\lambda }^2-\stackrel{.}{%
\lambda }\stackrel{.}{\nu })t+\stackrel{.}{\lambda }-\stackrel{.}{\nu }].$$
Eq.(10) can easily be solved, which gives $$\,\nu =\frac 12\ln \left| \frac t{c-t}\right|,$$ where $c$ is an arbitrary constant. For this value of $\nu$, Eq.(9) gives $$\,\stackrel{.}{\lambda}=\frac{\rho \kappa t^3-c}{2t(c-t)}.$$ Substituting the values of $\stackrel{..}{\lambda},
\stackrel{.}{\lambda}^2, \stackrel{.}{\lambda}, \stackrel{.}{\nu}$ in Eq.(11), we have $$\stackrel{.}{\rho}+\frac{3c-4t}{2t(c-t)}\rho =\frac{t^2\kappa }{2(t-c)}
\rho ^2.$$ Solving this, we get
$$\rho =\left[ \kappa \{(\frac t{c-t})^{\frac 12}-\sin^{-1}(\frac tc)^{\frac 12}\}%
\sqrt{ct^3-t^4}^{}+c_1\sqrt{ct^3-t^4}\right] ^{-1},$$
where $c_1$ is an integration constant. By substituting this value of $\rho$ in $\stackrel{.}{\lambda}$ and integrating, $\lambda$ becomes $$\lambda=\ln\left[c_2\{1-(\frac{c-t}{t})^\frac 12 \sin^{-1}(\frac tc)^\frac 12+(\frac
{c-t}{t})^\frac 12 \frac{c_1}{\kappa}\}\right],$$ where $c_2$ is an integration constant. The energy-momentum conservation relations (7) are also satisfied by this solution.
The resulting spacetime is $$ds^2=\frac t{c-t}dt^2-\left[c_2\{1-(\frac{c-t}{t})^\frac 12 \sin^{-1}(\frac tc)^\frac 12+(\frac
{c-t}{t})^\frac 12 \frac{c_1}{\kappa}\}\right]^2dr^2-t^2d\Omega ^2.$$
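As a quick consistency check, the $\nu$ obtained above can be substituted back into Eq. (10); a sketch using Python's `sympy` (variable names are ours, taking the branch $0<t<c$):

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)

# nu from the dust solution, on the branch 0 < t < c
nu = sp.Rational(1, 2) * sp.log(t / (c - t))

# Eq. (10): 0 = 2 t nu_dot e^{-2 nu} - 1 - e^{-2 nu}
eq10 = 2 * t * sp.diff(nu, t) * sp.exp(-2 * nu) - 1 - sp.exp(-2 * nu)
print(sp.simplify(eq10))  # -> 0
```

The residual simplifies to $c/t - 1 - (c-t)/t = 0$, so Eq. (10) is satisfied identically.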
If we take the special case in which $\lambda = \mathrm{constant}$ and $\gamma =2$ (i.e., $\rho =p$), then EFEs (3)-(6) give $$\nu =\ln \left| \frac{\alpha^{2}t^{2}}{1-\alpha^{2}t^{2}}\right| ^{%
\frac{1}{2}},$$ where $\alpha$ is an arbitrary constant. The energy density and the pressure can be evaluated as $$\rho=p=\frac{1}{\kappa \alpha^2t^4}.$$ The corresponding metric will be
$$ds^2=\frac{\alpha^2t^2}{1-\alpha^2t^2}dt^2-dr^2-t^2d\Omega ^2.$$
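This stiff-matter solution can be checked against the conservation relations (7); a `sympy` sketch (variable names are ours):

```python
import sympy as sp

t, alpha, kappa = sp.symbols('t alpha kappa', positive=True)

# Stiff-matter solution with R = t and lambda = const:
rho = 1 / (kappa * alpha**2 * t**4)
p = rho                      # gamma = 2, stiff matter
R = t
lam_dot = 0                  # lambda is constant

# Second relation of Eq. (7): rho_dot = -(rho + p)(lambda_dot + 2 R_dot/R)
lhs = sp.diff(rho, t)
rhs = -(rho + p) * (lam_dot + 2 * sp.diff(R, t) / R)
print(sp.simplify(lhs - rhs))  # -> 0
```

The first relation of Eq. (7), $p' = -(\rho+p)\nu'$, holds trivially here since both $p$ and $\nu$ depend on $t$ only.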
**[THE CASE]{} R=R(t), $\lambda =\lambda $ (r,t)**
--------------------------------------------------
This class of solutions was examined by Korkina and Martinenko \[10,11\]. In this case the EFEs are the same as Eqs.(3-6), except that now $\lambda
=\lambda (r,t).$ We solve this system of partial differential equations using Eq.(8).
[**When** $\gamma =2$]{}
In this case $p=\rho$, and EFEs (3)-(6) give
$$\rho =\frac 1{\kappa t^2}[y(1+\frac{2\stackrel{.}{z}}z t)+1],$$
$$p=-\frac 1{\kappa t^2}[\stackrel{.}{y}t+1+y],$$
$$p=-\frac 1{\kappa t^2}[y\frac{\stackrel{..}{z}}{z}t^2+\stackrel{.}{y}\frac{
\stackrel{.}{z}}{2z}t^2+y\frac{\stackrel{.}{z}}{z}t+\frac{\stackrel{.}{y}}{2}t],$$
where $y=e^{-2\nu (t)},\quad z =e^{\lambda (r,t)}$. From Eqs.(18) and (19), we have $$\stackrel{..}{z}+\stackrel{.}{z}[\frac{\stackrel{.}{y}}{2y}%
+\frac 1t]-z[\frac{\stackrel{.}{y}}{2yt}+\frac{y+1}{yt^2}]=0.$$ This second order non-linear partial differential equation can be solved using Herlt’s method \[5\] by choosing $$y(t)=e^{-2\nu }=\frac 1{n^2-1}+\beta t^{-2(n+1)}, \quad n^2\neq 1$$ where $\beta$ is an arbitrary constant. Eq.(20) then becomes $$\stackrel{..}{z}+A\stackrel{.}{z}-Bz =0,$$ where
$$A=\frac{t^{-1}-n(n^2-1)\beta t^{-2n-3}}{1+\beta (n^2-1)t^{-2n-2}}\,,$$
$$B=\frac{n^2t^{-2}-n(n^2-1)\beta t^{-2n-4}}{1+\beta (n^2-1)t^{-2n-2}}$$
Eq.(22) has the special solution $z_s=t^n.$ Substituting $$z=C(r,t)\,t^n$$ into Eq.(22), we obtain
$$\stackrel{..}{C}+[A+2nt^{-1}]\stackrel{.}{C}=0,$$
This can easily be solved, and the general solution becomes
$$z(r,t)=t^n\{\beta_1(r)\int_{\beta_2(r)}^t\frac{t^{^{\prime }-n}}{\sqrt{%
t^{^{\prime }2n+2}+\beta (n^2-1)}}dt^{^{\prime }}\},$$
where $\beta_1(r)\,$ and $\beta_2(r)$ are arbitrary functions of the variable $r.$ The energy density $\rho $ and pressure $p$ can now be computed easily using Eqs.(17) and (18).
The corresponding metric will become
$$ds^2=[y(t)]^{-1}dt^2-z^2(r,t)^{}dr^2-t^2d\Omega ^2,$$
where $y(t)$ and $z \left( r,t\right) \,$ are given by Eqs.(21) and (27) respectively. It is to be noticed that this solution corresponds to the solution obtained by Herlt \[5\].
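That $z_s=t^n$ indeed solves Eq. (22) with the coefficients $A$ and $B$ above can be verified symbolically; a `sympy` sketch (checked at exact rational points, since the identity holds for every $n$):

```python
import sympy as sp

t, beta = sp.symbols('t beta', positive=True)
n = sp.symbols('n')

# Coefficients A, B of Eq. (22), with common denominator D
D = 1 + beta * (n**2 - 1) * t**(-2*n - 2)
A = (1/t - n * (n**2 - 1) * beta * t**(-2*n - 3)) / D
B = (n**2 / t**2 - n * (n**2 - 1) * beta * t**(-2*n - 4)) / D

z = t**n                                                 # candidate z_s
residual = sp.diff(z, t, 2) + A * sp.diff(z, t) - B * z  # Eq. (22)

# exact rational spot checks
checks = [{n: 3, beta: sp.Rational(1, 2), t: sp.Rational(7, 5)},
          {n: 2, beta: sp.Rational(1, 3), t: sp.Rational(3, 2)}]
for s in checks:
    print(sp.simplify(residual.subs(s)))  # -> 0, 0
```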
THE GENERAL CASE R=R(r,t)
-------------------------
To solve the general case we take the following two assumptions: $$(i)\nu =0,\lambda =\lambda (r,t);\quad (ii)\nu =\nu
(r,t),\lambda =0.$$
**(i) When** $\nu =0,\lambda =\lambda (r,t)$
EFE (6) implies that $\lambda =\ln \left| fR^{^{\prime }}\right|,$ where $f(r)$ is an arbitrary function of $r$ and $R$ is an arbitrary function of coordinates $r$ and $t.$ Also $R^{\prime }\neq 0,$ which implies that $R\neq R(t).$
Two cases arise, i.e., either $R=R(r)$ or $R=R(r,t).$ For $R=R(r),$ we obtain $\rho$ and $p$ by substituting $\lambda$ into Eqs.(3) and (4), respectively: $$\rho =\frac 1\kappa (\frac 1{R^2}-\frac 1{f^2R^2}+\frac{2f^{^{\prime }}}{%
R^{}R^{^{\prime }}f^3}),$$ $$p=\frac 1\kappa (-\frac 1{R^2}+\frac 1{f^2R^2}).$$ But Eq.(5) gives $$p=-\frac{f^{^{\prime }}}{\kappa RR^{^{\prime }}f^3}.$$ By comparing the two values of $p,$ we obtain $$R=\frac{\sqrt{f^2-1}}{lf},$$ where $l$ is an integration constant. The resulting metric becomes $$ds^2=dt^2-\frac{f'^2}{l^2f^2(f^2-1)}dr^2-\frac{f^2-1}{l^2f^2}d\Omega^2.$$ This turns out to be a class of spherically symmetric static spacetimes. For $f=\frac{1}{\sqrt{1-r^2}}$ and $l=1$, it reduces to the Einstein metric. From the above equations we obtain $\rho=\frac{3l^2}{\kappa},\quad
p=-\frac{l^2}{\kappa}$ which implies that $\rho +3p=0$.
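The reduction to the Einstein metric for $f=\frac{1}{\sqrt{1-r^2}}$ and $l=1$ is easy to verify; a `sympy` sketch (variable names are ours; we compare $R^2$ to avoid sign ambiguities in the square roots):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
l = 1
f = 1 / sp.sqrt(1 - r**2)   # the choice that reduces to the Einstein metric

R2 = (f**2 - 1) / (l**2 * f**2)                       # square of R above
g_rr = sp.diff(f, r)**2 / (l**2 * f**2 * (f**2 - 1))  # radial metric coefficient

print(sp.simplify(R2))                    # -> r**2
print(sp.simplify(g_rr - 1/(1 - r**2)))  # -> 0
```

So the metric becomes $ds^2 = dt^2 - \frac{dr^2}{1-r^2} - r^2 d\Omega^2$, the Einstein static spacetime.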
**(ii) When** $\nu =\nu (r,t), \lambda =0$, EFE (6) reduces to $\nu =\ln \left| g\stackrel{.}{R}\right|,$ where $g(t)$ is an arbitrary function of $t$ and $R$ is an arbitrary function of the coordinates $r$ and $t$. Also $\stackrel{.}{R}\neq 0,$ which implies that $R\neq R(r).$ Thus either $R=R(t)$ or $R=R(r,t).$ For $R=R(t),$ we obtain a spacetime similar to Eq.(16). The quantities $\rho$ and $p$ also turn out to be the same.
For $R=R(r,t)$, the solutions remain to be investigated.
**[Kinematics of the Velocity Field]{}**
=======================================
The spherically symmetric solutions can be classified according to their kinematical properties \[6\]. The rotation is given by
$$\omega_{ab}=u_{[a;b]}+\stackrel{.}{u_{[a}}u_{b]}.$$
The acceleration can be found from
$$\stackrel{.}{u_a}=u_{a;b}u^b.$$
For the expansion we have
$$\Theta=u^a_{;a}.$$
The components of the shear-tensor are given by
$$\sigma_{ab}=u_{(a;b)}+\stackrel{.}{u}_{(a}u_{b)}-\frac 13 \Theta h_{ab},$$
where $h_{ab}=g_{ab}-u_a u_b$ is the projection operator. The square brackets denote antisymmetrization and the round brackets indicate symmetrization. The shear invariant is given as $\sigma_{ab}\sigma^{ab}$.
Now we find all the above quantities for the solutions obtained. The rotation and the acceleration are zero for all the solutions. The expansion, for the first solution, is
$$\Theta=(\frac{c-t}{t})^{\frac 12}[\frac{\rho \kappa t^3-c}{2t(c-t)}+\frac 2t].$$
The components of the shear-tensor are given by
$$\sigma_{11}=\frac 23(\frac{c-t}{t})^{\frac 12}[\frac 1t-\frac{\rho \kappa t^3-c}{2t(c-t)}]
\left[c_2\{1-(\frac{c-t}{t})^\frac 12 \sin^{-1}(\frac tc)^\frac 12+(\frac{c-t}{t})^\frac 12
\frac{c_1}{\kappa}\}\right]^2,$$
$$\sigma_{22}=\frac 13 t^2(\frac{c-t}{t})^{\frac 12}[\frac{\rho \kappa t^3-c}{2t(c-t)}-\frac 1t],$$
$$\sigma_{33}=\sin^2\theta \sigma_{22}.$$
For the second solution, the expansion factor is
$$\Theta=\frac{2(1-\alpha^2t^2)^{\frac 12}}{\alpha t^2}$$
The components of the shear-tensor are given by
$$\sigma_{11}=\frac{2(1-\alpha^2t^2)^{\frac 12}}{3\alpha t^2},$$
$$\sigma_{22}=-\frac{(1-\alpha^2t^2)^{\frac 12}}{3\alpha},$$
$$\sigma_{33}=\sin^2\theta \sigma_{22}.$$
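For the metric (1) with $u^a=(e^{-\nu},0,0,0)$, the expansion takes the general form $\Theta=e^{-\nu}(\stackrel{.}{\lambda}+2\stackrel{.}{R}/R)$, which is the structure seen in the first solution above. A `sympy` sketch confirms the expansion quoted for the second solution (compared exactly at a rational point inside the physical range $\alpha t<1$):

```python
import sympy as sp

t, alpha = sp.symbols('t alpha', positive=True)

# Second solution: R = t, lambda = const, e^{2 nu} = alpha^2 t^2/(1 - alpha^2 t^2)
R = t
lam_dot = 0
nu = sp.log(alpha**2 * t**2 / (1 - alpha**2 * t**2)) / 2

# General expansion Theta = e^{-nu} (lambda_dot + 2 R_dot / R)
Theta = sp.exp(-nu) * (lam_dot + 2 * sp.diff(R, t) / R)
target = 2 * sp.sqrt(1 - alpha**2 * t**2) / (alpha * t**2)

subs = {alpha: sp.Rational(1, 2), t: 1}   # alpha*t = 1/2 < 1
print(sp.simplify((Theta - target).subs(subs)))  # -> 0
```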
For the third solution, we have
$$\Theta=(y)^{\frac 12}(\frac {\stackrel{.}{z}}{2z}+\frac 2t)$$
The components of the shear-tensor are given by $$\sigma_{11}=\frac 23(\frac 1t-\frac {\stackrel{.}{z}}{2z})z\sqrt y,$$ $$\sigma_{22}=\frac 13 t^2(\frac {\stackrel{.}{z}}{2z}-\frac 1t)\sqrt y,$$
$$\sigma_{33}=\sin^2\theta \sigma_{22}.$$
The shear invariant turns out to be 3 for all the solutions.
The rate of change of expansion with respect to proper time is given by Raychaudhuri’s equation \[12\] $$\frac{d\Theta}{d\tau}=-\frac 13 \Theta^2-\sigma_{ab}\sigma^{ab}+\omega_{ab}\omega^{ab}-R_{ab}u^au^b.$$ We evaluate it only for the second solution, as it is the simplest. For this solution it becomes $$\frac{d\Theta}{d\tau}=-\frac{2}{\alpha^2}(\frac{1+\alpha^2t^2}{t^4})$$
**SUMMARY**
===========
This section presents a brief summary of the results obtained in the previous section, together with comments on possible future directions of development.
We have partially solved the EFEs for the classes of non-static spherically symmetric spacetimes using different equations of state for a perfect fluid. These have been classified into three categories. The summary of each case is given below:
**1.** $\mathbf{R=R(t),}$** **$\mathbf{\lambda =\lambda (t)}$
In this case, the dust solution gives
$$ds^2=\frac t{c-t}dt^2-\left[c_2\{1-(\frac{c-t}{t})^\frac 12 \sin^{-1}(\frac tc)^\frac 12+(\frac
{c-t}{t})^\frac 12 \frac{c_1}{\kappa}\}\right]^2dr^2-t^2d\Omega ^2.$$
The stiff matter solution becomes
$$ds^2=\frac{\alpha^2t^2}{1-\alpha^2t^2}dt^2-dr^2-t^2d\Omega ^2.$$
**2.** $\mathbf{R=R(t),}$** **$\mathbf{\lambda =\lambda (r,t)}$
The stiff matter solution gives
$$ds^2=[y(t)]^{-1}dt^2-z^2(r,t)dr^2-t^2d\Omega ^2,$$
where $y$ and $z$ are given by Eqs.(21) and (27), respectively.
**3.** $\mathbf{R=R(r,t)}$
In this case we obtain two solutions. The first solution, in fact, turns out to be a class of spherically symmetric static spacetimes. The other (stiff matter) solution is exactly the same as Eq.(49).
The non-static spherically symmetric solutions with the above equations of state split into three classes. In the first case we obtain two different solutions: one is the dust solution and the other a stiff matter solution. The pressure and the energy density are positive everywhere. In the second case we have a stiff matter solution only; however, it is much more difficult to interpret. In the third case we obtain two solutions. The first becomes a class of spherically symmetric static spacetimes depending upon the arbitrary function $f$. If we choose $f=\frac{1}{\sqrt{1-r^2}}$, it reduces to the Einstein spacetime. The energy density is positive everywhere while the pressure is negative. The other (stiff matter) solution coincides with one of the solutions in the first case. It is interesting to note that all the non-static solutions found here contain shear; shearing solutions of this kind are rather rare in the literature.
Finally, we discuss the behaviour of the rate of change of expansion for the solution given by Eq.(49). We see from Eq.(47) that the rate is never positive but always negative. As the time $t$ tends to zero, the rate approaches $-\infty$, and as $t$ goes to $\infty$, the rate tends to zero. This shows that the spacetime is contracting or collapsing and the flux gets focused along the proper time. The other solutions can be discussed similarly.
We have tried to obtain non-static spherically symmetric solutions for some particular classes, and partial solutions have been found in these three classes; in total, three solutions have been obtained. To obtain new solutions, one has to solve the remaining cases of these classes. One can then attempt the general non-static spherically symmetric solution.
[**Acknowledgment**]{}
One of the authors (MS) would like to thank Prof. Chul H. Lee for the hospitality at the Department of Physics and the Korea Science and Engineering Foundation (KOSEF) for the postdoc fellowship at Hanyang University Seoul, KOREA, where some of this work was completed.
[**References**]{}
[\[1\]]{} Knutsen, H.: [*Gen. Rel. Grav.*]{} [**24**]{}(1992)1297.
[\[2\]]{} Knutsen, H.: [*Class. Quant. Grav.*]{} [**11**]{}(1995)2817.
[\[3\]]{} Kitamura, S.: [*Class. Quant. Grav.*]{} [**11**]{}(1994)195.
[\[4\]]{} Kitamura, S.: [*Class. Quant. Grav.*]{} [**12**]{}(1995)827.
[\[5\]]{} Herlt, Eduard: [*Gen. Rel. Grav.*]{} [**28**]{}(1996)919.
[\[6\]]{} Kramer, D. et al: [*Exact Solutions of Einstein’s Field Equations* ]{}(Cambridge University Press 1980).
[\[7\]]{} Kompaneets, A.S. and Chernov, A.S.: [*ZhETF*]{} [**47**]{}(1964)1939, [*\[Sov. Phys. JETP*]{} [**20**]{}(1965)1303\].
[\[8\]]{} Kantowski, R. and Sachs, R. K.: [*J. Math. Phys*]{}. [**7**]{}(1966)443.
[\[9\]]{} Krasiński, A.: [*Physics in An Inhomogeneous Universe*]{} (Cambridge University Press 1996).
[\[10\]]{} Korkina, M. P., Martinenko, V.G.: [*Ukr. Fiz. Zh.*]{} [**20**]{}(1975)626.
[\[11\]]{} Korkina, M. P., Martinenko, V.G.: [*Ukr. Fiz. Zh.*]{} [**20**]{}(1975)2044.
[\[12\]]{} Wald, R.M.: [*General Relativity*]{} (University of Chicago Press, Chicago, 1984).
[^1]: e-mail: hasharif@yahoo.com
---
abstract: '**Abstract.** A celebrated result of Sch[ü]{}tzenberger says that a language is star-free if and only if it is recognized by a finite aperiodic monoid. We give a new proof for this theorem using local divisors.'
author:
- |
Manfred Kufleitner\
[FMI, University of Stuttgart, Germany[^1]]{}\
[`kufleitner@fmi.uni-stuttgart.de`]{}
title: |
Star-Free Languages\
and Local Divisors
---
Introduction
============
The class of regular languages is built from the finite languages using union, concatenation, and Kleene star. Kleene showed that a language over finite words is definable by a regular expression if and only if it is accepted by some finite automaton [@kle56]. In particular, regular languages are closed under complementation. It is easy to see that a language is accepted by a finite automaton if and only if it is recognized by a finite monoid. As an algebraic counterpart for the minimal automaton of a language, Myhill introduced the *syntactic monoid*, [[*cf.*]{}]{} [@rs59]. An extended regular expression is a term over finite languages using the operations union, concatenation, complementation, and Kleene star. By Kleene’s Theorem, a language is regular if and only if it is definable using an extended regular expression. It is natural to ask whether some given regular language can be defined by an extended regular expression with at most $n$ nested iterations of the Kleene star operation—in which case one says that the language has generalized star height $n$. The resulting decision problem is called the *generalized star height problem*. Generalized star height zero means that no Kleene star operations are allowed. Consequently, languages with generalized star height zero are called *star-free*. Schützenberger showed that a language is star-free if and only if its syntactic monoid is aperiodic [@sch65sf]. Since aperiodicity of finite monoids is decidable, this yields a decision procedure for generalized star height zero. To date, it is unknown whether or not all regular languages have generalized star height one.
In this paper, we give a proof of Schützenberger’s result based on *local divisors*. In commutative algebra, local divisors were introduced by Meyberg in 1972, see [@FeTo02; @Mey72]. In finite semigroup theory and formal languages, local divisors were first used by Diekert and Gastin for showing that pure future local temporal logic is expressively complete for free partially commutative monoids [@dg06IC].
This is a prior version of an invited contribution at the 16th International Workshop on Descriptional Complexity of Formal Systems (DCFS 2014) in Turku, Finland [@kuf14dcfs].[^2]
Preliminaries {#sec:prem}
=============
The set of finite words over an alphabet $A$ is $A^*$. It is the free monoid generated by $A$. The empty word is denoted by ${\varepsilon}$. The *length ${\left|\mathinner{u}\right|}$* of a word $u = a_1 \cdots a_n$ with $a_i \in A$ is $n$, and the *alphabet* ${\mathrm{alph}}(u)$ of $u$ is ${\left\{\mathinner{a_1, \ldots, a_n}\right\}}
\subseteq A$. A language is a subset of $A^*$. The concatenation of two languages $K,K' \subseteq A^*$ is $K \cdot K' =
{\left\{uv\mathrel{\left|\vphantom{uv}\vphantom{u \in K, v \in K'}\right.}u \in K, v \in K'\right\}}$, and the set difference of $K$ by $K'$ is written as $K \setminus K'$. Let $A$ be a finite alphabet. The class of *star-free languages* ${\mathrm{SF}}(A^*)$ over the alphabet $A$ is defined as follows:
- $A^* \in {\mathrm{SF}}(A^*)$ and ${\left\{a\right\}} \in {\mathrm{SF}}(A^*)$ for every $a
\in A$.
- If $K,K' \in {\mathrm{SF}}(A^*)$, then each of $K \cup K'$, $K \setminus K'$, and $K\cdot K'$ is in ${\mathrm{SF}}(A^*)$.
By Kleene’s Theorem, a language is regular if and only if it can be recognized by a deterministic finite automaton [@kle56]. In particular, regular languages are closed under complementation and thus, every star-free language is regular.
\[lem:subsetSF\] If $B \subseteq A$, then ${\mathrm{SF}}(B^*) \subseteq {\mathrm{SF}}(A^*)$.
It suffices to show $B^* \in {\mathrm{SF}}(A^*)$. We have $ B^* = A^* \,
\setminus \, \bigcup_{b \not\in B} A^* b A^*$.
A monoid $M$ is *aperiodic* if for every $x \in M$ there exists a number $n \in {\mathbb{N}}$ such that $x^n = x^{n+1}$.
\[lem:AP\] Let $M$ be aperiodic and $x,y \in M$. Then $xy = 1$ if and only if $x = 1$ and $y=1$.
If $xy = 1$, then $x^n y^n = x^{n-1}(xy)y^{n-1} = x^{n-1} y^{n-1} = \cdots = 1$ for all $n$. Choosing $n$ with $x^n = x^{n+1}$ yields $1 = x^n y^n = x^{n+1} y^n = x \cdot x^n y^n = x \cdot 1 = x$, and hence also $y = xy = 1$.
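Aperiodicity is straightforward to test mechanically: an element of a finite monoid has at most $|M|$ distinct powers, so it suffices to look for two equal consecutive powers among the first $|M|+1$. A Python sketch (the helper name and example monoids are ours; the monoid is given as a multiplication table):

```python
def is_aperiodic(elements, mul):
    """True iff every x satisfies x^n = x^(n+1) for some n."""
    for x in elements:
        powers = [x]
        for _ in range(len(elements)):
            powers.append(mul[(powers[-1], x)])
        if not any(powers[i] == powers[i + 1] for i in range(len(powers) - 1)):
            return False
    return True

# The two-element group Z/2 is not aperiodic ...
z2 = {('1', '1'): '1', ('1', 'g'): 'g', ('g', '1'): 'g', ('g', 'g'): '1'}
print(is_aperiodic(['1', 'g'], z2))   # False

# ... while the "flip-flop" monoid (every element idempotent) is.
ff = {(x, y): (y if y != '1' else x) for x in '1sr' for y in '1sr'}
print(is_aperiodic(list('1sr'), ff))  # True
```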
A monoid $M$ *recognizes* a language $L \subseteq
A^*$ if there exists a homomorphism $\varphi : A^* \to M$ with $\varphi^{-1}\big(\varphi(L)\big) = L$. A consequence of Kleene’s Theorem is that a language is regular if and only if it is recognizable by a finite monoid, see [[*e.g.*]{}]{} [@pin86]. The class of *aperiodic languages* ${\mathrm{AP}}(A^*)$ contains all languages $L
\subseteq A^*$ which are recognized by some finite aperiodic monoid.
The *syntactic congruence $\equiv_L$* of a language $L \subseteq
A^*$ is defined as follows. For $u,v \in A^*$ we set $u \equiv_L v$ if for all $p,q \in A^*$ we have $puq \in L {\mathrel{\Leftrightarrow}}pvq \in L$. The *syntactic monoid* ${\mathrm{Synt}}(L)$ of a language $L \subseteq A^*$ is the quotient $A^* / \equiv_L$ consisting of the equivalence classes modulo $\equiv_L$. The *syntactic homomorphism* $\mu_L
: A^* \to {\mathrm{Synt}}(L)$ with $\mu_L(u) = {\left\{v\mathrel{\left|\vphantom{v}\vphantom{u \equiv_L v}\right.}u \equiv_L v\right\}}$ satisfies $\mu_L^{-1}\big(\mu_L(L)\big) = L$. In particular, ${\mathrm{Synt}}(L)$ recognizes $L$ and it is the unique minimal monoid with this property, see [[*e.g.*]{}]{} [@pin86].
Let $M$ be a monoid and $c \in M$. We introduce a new multiplication $\circ$ on $cM
\cap Mc$. For $xc,cy \in cM \cap Mc$ we let $$xc \circ cy = xcy.$$ This operation is well-defined since $x'c = xc $ and $cy' = cy$ implies $x' c y' = xc y' = x cy$. For $cx, cy \in Mc$ we have $cx
\circ cy = cxy \in Mc$. Thus, $\circ$ is associative and $c$ is the neutral element of the monoid $M_c = (cM \cap Mc, {\circ}, c)$. Moreover, $M'= {\left\{x\in M\mathrel{\left|\vphantom{x\in M}\vphantom{cx \in Mc}\right.}cx \in Mc\right\}}$ is a submonoid of $M$ such that $M' \to
cM \cap Mc$ with $x \mapsto cx$ becomes a homomorphism. It is surjective and hence, $M_c$ is a divisor of $(M, \cdot, 1)$ called the *local divisor of $M$ at $c$*. Note that if $c^2 = c$, then $M_c$ is just the local monoid $(cMc,\cdot,c)$ at the idempotent $c$.
\[lem:loc\] If $M$ is a finite aperiodic monoid and $1 \neq c \in M$, then $M_c$ is aperiodic and ${\left|\mathinner{M_c}\right|} < {\left|\mathinner{M}\right|}$.
If $x^n = x^{n+1}$ in $M$ for $cx \in Mc$, then $(cx)^n = cx^n =
cx^{n+1} = (cx)^{n+1}$ where the first and the last power is in $M_c$. This shows that $M_c$ is aperiodic. By [Lemma \[lem:AP\]]{} we have $1 \not\in cM$ and thus $1 \in M \setminus M_c$.
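The construction is compact enough to sketch in Python (helper names are ours). Given a multiplication table and $c\in M$, we form $cM\cap Mc$ with the product $xc\circ cy = xcy$; the example monoid $\{1,a,0\}$ with $a^2=0$ illustrates [Lemma \[lem:loc\]]{}:

```python
def local_divisor(elements, mul, c):
    """Local divisor M_c = (cM ∩ Mc, ∘, c) with xc ∘ cy = xcy."""
    core = {mul[(c, x)] for x in elements} & {mul[(x, c)] for x in elements}

    def circ(a, b):
        # write b = c*y for some y; then a ∘ b = a*y (well-defined, see text)
        y = next(z for z in elements if mul[(c, z)] == b)
        return mul[(a, y)]

    return core, circ

# M = {1, a, 0} with a*a = 0 and 0 absorbing: aperiodic, and c = a != 1.
M = ['1', 'a', '0']
mul = {('1', x): x for x in M}
mul.update({(x, '1'): x for x in M})
mul.update({('a', 'a'): '0', ('a', '0'): '0', ('0', 'a'): '0', ('0', '0'): '0'})

core, circ = local_divisor(M, mul, 'a')
print(sorted(core))                                          # ['0', 'a']
print(all(circ('a', x) == x == circ(x, 'a') for x in core))  # True: c is neutral
```

Here $cM \cap Mc = \{a, 0\}$ with $c = a$ acting as the neutral element; in particular $1 \notin M_c$ and $|M_c| < |M|$, as the lemma asserts.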
Sch[ü]{}tzenberger’s Theorem on star-free languages
===================================================
The following proposition establishes the more difficult inclusion of Schützenberger’s result ${\mathrm{SF}}(A^*) = {\mathrm{AP}}(A^*)$. Its proof relies on local divisors.
\[prp:ap:sf\] Let $\varphi : A^* \to M$ be a homomorphism to a finite aperiodic monoid $M$. Then for all $p \in M$ we have $\varphi^{-1}(p) \in {\mathrm{SF}}(A^*)$.
We proceed by induction on $({\left|\mathinner{M}\right|},{\left|\mathinner{A}\right|})$ with lexicographic order. If $\varphi(A^*) = {\left\{1\right\}}$, then depending on $p$ we either have $\varphi^{-1}(p) = \emptyset$ or $\varphi^{-1}(p) = A^*$. In any case, $\varphi^{-1}(p)$ is in ${\mathrm{SF}}(A^*)$. Note that this includes both base cases $M = {\left\{1\right\}}$ and $A = \emptyset$. Let now $\varphi(A^*) \neq {\left\{1\right\}}$. Then there exists $c \in A$ with $\varphi(c) \neq 1$. We set $B = A \setminus {\left\{c\right\}}$ and we let $\varphi_c : B^* \to M$ be the restriction of $\varphi$ to $B^*$. We have $$\label{eq:1}
\varphi^{-1}(p) \;=\;
\varphi_c^{-1}(p) \,\cup \!\!\!\!
\bigcup_{\scriptsize\begin{array}{c}
p = p_1 p_2 p_3
\end{array}} \!\!\!\!
\varphi^{-1}_{c}(p_1) \cdot
\big(\varphi^{-1}(p_2) \cap
c \/ A^* \cap A^* \hspace*{-0.5pt} c \big) \cdot
\varphi^{-1}_{c}(p_3).$$ The inclusion from right to left is trivial. The other inclusion can be seen as follows: Every word $w$ with $\varphi(w) = p$ either does not contain the letter $c$ or we can factorize $w = w_1 w_2 w_3$ with $c \not\in {\mathrm{alph}}(w_1 w_3)$ and $w_2 \in cA^* \cap A^*c$, [[*i.e.*]{}]{}, we factorize $w$ at the first and the last occurrence of $c$. Equation (\[eq:1\]) is established by setting $p_i = \varphi(w_i)$. By induction on the size of the alphabet, we have $\varphi_c^{-1}(p_i) \in {\mathrm{SF}}(B^*)$, and thus $\varphi_c^{-1}(p_i) \in {\mathrm{SF}}(A^*)$ by [Lemma \[lem:subsetSF\]]{}.
Since ${\mathrm{SF}}(A^*)$ is closed under union and concatenation, it remains to show $\varphi^{-1}(p) \cap c\/A^* \cap A^* c \in {\mathrm{SF}}(A^*)$ for $p \in \varphi(c) M \cap M \varphi(c)$. Let $$T = \varphi_c(B^*).$$ The set $T$ is a submonoid of $M$. In the remainder of this proof, we will use $T$ as a finite alphabet. We define a substitution $$\begin{aligned}
{2}
\sigma : \ \,&&( B^*\hspace*{1pt} c )^* \ &\to \ T^* \\
&& v_1 c \cdots v_k c \ &\mapsto \ \varphi_c(v_1) \cdots \varphi_c(v_k)
\end{aligned}$$ for $v_i \in B^*$. In addition, we define a homomorphism $\psi : T^* \to M_c$ with $M_c = (\varphi(c) M \cap M \varphi(c), \circ, \varphi(c))$ by $$\begin{aligned}
{2}
\psi : \ && T^* &\to M_c \\
&& \varphi_c(v) &\mapsto \varphi(cvc)
\end{aligned}$$ for $\varphi_c(v) \in T$. Consider a word $w = v_1 c \cdots v_k c$ with $k \geq 0$ and $v_i \in
B^*$. Then $$\begin{aligned}[b]
\psi \bigl(\sigma(w) \bigr)
&= \psi\bigl( \varphi_c(v_1) \varphi_c(v_2) \cdots \varphi_c(v_k) \bigr) \\
&= \varphi(c v_1 c) \circ \varphi(c v_2 c) \circ
\cdots \circ \varphi(c v_k c) \\
&= \varphi(c v_1 c v_2 \cdots c v_k c)
= \varphi(cw).
\label{eq:important}
\end{aligned}$$ Thus, we have $cw \in \varphi^{-1}(p)$ if and only if $w \in \sigma^{-1} \bigl( \psi^{-1}(p) \bigr)$. This shows $\varphi^{-1}(p)
\cap c\/A^* \cap A^* c = c \cdot \sigma^{-1} \bigl( \psi^{-1}(p) \bigr)$ for every $p \in \varphi(c) M \cap M \varphi(c)$. In particular, it remains to show $\sigma^{-1} \bigl( \psi^{-1}(p) \bigr) \in {\mathrm{SF}}(A^*)$. By [Lemma \[lem:loc\]]{}, the monoid $M_c$ is aperiodic and ${\left|\mathinner{M_c}\right|} < {\left|\mathinner{M}\right|}$. Thus, by induction on the size of the monoid we have $\psi^{-1}(p) \in
{\mathrm{SF}}(T^*)$, and by induction on the size of the alphabet we have $\varphi_c^{-1}(t) \in {\mathrm{SF}}(B^*)
\subseteq {\mathrm{SF}}(A^*)$ for every $t \in T$. For $t \in T$ and $K,K' \in
{\mathrm{SF}}(T^*)$ we have $$\begin{aligned}
\sigma^{-1}(T^*) &= A^* c \cup {\left\{1\right\}} \\
\sigma^{-1}(t) &= \varphi_c^{-1}(t) \cdot c \\
\sigma^{-1}(K \cup K')
&= \sigma^{-1}(K) \cup \sigma^{-1}(K') \\
\sigma^{-1}(K \setminus K')
&= \sigma^{-1}(K) \setminus \sigma^{-1}(K') \\
\sigma^{-1}(K \cdot K') &= \sigma^{-1}(K) \cdot
\sigma^{-1}(K').
\end{aligned}$$ Only the last equality requires justification. The inclusion from right to left is trivial. For the other inclusion, suppose $w = v_1
c \cdots v_k c \in \sigma^{-1} (K \cdot K')$ for $k \geq 0$ and $v_i
\in B^*$. Then $\varphi_c(v_1) \cdots
\varphi_c(v_k) \in K \cdot K'$, and thus $\varphi_c(v_1) \cdots
\varphi_c(v_i) \in K$ and $\varphi_c(v_{i+1}) \cdots \varphi_c(v_k)
\in K'$ for some $i \geq 0$. It follows $v_1 c \cdots v_i c \in
\sigma^{-1}(K)$ and $v_{i+1} c \cdots v_k c \in \sigma^{-1}(K')$. This shows $w
\in \sigma^{-1}(K) \cdot \sigma^{-1}(K')$.
We conclude that $\sigma^{-1}(K) \in {\mathrm{SF}}(A^*)$ for every $K \in
{\mathrm{SF}}(T^*)$. In particular, we have $\sigma^{-1} \big(\psi^{-1}(p)\big) \in {\mathrm{SF}}(A^*)$.
A more algebraic viewpoint of the proof of [Proposition \[prp:ap:sf\]]{} is the following. The mapping $\sigma$ can be seen as a length-preserving homomorphism from a submonoid of $A^*$—freely generated by the infinite set $B^* \hspace*{1pt}c$—onto $T^*$; and this homomorphism is defined by $\sigma(vc) = \varphi_c(v)$ for $vc \in B^* \hspace*{0.5pt}
c$. The mapping $\tau : M\varphi(c) \cup {\left\{1\right\}}
\to M_c$ with $\tau(x) = \varphi(c) \cdot x$ defines a homomorphism. Now, by Equation (\[eq:important\]) the following diagram commutes:
$$\begin{array}{ccc}
(B^* \hspace*{0.5pt} c)^* & \stackrel{\sigma}{\longrightarrow} & T^* \\
{\scriptstyle \varphi}\downarrow & & \downarrow{\scriptstyle \psi} \\
M\varphi(c) \cup {\left\{1\right\}} & \stackrel{\tau}{\longrightarrow} & M_c
\end{array}$$
[$\Diamond$]{}
The following lemma gives the remaining inclusion of ${\mathrm{SF}}(A^*) = {\mathrm{AP}}(A^*)$. Its proof is standard; it is presented here only to keep this paper self-contained.
\[lem:sf:ap\] For every language $L \in {\mathrm{SF}}(A^*)$ there exists an integer $n(L)
\in {\mathbb{N}}$ such that for all words $p,q,u,v \in A^*$ we have $$p\, u^{n(L)}q \in L \ \Leftrightarrow \ p\, u^{n(L)+1}q \in L.$$
For the languages $A^*$ and ${\left\{a\right\}}$ with $a \in A$ we define $n(A^*) = 0$ and $n({\left\{a\right\}}) = 2$. Let now $K,K' \in {\mathrm{SF}}(A^*)$ such that $n(K)$ and $n(K')$ exist. We set $$\begin{gathered}
n(K \cup K') = n(K \setminus K') = \max \bigl( n(K), n(K') \bigr), \\
n(K \cdot K') = n(K) + n(K') + 1.
\end{gathered}$$ The correctness of the first two choices is straightforward. For the last equation, suppose $p\, u^{n(K)+n(K')+2}q \in K \cdot K'$. Then either $p\,
u^{n(K)+1} q' \in K$ for some prefix $q'$ of $u^{n(K')+1} q$ or $p'\, u^{n(K')+1} q \in K'$ for some suffix $p'$ of $p u^{n(K) +
1}$. By definition of $n(K)$ and $n(K')$ we have $p\, u^{n(K)} q' \in
K$ or $p'\, u^{n(K')} q \in K'$, respectively. Thus $p\,
u^{n(K)+n(K')+1}q \in K \cdot K'$. The other direction is similar: If $p\, u^{n(K)+n(K')+1}q \in K \cdot K'$, then $p\,
u^{n(K)+n(K')+2}q \in K \cdot K'$. This completes the proof.
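As a brute-force illustration of the lemma (a sanity check on short words, not a proof; the language and the value of $n$ are our choices): for the star-free language $L = A^* a b A^*$ over $A=\{a,b\}$, the displayed equivalence already holds with $n=2$, which is smaller than the recursive bound from the proof:

```python
from itertools import product

A = 'ab'
words = [''.join(w) for k in range(4) for w in product(A, repeat=k)]

def in_L(w):                 # L = A* a b A*  (words containing the factor ab)
    return 'ab' in w

n = 2                        # this n happens to work for this particular L
ok = all(in_L(p + u * n + q) == in_L(p + u * (n + 1) + q)
         for p in words for u in words for q in words)
print(ok)  # True
```

The check succeeds because every occurrence of the factor $ab$ in $p\,u^{n+1}q$ straddles a boundary type ($p|u$, $u|u$, or $u|q$) that is already present in $p\,u^n q$, and conversely.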
\[thm:schutz\] Let $A$ be a finite alphabet and let $L \subseteq A^*$. The following conditions are equivalent:
1. \[aaa:schutz\] $L$ is star-free.
2. \[bbb:schutz\] The syntactic monoid of $L$ is finite and aperiodic.
3. \[ccc:schutz\] $L$ is recognized by a finite aperiodic monoid.
“\[aaa:schutz\]$\;\Rightarrow\;$\[bbb:schutz\]”: Every language $L \in {\mathrm{SF}}(A^*)$ is regular. Thus ${\mathrm{Synt}}(L)$ is finite, [[*cf.*]{}]{} [@pin86]. By [Lemma \[lem:sf:ap\]]{}, we see that ${\mathrm{Synt}}(L)$ is aperiodic. The implication “\[bbb:schutz\]$\;\Rightarrow\;$\[ccc:schutz\]” is trivial. If $\varphi^{-1}\big(\varphi(L)\big) = L$, then we can write $L = \bigcup_{p \in \varphi(L)} \varphi^{-1}(p)$. Therefore, “\[ccc:schutz\]$\;\Rightarrow\;$\[aaa:schutz\]” follows by [Proposition \[prp:ap:sf\]]{}.
The syntactic monoid of a regular language (for instance, given by a nondeterministic automaton) is effectively computable. Hence, from the equivalence of conditions “\[aaa:schutz\]” and “\[bbb:schutz\]” in [Theorem \[thm:schutz\]]{} it follows that star-freeness is a decidable property of regular languages. The equivalence of “\[aaa:schutz\]” and “\[ccc:schutz\]” can be written as $${\mathrm{SF}}(A^*) = {\mathrm{AP}}(A^*).$$ The equivalence of “\[bbb:schutz\]” and “\[ccc:schutz\]” is rather trivial: The class of finite aperiodic monoids is closed under division, and the syntactic monoid of $L$ divides any monoid that recognizes $L$, see [[*e.g.*]{}]{} [@pin86].
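The decision procedure can be sketched end to end in Python (helper names are ours): compute the transition monoid of a DFA (for the minimal DFA this is the syntactic monoid) and test it for aperiodicity:

```python
def transition_monoid(num_states, generators):
    """Closure of the given transformations of {0,..,num_states-1} under
    composition, together with the identity; maps are stored as tuples."""
    ident = tuple(range(num_states))
    elems, todo = {ident}, [ident]
    while todo:
        f = todo.pop()
        for g in generators:
            h = tuple(g[f[q]] for q in range(num_states))  # f followed by g
            if h not in elems:
                elems.add(h)
                todo.append(h)
    return elems

def is_aperiodic(monoid):
    """Look for two equal consecutive powers among the first |M|+1."""
    for f in monoid:
        powers = [f]
        for _ in range(len(monoid)):
            powers.append(tuple(f[powers[-1][q]] for q in range(len(f))))
        if not any(powers[i] == powers[i + 1] for i in range(len(powers) - 1)):
            return False
    return True

# DFA for "contains the factor ab" (star-free): states 0, 1, 2 (2 = accepting sink)
t_a, t_b = (1, 1, 2), (0, 2, 2)
print(is_aperiodic(transition_monoid(3, [t_a, t_b])))   # True

# DFA for "even number of a's" (not star-free): the group Z/2 appears
t_a, t_b = (1, 0), (0, 1)
print(is_aperiodic(transition_monoid(2, [t_a, t_b])))   # False
```

Aperiodicity of any recognizing monoid is sufficient for star-freeness; for the exact characterization one tests the minimal DFA, whose transition monoid is the syntactic monoid.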
Acknowledgements {#acknowlegdements .unnumbered}
----------------
The author would like to thank Volker Diekert and Benjamin Steinberg for many interesting discussions on the proof method used in [Proposition \[prp:ap:sf\]]{}.
[10]{} V. Diekert and P. Gastin. Pure future local temporal logics are expressively complete for [M]{}azurkiewicz traces. , 204:1597–1619, 2006.
A. [Fern[á]{}ndez L[ó]{}pez]{} and M. [Toc[ó]{}n Barroso]{}. The local algebras of an associative algebra and their applications. In J. Misra, editor, [*Applicable Mathematics in the Golden Age*]{}, pages 254–275. Narosa, 2002.
S. C. Kleene. Representation of events in nerve nets and finite automata. In C. E. Shannon and J. McCarthy, editors, [*Automata Studies*]{}, number 34 in Annals of Mathematics Studies, pages 3–40. Princeton University Press, 1956.
M. Kufleitner. Star-free languages and local divisors. In H. Jürgensen, J. Karhumäki, and A. Okhotin, editors, Proceedings of DCFS 2014. LNCS vol. 8614, pp. 23–28, 2014.
K. Meyberg. Lectures on algebras and triple systems. Technical report, University of Virginia, Charlottesville, 1972.
J.-[É]{}. Pin. . North Oxford Academic, London, 1986.
M. O. Rabin and D. Scott. Finite automata and their decision problems. , 3:114–125, 1959. Reprinted in E. F. Moore, editor, [*Sequential Machines: Selected Papers*]{}, Addison-Wesley, 1964.
M. P. Sch[ü]{}tzenberger. On finite monoids having only trivial subgroups. , 8:190–194, 1965.
[^1]: The author gratefully acknowledges the support by the German Research Foundation (DFG) under grant and the support by .
[^2]: The final publication is available at Springer via `http://dx.doi.org/10.1007/``978-``3-``319-``09704-``6_3`.
---
abstract: 'A covariant spectator quark model is applied to estimate the valence quark contributions to the $F_1^\ast(Q^2)$ and $F_2^\ast(Q^2)$ transition form factors for the $\gamma N \to P_{11}(1440)$ reaction. The Roper resonance, $P_{11}(1440)$, is assumed to be the first radial excitation of the nucleon. The model requires no extra parameters except for those already fixed by the previous studies for the nucleon. The results are consistent with the experimental data in the high $Q^2$ region, as well as with those from lattice QCD. Finally, the model is also applied to estimate the meson cloud contributions based on the CLAS and MAID analyses.'
author:
- 'G. Ramalho'
- 'K. Tsushima'
title: |
Valence quark contributions for\
the $\gamma N \to P_{11}(1440)$ transition
---
[ address=[Centro de F[í]{}sica Teórica de Part[í]{}culas, Av. Rovisco Pais, 1049-001 Lisboa, Portugal]{} ]{}
[ address=[ Excited Baryon Analysis Center (EBAC) in Theory Center, Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606, USA\
and\
CSSM, School of Chemistry and Physics, University of Adelaide, Adelaide SA 5005, Australia]{} ]{}
Introduction
============
The study of meson-nucleon reactions is one of the most important research topics associated with modern accelerators such as CEBAF at Jefferson Lab, and it poses new challenges for theoretical models. Although the electroproduction of nucleon resonances ($\gamma N \to N^\ast$) is expected to be governed at high four-momentum transfer squared $Q^2$ by the short-range interaction of quarks and gluons, i.e., perturbative QCD, at low and intermediate $Q^2$ one has to rely on effective and phenomenological approaches such as constituent quark models.
The Roper \[$P_{11}(1440)$\] resonance is particularly interesting among the many experimentally observed nucleon resonances. Although usual quark models predict the Roper state as the first radial excitation of the nucleon, they typically yield a mass much larger than the experimentally observed one [@Aznauryan07]. Experimentally the Roper has a large width, which suggests that it may be a $\pi N$ or $\pi \pi N$ molecular-like system or, alternatively, a confined three-quark core surrounded by a large meson cloud. The system can be studied using dynamical coupled-channel models for meson-nucleon reactions [@Burkert04].
To describe the $\gamma N \to P_{11}(1440)$ transition, we use a covariant spectator model [@Gross69; @Nucleon; @Omega; @Octet; @NDelta]. The model has been successfully applied to the nucleon [@Nucleon; @Octet; @FixedAxis; @ExclusiveR] and $\Delta$ systems [@NDelta; @NDeltaD; @LatticeD; @DeltaFF]. Of particular interest for this work is the model for the nucleon [@Nucleon]. In the covariant spectator quark model a baryon is described as a three-valence-quark system with an on-shell quark pair, or diquark, of mass $m_D$, while the remaining quark is off-shell and free to interact with the electromagnetic fields. The quark-diquark vertex is represented by a baryon $B$ wave function $\Psi_B$ that effectively describes quark confinement [@Nucleon]. To represent the nucleon system, we adopt the simplest structure, given by the symmetric and antisymmetric combinations of the diquark states coupled to the remaining quark in a relative S-state [@Nucleon]: $$\Psi_N(P,k)= \frac{1}{\sqrt{2}}\left[\Phi_S^0\Phi_I^0+\Phi_S^1\Phi_I^1\right]\psi_N(P,k)$$ \[eqPsiN\] where $\Phi_{S}^{0,1}$ \[$\Phi_{I}^{0,1}$\] is the spin \[isospin\] state corresponding to the diquark with quantum number 0 or 1. The function $\psi_N$ is a scalar wave function which depends exclusively on $(P-k)^2$, where $P$ ($k$) is the baryon (diquark) momentum. As the Roper shares the spin and isospin quantum numbers of the nucleon, its wave function $\Psi_R$ can also be represented by Eq. (\[eqPsiN\]), with the scalar wave function $\psi_N$ replaced by $\psi_R$.
The constituent quark electromagnetic current in the model is written as $$j_I^\mu= \left(\tfrac{1}{6}f_{1+} + \tfrac{1}{2}f_{1-}\tau_3\right)\left(\gamma^\mu- \frac{\slashed{q}\,q^\mu}{q^2}\right)+ \left(\tfrac{1}{6}f_{2+} + \tfrac{1}{2}f_{2-}\tau_3\right)\frac{i\sigma^{\mu\nu}q_\nu}{2M},$$ where $M$ is the nucleon mass and $\tau_3$ is the isospin operator. To parameterize the electromagnetic structure of the constituent quark in terms of the quark form factors $f_{1\pm}$ and $f_{2\pm}$, we adopt a parametrization based on vector meson dominance, with two vector meson poles: a light one corresponding to the $\rho$ or $\omega$ vector meson, and a heavier one corresponding to an effective heavy meson with mass $M_h= 2M$, which accounts for the short-range phenomenology. This parametrization allows us to extend the model to other regimes, such as lattice QCD [@Omega; @LatticeD; @Lattice].
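As a rough numerical illustration of such a two-pole structure, the sketch below evaluates a generic form factor with a light ($\rho/\omega$-like) pole and an effective heavy pole at $M_h=2M$; the weights are invented placeholders, not the fitted quark form factors of the model.

```python
def vmd_form_factor(Q2, m_v=0.775, M_h=2 * 0.939, c_v=1.0, c_h=0.5):
    """Generic two-pole vector-meson-dominance shape: a light pole at
    the rho/omega mass m_v plus an effective heavy pole at M_h = 2M
    (in GeV).  The weights c_v, c_h are placeholders, not fitted
    values of the actual model."""
    light = c_v * m_v**2 / (m_v**2 + Q2)   # dominates at low Q^2
    heavy = c_h * M_h**2 / (M_h**2 + Q2)   # short-range phenomenology
    return light + heavy

# the heavy pole falls off much more slowly with Q^2 than the light one
for Q2 in (0.0, 1.0, 5.0):
    print(Q2, vmd_form_factor(Q2))
```

At the photon point the function reduces to $c_v+c_h$; at large $Q^2$ the heavy pole dominates, mimicking the short-range physics the effective heavy meson is meant to encode.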
The $\gamma N \to P_{11}$ transition in the model is described in a relativistic impulse approximation in terms of the initial ($P_-$) and final ($P_+$) baryon momenta, with the diquark (spectator) on mass shell [@Roper]: $$\begin{aligned}
J^\mu &= 3\sum_{\Lambda} \int_k \overline\Psi_R(P_+,k)\, j_I^\mu\, \Psi_N(P_-,k),\\
&= \bar u_R(P_+)\left[F_1^\ast(Q^2)\left(\gamma^\mu- \frac{\slashed{q}\,q^\mu}{q^2}\right)+ F_2^\ast(Q^2)\,\frac{i\sigma^{\mu\nu}q_\nu}{M_R+M}\right]u(P_-).\end{aligned}$$ In the first line the sum is over the diquark states $\Lambda=\{s,\lambda\}$, where $s$ and $\lambda=0,\pm1$ stand for the scalar diquark and the vector diquark polarizations, respectively, and the covariant integral is $\int_k \equiv \int d^3k/[2E_D(2\pi)^3]$, where $E_D$ is the diquark energy. The factor 3 is due to the flavor symmetry. In the second line the transition form factors $F_1^\ast$ and $F_2^\ast$ are defined independently of the frame, using the Dirac spinors of the Roper ($u_R$) and the nucleon ($u$) with the respective masses $M_R$ and $M$. For simplicity the spin projection indices are suppressed.
![$\gamma N \to P_{11}(1440)$ transition form factors [@Roper]. The solid \[dotted\] line represents the result for $M_R=1.440$ GeV \[1.750 GeV\]. Data are taken from Ref. [@CLAS].[]{data-label="Roper"}](F1cT2 "fig:"){width="3.0in"} ![$\gamma N \to P_{11}(1440)$ transition form factors [@Roper]. The solid \[dotted\] line represents the result for $M_R=1.440$ GeV \[1.750 GeV\]. Data are taken from Ref. [@CLAS].[]{data-label="Roper"}](F2dT2 "fig:"){width="3.0in"}
To represent the scalar wave functions as functions of $(P-k)^2$, it is convenient to use the dimensionless variable $$\chi_{_B}= \frac{(M_B-m_D)^2-(P-k)^2}{M_B\, m_D},$$ where $M_B$ is the baryon mass. In terms of this variable, the nucleon and Roper scalar wave functions are given by [@Roper]: $$\begin{aligned}
\psi_N(P,k)&= \frac{N_0}{m_D(\beta_1+\chi_{_N})(\beta_2+\chi_{_N})},\\
\psi_R(P,k)&= N_1\,\frac{\beta_3-\chi_{_R}}{m_D(\beta_1+\chi_{_R})(\beta_2+\chi_{_R})}.\end{aligned}$$ The nucleon scalar wave function is chosen to reproduce the asymptotic behavior predicted by pQCD for the nucleon form factors ($G_E, G_M \sim 1/Q^4$) and to describe the elastic nucleon form factor data [@Nucleon]. The parameters $\beta_1$ and $\beta_2$ set the momentum range: with $\beta_2 > \beta_1$, $\beta_1$ and $\beta_2$ act as the long-range and short-range regulators, respectively. The expression for the Roper scalar wave function is inspired by the nonrelativistic wave functions of the spherical harmonic-oscillator potential [@Aznauryan07; @Capstick95; @Diaz04], where the factor $\beta_3- \chi_{_R}$ (linear in the momentum variable $\chi_{_R}$) characterizes the radial excitation. The factors $N_0$ and $N_1$ are normalization constants fixed by the condition $\int_k |\psi_B|^2=1$ at $Q^2=0$ for $B=N,R$. The new parameter $\beta_3$, associated with the Roper, is fixed by the orthogonality with the nucleon wave function.
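Because $\psi_R$ differs from $\psi_N$ by a factor linear in the momentum variable, the orthogonality condition fixing $\beta_3$ is linear and can be solved in closed form. The sketch below illustrates this in a one-dimensional caricature: $\chi$ is treated as the integration variable with an invented measure, so this is not the covariant integral $\int_k$ of the actual model.

```python
import numpy as np

# Toy 1D caricature (NOT the covariant integral of the model): take
#   psi_N ~ 1/((b1 + chi)(b2 + chi)),   psi_R ~ (b3 - chi) * psi_N,
# with a placeholder measure w(chi), and fix b3 by orthogonality.
b1, b2 = 0.05, 0.7                        # placeholder range parameters
chi, dchi = np.linspace(0.0, 200.0, 400001, retstep=True)
w = np.sqrt(chi)                          # placeholder measure

def integ(f):
    """Plain Riemann sum over the chi grid."""
    return float(np.sum(f) * dchi)

psiN = 1.0 / ((b1 + chi) * (b2 + chi))
# <psi_N|psi_R> = 0 is linear in b3, so
# b3 = <psi_N|chi|psi_N> / <psi_N|psi_N>:
b3 = integ(w * chi * psiN**2) / integ(w * psiN**2)
psiR = (b3 - chi) * psiN                  # one node, like a radial excitation

print(b3)
print(integ(w * psiN * psiR))             # ~0 by construction
```

The same linearity argument is what makes $\beta_3$ a derived quantity rather than a free parameter in the full model.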
Results
=======
The results for the $\gamma N \to P_{11}(1440)$ transition form factors are presented in Fig. \[Roper\]. Recall that all parameters associated with the Roper have been determined by the relation with the nucleon; the results are therefore genuine predictions. The agreement with the data is excellent for $Q^2>2$ GeV$^2$, in particular for $F_1^\ast$. Our results are also consistent with other work in the intermediate and high $Q^2$ region; see Ref. [@Roper] for more details. In dynamical coupled-channel models [@Suzuki10] the Roper appears as a heavy bare system ($\approx 1.750$ GeV) dressed by meson clouds that reduce the mass of the system to near the observed value ($\approx 1.440$ GeV). To explore this mass reduction due to the meson cloud, we replace the physical mass by the 'bare' mass; the result is indicated by the dotted line. As one can see from the figure, the effect of using the Roper bare mass is small, although it moves the curves slightly closer to the data.
The disagreement in the low $Q^2$ region in Fig. \[Roper\] can be interpreted as a limitation of the approach, since only valence quark effects have been included, and not the quark-antiquark, or meson cloud, effects. The meson cloud effects are expected to be important in the low $Q^2$ region [@Aznauryan07; @Burkert04; @CLAS]. This interpretation is supported by the fact that, when the model is extended to the heavy-pion-mass lattice QCD regime, where the meson cloud effect is suppressed, the results agree well with the heavy-pion-mass lattice data [@Roper; @Lin09]. The importance of the meson cloud contributions in inelastic reactions was also observed in the $\gamma N \to \Delta$ reaction [@NDelta; @NDeltaD; @LatticeD]. Furthermore, it was shown that the covariant spectator quark model can describe both the lattice and the physical data [@LatticeD]. Based on these successes in describing the heavy-pion-mass lattice QCD data and the high $Q^2$ data, we have some confidence that the model describes the valence quark contributions well. Thus, we can use the spectator quark model to estimate the meson cloud contributions.
To estimate the meson cloud contributions, we decompose the form factors as $$F_i^\ast(Q^2)= F_i^b(Q^2) + F_i^{mc}(Q^2) \qquad (i=1,2),$$ where $F_i^b$ and $F_i^{mc}$ represent the valence quark (bare) and meson cloud contributions, respectively. This decomposition is justified if the meson is created by the baryon as a whole and not by a single quark in the baryon core. Replacing $F_i^b$ by the result of the spectator quark model, one can estimate the meson cloud contributions to the $\gamma N \to P_{11}(1440)$ transition. We estimate them in two different ways, as presented in Fig. \[Amp\] (left panel). One is based on the MAID fit over the whole $Q^2$ region, with the bands associated with the meson cloud estimated by assuming the uncertainty of the CLAS data points (red region). The other estimate subtracts the valence quark contribution from each CLAS data point; the results are represented by circles.
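The subtraction behind the second estimate is elementary and can be sketched as follows; the "data" points and the bare form factor shape below are invented placeholders, not the CLAS values or the actual spectator-model output.

```python
def bare_F1(Q2):
    """Stand-in for the valence-quark (bare) contribution F_1^b;
    an invented shape, not the spectator-model result."""
    return 0.05 * Q2 / (1.0 + Q2) ** 2

# invented stand-ins for CLAS-like points: (Q2 [GeV^2], F1*, error)
data = [(0.4, 0.020, 0.004), (0.9, 0.038, 0.005), (1.7, 0.040, 0.005)]

meson_cloud = []
for Q2, F1_star, err in data:
    F1_mc = F1_star - bare_F1(Q2)         # F_1^mc = F_1* - F_1^b
    meson_cloud.append((Q2, F1_mc, err))  # data uncertainty carried over
    print(f"Q2={Q2:4.1f}:  F1_mc = {F1_mc:+.4f} +/- {err:.3f}")
```

Since the model prediction carries no quoted uncertainty, the error bar of each subtracted point is simply that of the datum itself.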
Similarly, the meson cloud contributions can be determined for the helicity amplitudes $A_{1/2}$ and $S_{1/2}$, associated with photon polarizations $+1$ and $0$, respectively. Denoting either amplitude generically by $R$ (with $R=A_{1/2}$ or $S_{1/2}$), the meson cloud contribution is $$R^{mc}(Q^2)= R(Q^2) - R^{b}(Q^2).$$ The results are shown in Fig. \[Amp\] (right panel). Both methods suggest significant meson cloud effects in the low $Q^2$ region ($Q^2 < 1$ GeV$^2$), with a fast falloff as $Q^2$ increases.
In conclusion, the spectator quark model can be effectively applied to study the nucleon resonances, particularly in the intermediate and high $Q^2$ region, where the valence quark effects are dominant and relativity and covariance are essential. Another example of a successful application of the present approach can be found in Ref. [@Delta1600].
The authors would like to thank V. D. Burkert for the invitation. G. R. thanks the organizers for the financial support. G. R. was supported by the Portuguese Fundação para a Ciência e Tecnologia (FCT) under the grant SFRH/BPD/26886/2006. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
$\begin{array}{cc}
\includegraphics[width=3.0in]{F1subMx} \vspace{.05cm}
& \hspace{.1cm}
\includegraphics[width=3.0in]{A12e}\vspace{.06cm} \\
& \\
\includegraphics[width=3.0in]{F2subMx} & \hspace{.1cm}
\includegraphics[width=3.0in]{S12d}
\end{array}$
[99]{}
I. G. Aznauryan, Phys. Rev. C [**76**]{}, 025212 (2007).
V. D. Burkert and T. S. H. Lee, Int. J. Mod. Phys. E [**13**]{}, 1035 (2004).
F. Gross, Phys. Rev. [**186**]{}, 1448 (1969).
F. Gross, G. Ramalho and M. T. Peña, Phys. Rev. C [**77**]{}, 015202 (2008).
G. Ramalho, K. Tsushima and F. Gross, Phys. Rev. D [**80**]{}, 033004 (2009). F. Gross, G. Ramalho and K. Tsushima, Phys. Lett. B [**690**]{}, 183 (2010).
G. Ramalho, M. T. Peña and F. Gross, Eur. Phys. J. A [**36**]{}, 329 (2008). F. Gross, G. Ramalho and M. T. Peña, Phys. Rev. C [**77**]{}, 035203 (2008). G. Ramalho, F. Gross, M. T. Peña and K. Tsushima, arXiv:1008.0371 \[hep-ph\].
G. Ramalho, M. T. Peña and F. Gross, Phys. Rev. D [**78**]{}, 114017 (2008).
G. Ramalho and M. T. Peña, Phys. Rev. D [**80**]{}, 013008 (2009).
G. Ramalho, M. T. Peña and F. Gross, Phys. Rev. D [**81**]{}, 113011 (2010).
G. Ramalho and M. T. Peña, J. Phys. G [**36**]{}, 115011 (2009).
G. Ramalho and K. Tsushima, Phys. Rev. D [**81**]{}, 074020 (2010).
I. G. Aznauryan [*et al.*]{} \[CLAS Collaboration\], Phys. Rev. C [**80**]{}, 055203 (2009).
L. Tiator and M. Vanderhaeghen, Phys. Lett. B [**672**]{}, 344 (2009).
H. W. Lin, S. D. Cohen, R. G. Edwards and D. G. Richards, Phys. Rev. D [**78**]{}, 114508 (2008). S. Capstick and B. D. Keister, Phys. Rev. D [**51**]{}, 3598 (1995). B. Julia-Diaz, D. O. Riska and F. Coester, Phys. Rev. C [**69**]{}, 035212 (2004) \[Erratum-ibid. C [**75**]{}, 069902 (2007)\].
N. Suzuki, B. Julia-Diaz, H. Kamano, T. S. Lee, A. Matsuyama and T. Sato, Phys. Rev. Lett. [**104**]{}, 042302 (2010).
G. Ramalho and K. Tsushima, Phys. Rev. D [**82**]{}, 073007 (2010).
|
---
abstract: 'We consider the vectorial Zakharov system describing Langmuir waves in a weakly magnetized plasma. In its original derivation [@Z] the evolution for the electric field envelope is governed by a Schrödinger type equation with a singular parameter which is usually large in physical applications. Motivated by this, we study the rigorous limit as this parameter goes to infinity. By using some Strichartz type estimates to control separately the fast and slow dynamics in the problem, we show that the evolution of the electric field envelope is asymptotically constrained onto the space of irrotational vector fields.'
address:
- 'GSSI, Gran Sasso Science Institute, Viale F. Crispi 7, 67100 L’Aquila, Italy'
- 'Scuola Normale Superiore, Piazza dei Cavalieri, 7, 56126 Pisa, Italy'
author:
- Paolo Antonelli
- Luigi Forcella
title: THE ELECTROSTATIC LIMIT FOR THE 3D ZAKHAROV SYSTEM
---
Introduction. {#introduction}
=============
In this paper we consider the vectorial Zakharov system [@Z] describing Langmuir waves in a weakly magnetized plasma. After a suitable rescaling of the variables it reads [@SS] $$\label{ZG}
\left\{ \begin{array}{ll}
i\partial_tu-\alpha\nabla\times\nabla\times u+\nabla(\operatorname{div}u)=nu\\
\frac{1}{c_s^2} \partial_{tt}n-\Delta n=\Delta|u|^2
\end{array} \right.,$$
subject to initial conditions $$u(0)=u_0, \quad n(0)=n_0, \quad\d_tn(0)=n_1.$$ Here $u:\R\times\R^3\to\C^3$ describes the slowly varying envelope of the highly oscillating electric field, whereas $n:\R\times\R^3\to\R$ is the ion density fluctuation. The rescaled constants in are $\alpha=\frac{c^2}{3v_e^2}$, $c$ being the speed of light and $v_e=\sqrt{\frac{T_e}{m_e}}$ the electron thermal velocity, while $c_s$ is proportional to the ion acoustic speed. In many physical situations the parameter $\alpha$ is relatively large, see for example table 1, p. 47 in [@TtH], hence hereafter we will only consider $\alpha\geq1$. In the large $\alpha$ regime, the electric field is almost irrotational and in the electrostatic limit $\alpha\to\infty$ the dynamics is asymptotically described by $$\label{ZL}
\left\{ \begin{array}{ll}
i\partial_tu+\Delta u=\Q(nu) \\
\frac{1}{c_s^2} \partial_{tt}n-\Delta n=\Delta|u|^2
\end{array} \right.,$$ where $\Q=-(-\Delta)^{-1}\nabla\operatorname{div}$ is the Helmholtz projection operator onto irrotational vector fields. By further simplifying it is possible to consider the so called scalar Zakharov system $$\label{Z}
\left\{ \begin{array}{ll}
i\partial_tu+\Delta u=nu \\
\frac{1}{c_s^2} \partial_{tt}n-\Delta n=\Delta|u|^2
\end{array} \right.,$$ which retains the main features of . In the subsonic limit $c_s\to\infty$ we find the cubic focusing nonlinear Schrödinger equation $$i\d_tu+\Delta u+|u|^2u=0.$$
The Cauchy problem for the Zakharov system has been extensively studied in the mathematical literature. For the local and global well-posedness, see [@SS2; @OT1; @OT2; @KPV; @BC] and the recent results concerning low regularity solutions [@GTV; @BH]. In [@M] the formation of blow-up solutions is studied by means of virial identities; see also [@GM], where self-similar solutions are constructed in two space dimensions. The subsonic limit $c_s\to\infty$ for is investigated in [@SW]. Furthermore, some related singular limits are also studied in [@MN], considering the Klein-Gordon-Zakharov system. In this paper we do not consider such limits, hence without loss of generality we can set $c_s=1$.
The aim of our research is to rigorously study the electrostatic limit for the vectorial Zakharov equation, namely we show that mild solutions to converge towards solutions to as $\alpha\to\infty$.
As we will see below, we will investigate this limit by exploiting two auxiliary systems associated to , , namely systems and below. These are obtained by considering $v=\d_tu$ as a new variable and by studying the Cauchy problem for the auxiliary system describing the dynamics of $(v, n)$, together with a state equation for $u$ (see for more details). This approach, already introduced in [@OT1; @OT2] to study local and global well-posedness for the Zakharov system , overcomes the loss of derivatives on the term $|u|^2$ in the wave equation, but in our context it introduces a new difficulty. Indeed the initial datum $v(0)$ is not uniformly bounded for $\alpha\geq1$; see also the beginning of below for a more detailed mathematical discussion.
For this reason we will need to consider a family of well-prepared initial data; more precisely, we will take a family $u_0^\alpha$ of initial states for the Schrödinger part in , which converges to an irrotational initial datum for .
We consider initial data $(u_0^\alpha, n_0^\alpha, n_1^\alpha)\in H^2(\R^3)\times H^1(\R^3)\times L^2(\R^3)=:\mathcal H_2$ for , converging in the same space to a set of initial data $(u_0^\infty, n_0^\infty, n_1^\infty)\in \mathcal{H}_2$, with $u_0^\infty$ an irrotational vector field, and we show the convergence in the space $$\begin{aligned}
\mathcal X_T:=\big\{(u, n)\;:\;&u\in L^q(0, T;W^{2, r}(\R^3)),\;\forall\;(q, r)\;\textrm{admissible pair},\\
&n\in L^\infty(0, T;H^1(\R^3))\cap W^{1, \infty}(0, T;L^2(\R^3))\big\}.
\end{aligned}$$
For a more detailed discussion about notations and the spaces considered in this paper we refer the reader to .
Before stating our main result we first recall the local well-posedness result in $\mathcal H_2$ for system .
\[lwpOT1\] Let $(u_0, n_0, n_1)\in\mathcal H_2$, then there exist a maximal time $0<T_{max}\leq\infty$ and a unique solution $(u, n)$ to such that $u\in\mathcal C([0, T_{max}); H^2)\cap\mathcal C^1([0, T_{max});L^2)$, $n\in\mathcal C([0, T_{max});H^1)\cap\mathcal C^1([0, T_{max});L^2)$. Furthermore the solution depends continuously on the initial data and the standard blow-up alternative holds true: either $T_{max}=\infty$ and the solution is global or $T_{max}<\infty$ and we have $$\lim_{t\rightarrow T_{max}}\|(u, n, \d_tn)(t)\|_{\mathcal H_2}=\infty.$$
Analogously we are going to prove the same local well-posedness result for system . Moreover, despite the fact that the initial datum for is not uniformly bounded for $\alpha\geq1$ (see the discussion at the beginning of ), we can anyway infer some a priori bounds in $\alpha$ for the solution $(u^\alpha, n^\alpha)$ to .
\[thm:lwp\] Let $(u^\alpha_0, n^\alpha_0, n^\alpha_1)\in\mathcal H_2$, then there exist a maximal time $T^\alpha_{max}>0$ and a unique solution $(u^\alpha, n^\alpha)$ to such that
- $u^\alpha\in\mathcal C([0, T_{max}^\alpha); H^2)\cap\mathcal C^1([0, T_{max}^\alpha);L^2)$,
- $n^\alpha\in\mathcal C([0, T_{max}^\alpha);H^1)\cap\mathcal C^1([0, T_{max}^\alpha);L^2)$.
Furthermore the existence times $T^\alpha_{max}$ are uniformly bounded from below, $0<T^\ast\leq T^\alpha_{max}$ for any $\alpha\geq1,$ and we have $$\|(u^\alpha, n^\alpha, \d_tn^\alpha)\|_{L^\infty(0, T;\mathcal H_2)}+\|\d_tu^\alpha\|_{L^2(0, T;L^6)}\leq C(T, \|(u_0^\alpha, n_0^\alpha, n_1^\alpha)\|_{\mathcal H_2}),$$ for any $0<T<T^\alpha_{max}$, where the constant above does not depend on $\alpha\geq1$.
Our main result in this paper is the following one.
\[thm:main\] Let $(u_0^\alpha, n_0^\alpha, n_1^\alpha)\in \mathcal{H}_2$ and let $(u^\alpha, n^\alpha)$ be the maximal solution to defined on the time interval $[0, T_{max}^\alpha)$. Let us assume that $$\lim_{\alpha\to\infty}\|(u_0^\alpha, n_0^\alpha, n_1^\alpha)-(u_0^\infty, n_0^\infty, n_1^\infty)\|_{\mathcal{H}_2}=0,$$ for some $(u_0^\infty, n_0^\infty, n_1^\infty)\in \mathcal{H}_2$ such that $u_0^\infty=\Q u_0^\infty$, and let $(u^\infty, n^\infty)$ be the maximal solutions to in the interval $[0, T_{max}^\infty)$ with such initial data. Then $$\liminf_{\alpha\to\infty}T_{max}^\alpha\geq T^\infty_{max}$$ and we have the following convergence $$\lim_{\alpha\to\infty}\|(u^\alpha, n^\alpha)-(u^\infty, n^\infty)\|_{\mathcal X_T}=0,$$ for any $0<T<T_{max}^\infty$.
The paper is structured as follows. In we fix some notations and give some preliminary results which will be used in the analysis of the problem below. In we show the local well-posedness of system in the space $\mathcal{H}_2$. Finally in we investigate the electrostatic limit and prove the main theorem.
Acknowledgements {#acknowledgements .unnumbered}
----------------
This paper and its project originated after many useful discussions with Prof. Pierangelo Marcati, during second author’s M. Sc. thesis work. We would like to thank P. Marcati for valuable suggestions.
Preliminary results and tools. {#sect:prel}
==============================
In this section we introduce notations and some preliminary results which will be useful in the analysis below. The Fourier transform of a function $f$ is defined by $$\mathcal F(f)(\xi)=\hat f(\xi)=\int_{\R^3}e^{-2\pi ix\cdot\xi}f(x)\,dx,$$ with its inverse $$f(x)=\int_{\R^3}e^{2\pi ix\cdot\xi}\hat f(\xi)\,d\xi.$$ Given an interval $I\subset\mathbb{R},$ we denote by $L^q(I;L^r)$ the Bochner space equipped with the norm defined by $$\|f\|_{L^q(I;L^r)}=\bigg(\int_{I}\|f(s)\|_{L^r(\mathbb{R}^3)}^q\,ds\bigg)^{1/q},$$ where $f=f(s,x).$ When no confusion is possible, we write $L^q_tL^r_x=L^q(I;L^r(\R^3))$. Given two Banach spaces $X, Y$, we denote $\|f\|_{X\cap Y}:=\max\{\|f\|_X, \|f\|_Y\}$ for $f\in X\cap Y$. With $W^{k,p}$ we denote the standard Sobolev spaces, and for $p=2$ we write $H^k=W^{k, 2}$. $A\lesssim B$ means that there exists a universal constant $C$ such that $A\leq CB$; in a chain of inequalities the constant may change from one line to the next. As already said in the Introduction, given a vector field $F$, we denote by $\Q F=-(-\Delta)^{-1}\nabla\operatorname{div}F$ its projection onto irrotational fields, and by $\P=1-\Q$ the complementary orthogonal projection onto solenoidal fields. Let us also recall that $\nabla\times F$ is the standard curl operator on $\R^3$.
The space of initial data is denoted by $\mathcal{H}_2:=H^2(\R^3)\times H^1(\R^3)\times L^2(\R^3)$. A pair of Lebesgue exponents is called *Schrödinger admissible* (or simply admissible) if $2\leq q\leq\infty$, $2\leq r\leq 6$ and they are related through $$\frac1q=\frac32\left(\frac12-\frac1r\right).$$ Given a time interval $I\subset\R$ we denote the Strichartz space $S^0$(I) to be the closure of the Schwartz space with the norm $$\|u\|_{S^0(I)}:=\sup_{(q, r)}\|u\|_{L^q(I;L^r(\R^3))},$$ where the $\sup$ is taken over all admissible pairs; furthermore we write $$S^2(I)=\{u\in S^0(I)\;:\;\nabla^2u\in S^0(I)\}.$$
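For concreteness, the admissibility relation in three space dimensions can be encoded in a small helper; the endpoint pairs are $(q,r)=(\infty,2)$ and $(2,6)$. This is only a numerical aid for checking exponents, with a tolerance added to absorb floating-point error.

```python
def is_admissible(q, r, tol=1e-12):
    """Schrodinger-admissible pair in 3D: 2 <= q <= inf, 2 <= r <= 6,
    and the scaling relation 1/q = (3/2)(1/2 - 1/r)."""
    if q < 2 or not (2 <= r <= 6):
        return False
    inv_q = 0.0 if q == float('inf') else 1.0 / q
    return abs(inv_q - 1.5 * (0.5 - 1.0 / r)) < tol

print(is_admissible(float('inf'), 2))   # endpoint (inf, 2): True
print(is_admissible(2, 6))              # Keel-Tao endpoint: True
print(is_admissible(8 / 3, 4))          # True
print(is_admissible(4, 4))              # scaling relation fails: False
```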
We define moreover the space $$\mathcal W^1(I)=\{n\,:\,n\in L^\infty(I;H^1)\cap W^{1, \infty}(I;L^2)\}$$ endowed with the norm $$\|n\|_{\mathcal W^1(I)}=\|n\|_{L^{\i}(I;H^1)}+\|\d_tn(t)\|_{L^{\i}(I;L^2)}.$$ The space of solutions we consider in this paper is given by $$\mathcal X_T=\{(u, n)\;:\;u\in S^2([0, T]),\;n\in \mathcal W^1([0,T])\}.$$ We will also use the following notation: $$\begin{aligned}
\mathcal C([0, T); \mathcal{H}_2)=\big\{(u, n)\;:\;&u\in\mathcal C([0, T);H^2)\cap\mathcal C^1([0, T);L^2),\\
&n\in\mathcal C([0, T);H^1)\cap\mathcal C^1([0, T);L^2)\big\}.
\end{aligned}$$ Here in this paper we only consider positive times, however the same results are valid also for negative times.
We now introduce some basic preliminary results which will be useful later in the analysis.
First of all we consider the linear propagator related to , namely $$\label{f1.3}
i\d_tu=\alpha\nabla\times\nabla\times u-\nabla\operatorname{div}u.$$
Let $u$ solve with initial datum $u(0)=u_0$, then $$\label{eq:prop}
u(t)=U_Z(t)u_0=\left[U(\alpha t)\P+U(t)\Q\right]u_0,$$ where $U(t)=e^{it\Delta}$ is the Schrödinger evolution operator.
By taking the Fourier transform we have $$\begin{aligned}
i\d_t\hat u&=-\alpha\xi\times\xi\times\hat u+\xi(\xi\cdot\hat u)\\
&=|\xi|^2\left(\alpha\hat\P(\xi)+\hat\Q(\xi)\right)\hat u(\xi),
\end{aligned}$$ where $\hat\P(\xi), \hat\Q(\xi)$ are the $(3\times3)$ matrices defined by $\hat\Q(\xi)=\frac{\xi\otimes\xi}{|\xi|^2}$ and $\hat\P(\xi)=\bold1-\hat\Q(\xi)$, with $\bold1$ the identity matrix. Hence we may write $$\hat u(t)=e^{-i\alpha t|\xi|^2\hat\P(\xi)-it|\xi|^2\hat\Q(\xi)}\hat u_0(\xi).$$ It is straightforward to see that $\hat\Q(\xi)$ is a projection matrix, $0\leq\hat\Q(\xi)\leq1$, $\hat\Q(\xi)=\hat\Q^2(\xi)$, and that $\hat\P(\xi)$ is the complementary orthogonal projection. Consequently we have $$\begin{aligned}
\hat u(t)&=e^{-i\alpha t|\xi|^2\hat\P(\xi)}e^{-it|\xi|^2\hat\Q(\xi)}\hat u_0(\xi)\\
&=\left(e^{-i\alpha t|\xi|^2}\hat\P(\xi)+\hat\Q(\xi)\right)\left(e^{-it|\xi|^2}\hat\Q(\xi)+\hat\P(\xi)\right)\hat u_0(\xi)\\
&=\left(e^{-i\alpha t|\xi|^2}\hat\P(\xi)+e^{-it|\xi|^2}\hat\Q(\xi)\right)\hat u_0(\xi).
\end{aligned}$$ By taking the inverse Fourier transform we find .
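The algebraic step used above, namely that the exponential of $-it|\xi|^2(\alpha\hat\P+\hat\Q)$ splits into the two scalar phases times the complementary projections, can be checked numerically for any fixed frequency $\xi$. The following sketch is an independent verification, not part of the proof; it uses SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

xi = np.array([1.0, -2.0, 0.5])          # a sample frequency
alpha, t = 7.0, 0.3
k2 = float(xi @ xi)                      # |xi|^2

Qh = np.outer(xi, xi) / k2               # projection onto xi (irrotational)
Ph = np.eye(3) - Qh                      # complementary (solenoidal) part

# direct exponential of the generator -i t |xi|^2 (alpha P + Q) ...
U = expm(-1j * t * k2 * (alpha * Ph + Qh))
# ... equals the closed form, since P^2 = P, Q^2 = Q and PQ = QP = 0
U_closed = np.exp(-1j * alpha * t * k2) * Ph + np.exp(-1j * t * k2) * Qh

print(np.allclose(U, U_closed))          # True
```

The solenoidal component thus oscillates with the fast phase $e^{-i\alpha t|\xi|^2}$, while the irrotational component carries the ordinary Schrödinger phase.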
By the dispersive estimates for the standard Schrödinger evolution operator (see for example [@C],[@GV], [@Y]), we have $$\label{eq:disp_pq}
\begin{aligned}
\|U(t)\Q f\|_{L^p}&\lesssim|t|^{-3\left(\frac12-\frac1p\right)}\|\Q f\|_{L^{p'}}\\
\|U(\alpha t)\P f\|_{L^p}&\lesssim|\alpha t|^{-3\left(\frac12-\frac1p\right)}\|\P f\|_{L^{p'}},
\end{aligned}$$ for any $2\leq p\leq\infty$, $t\neq0$. These two estimates together give $$\|U_Z(t)f\|_{L^p_x}\lesssim|t|^{-3\left(\frac12-\frac1p\right)}\|f\|_{L^{p'}_x},$$ for $2\leq p<\infty$. Let us notice that the dispersive estimate for $p=\infty$ does not hold for $U_Z(t)$ anymore because the projection operators $\Q, \P$ are not bounded from $L^1$ into itself. Nevertheless by using the dispersive estimates in and the result in [@KT] we infer the whole set of Strichartz estimates for the irrotational and solenoidal part, separately. By summing them up we thus find the Strichartz estimates for the propagator in .
\[lemma:strich\] Let $(q, r)$, $(\gamma, \rho)$ be two arbitrary admissible pairs and let $\alpha\geq1$, then we have $$\begin{aligned}
\|U(\alpha t)\P f\|_{L^q_t(I;L^r_x)}&\leq C \alpha^{-\frac2q}\|f\|_{L^2_x},\label{e5.20}\\
\bigg\|\int_{0}^{t}U(\alpha(t-s))\P F(s)\,ds\bigg\|_{L^q_t(I;L^r_x)}&\leq C\alpha^{-\left(\frac{1}{q}+\frac{1}{\gamma}\right)}\|F\|_{L^{\gamma^\prime}_t(I;L^{\rho^\prime}_x)}.\notag\end{aligned}$$ and $$\begin{aligned}
\|U(t)\Q f\|_{L^q_t(I;L^r_x)}&\leq C \|f\|_{L^2_x},\\
\bigg\|\int_{0}^{t}U(t-s)\Q F(s)\,ds\bigg\|_{L^q_t(I;L^r_x)}&\leq C\|F\|_{L^{\gamma^\prime}_t(I;L^{\rho^\prime}_x)},
\end{aligned}$$ Consequently we also have $$\begin{aligned}
\label{eq2.10}
\|U_Z(t)f\|_{L^q_t(I;L^r_x)}&\leq C\|f\|_{L^2_x},\\\label{eq2.11}
\bigg\|\int_{0}^{t}U_Z(t-s)F(s)\,ds\bigg\|_{L^q_t(I;L^r_x)}&\leq C\|F\|_{L^{\gamma^\prime}_t(I;L^{\rho^\prime}_x)}.\end{aligned}$$
The following remarks are in order.
- From the estimates in the Lemma above it is already clear that, at least in the linear evolution, we can separate the fast and slow dynamics, and that the fast one is asymptotically vanishing. This is somewhat similar to what happens with rapidly varying dispersion management; see for example [@ASS].
- Let us notice that the constants in and are uniformly bounded for $\alpha\geq1$. This is straightforward but it is a necessary remark to infer that the existence time in the local well-posedness section is uniformly bounded from below for any $\alpha\geq1$.
Local existence theory. {#sect:LWP}
=======================
In this Section we study the local well-posedness of in the space $\mathcal{H}_2$. We are going to perform a fixed point argument in order to find a unique local solution in the time interval $[0, T]$, for some $0<T<\infty$. By standard arguments it is then possible to extend the solution up to a maximal time $T_{max}$ for which the blow-up alternative holds. However, due to the loss of derivatives on the term $|u|^2$, we cannot proceed in a straightforward way, thus we follow the approach in [@OT1] where the authors use an auxiliary system to overcome this difficulty. More precisely, let us define $v:=\d_tu$, then by differentiating the Schrödinger equation in with respect to time, we write the following system $$\label{ZM2}
\left\{ \begin{array}{lll}
i\partial_tv-\alpha\nabla\times\nabla\times v+\nabla\operatorname{div}v=nv+\partial_tn u \\
\partial_{tt}n-\Delta n=\Delta|u|^2 \\
iv-\alpha\nabla\times\nabla\times u+\nabla\operatorname{div}u=nu
\end{array} \right..$$ Differently from [@OT1], here we encounter a further difficulty. Indeed we have that the initial datum for $v$ is given by $$\label{eq:v0id}
v(0)=-i\alpha\nabla\times\nabla\times u_0+i\nabla\operatorname{div}u_0-in_0u_0,$$ which in general is not uniformly bounded in $L^2$ for $\alpha\geq1$. Hence the standard fixed point argument applied to the integral formulation of would give a local solution on a time interval $[0, T^\alpha]$, where $T^\alpha$ goes to zero as $\alpha$ goes to infinity. For this reason we introduce the alternative variable $$\label{eq:v_tilde}
\tilde v(t):=v(t)-U(\alpha t)\P(i\alpha\Delta u_0),$$ for which we prove that the existence time $T^\alpha$ is uniformly bounded from below for $\alpha\geq1$. The main result of this Section concerns the local well-posedness for .
\[prop:small\_times\] Let $(u_0, n_0, n_1)\in \mathcal{H}_2$ be such that $$M:=\|(u_0, n_0, n_1)\|_{\mathcal{H}_2}.$$ Then, for any $\alpha\geq1$ there exists $\tau=\tau(M)$ and a unique local solution $(u, n)\in\mathcal C([0, \tau]; \mathcal{H}_2)$ to such that $$\sup_{[0, \tau]}\|(u, n, \d_tn)(t)\|_{\mathcal{H}_2}\leq2M$$ and $$\|v\|_{L^2_tL^6_x}\leq CM,$$ where $C$ does not depend on $\alpha\geq1$.
By standard arguments we then extend the local solution in to a maximal existence interval where the standard blow-up alternative holds true.
\[frthm1\] Let $(u_0, n_0, n_1)\in \mathcal{H}_2$, then for any $\alpha\geq1$ there exists a unique maximal solution $(u^\alpha, v^\alpha, n^\alpha)$ to with initial data $(u_0, v(0), n_0, n_1)$, $v(0)$ given by , on the maximal existence interval $I_\alpha:=[0, T_{max}^\alpha)$, for some $T_{max}^\alpha>0$. The solution satisfies the following regularity properties:
- $u^{\alpha}\in\,\mathcal{C}(I_\alpha;H^2), \;u^\alpha\in S^2([0, T]),\;\forall\;0<T<T^\alpha_{max}$,
- $v^{\alpha}\in\,\mathcal C(I_\alpha;L^2),\,\;v^\alpha\in S^0([0, T]),\;\forall\;0<T<T^\alpha_{max}$,
- $n^{\alpha}\in\,\mathcal{C}(I_\alpha;H^1)\cap\mathcal C^1(I_\alpha; L^2)$.
Moreover, the following blow-up alternative holds true: $T^\alpha_{max}<\infty$ if and only if $$\lim_{t\to T^\alpha}\|(u^\alpha, n^\alpha)(t)\|_{\mathcal{H}_2}=\infty.$$ Finally, the map $\mathcal H_2\to\mathcal C([0, T_{max});\mathcal H_2)$ associating any initial datum to its solution is a continuous operator.
The blow-up alternative above also implies that the family of maximal existence times $T^\alpha$ is uniformly bounded from below by a positive constant, i.e. there exists $T^\ast>0$ such that $T^\ast\leq T^\alpha$ for any $\a\geq1$.
follows in a straightforward way from above.
Let $(u^\alpha, v^\alpha, n^\alpha)$ be the solution to constructed in ; to prove the we only need to show that $\d_tu^\alpha=v^\alpha$ in the distribution sense. Let us differentiate with respect to $t$ the equation $$(1-\alpha\Delta\P-\Delta\Q)u=iv-(n-1)\bigg(u_0+\int_{0}^tv(s)\,ds\bigg)$$ obtaining $$\label{e1.32}
(1-\alpha\Delta\P-\Delta\Q)\partial_tu=i\partial_tv-(n-1)v-\partial_tn\bigg(u_0+\int_{0}^t v(s)\,ds\bigg),$$ this equation holding in $H^{-2},$ while the first equation of gives us $$(1-\alpha\Delta\P-\Delta\Q)v=i\partial_tv-(n-1)v-\partial_tn\bigg(u_0+\int_{0}^tv(s)\,ds\bigg).$$ Also the equation above is satisfied in $H^{-2}$ and therefore in the same distributional sense we have $$\partial_tu=v.$$ Moreover from we get $$\notag
\partial_tu=(1-\alpha\Delta\P-\Delta\Q)^{-1}\bigg(i\partial_tv-(n-1)v-\partial_tn\bigg(u_0+\int_{0}^tv(s)\,ds\bigg)\bigg)\in\mathcal{C}(I;L^2)$$ therefore $u\in\mathcal{C}^1(I;L^2).$ It is straightforward that $u^\alpha(0,x)=u_0$ and so the proof is complete.
As discussed above, we are going to prove the result by means of a fixed point argument. Let us define the function $$\tilde v(t):=v(t)-U(\alpha t)\P(i\alpha\Delta u_0).$$ We look at the integral formulation for , namely $$\label{eq:duh_v}
\begin{aligned}
v(t)=U_Z(t)v(0)-i\int_0^tU_Z(t-s)\left(nv+\d_tnu\right)(s)\,ds\\
\end{aligned}$$ $$n(t)=\cos(t|\nabla|)n_0+\frac{\sin(t|\nabla|)}{|\nabla|}n_1+\int_0^t\frac{\sin((t-s)|\nabla|)}{|\nabla|}\Delta|u|^2\,ds,$$ with $u$ determined by the following elliptic equation $$-\alpha\nabla\times\nabla\times u+\nabla\operatorname{div}u=n\left(u_0+\int_0^tv(s)\,ds\right)-iv,$$ and $v(0)$ is given by . This implies that $\tilde v$ must satisfy the following integral equation $$\begin{aligned}
\tilde v(t)&=U(\alpha t)\P(-in_0u_0)+U(t)\Q(i\Delta u_0-in_0u_0)\\
&\phantom{\quad}-i\int_0^tU_Z(t-s)\left(\tilde vn+nU(\alpha\cdot)\P(i\alpha\Delta u_0)+\d_tnu\right)(s)\,ds.
\end{aligned}$$ Let us consider the space $$\begin{aligned}
X=\big\{(\tilde v,n):\,&\tilde v\in S^2([0, T]), n\in \mathcal W^1([0,T]),\\
&\|\tilde v\|_{S^2(I)}\leq M, \|n\|_{\mathcal W^1(I)}\leq M\big\},
\end{aligned}$$ endowed with the norm $$\|(\tilde v, n)\|_X:=\|\tilde v\|_{S^2(I)}+\|n\|_{\mathcal W^1(I)}.$$ Here $0<T\leq1, M>0$ will be chosen subsequently and $I:=[0,T]$. From the third equation in and the definition of $\tilde v$ we have $$\label{eq:102}
\begin{aligned}
-\alpha\nabla\times\nabla\times u+\nabla\operatorname{div}u&= -i \tilde{v}-iU(\alpha t)(i\alpha\Delta\P u_0)\\
&\phantom{\quad\,}-in\bigg(u_0+\int_0^t\tilde{v}(s)+U(\alpha s)(i\alpha\Delta\P u_0)\,ds\bigg),
\end{aligned}$$ thus it is straightforward to see that given $n, \tilde v$, then $u$ is uniquely determined. Furthermore, by applying the projection operators $\P, \Q$, respectively, to we obtain $$\a\Delta\P u=-i\P[\tilde{v}+U(\a t)\P(i\a\Delta u_0)]+\P\bigg[n \bigg(u_0+\int_0^t\tilde{v}(s)+U(\a s)\P(i\a\Delta u_0)\,ds \bigg)\bigg]$$ and $$\Delta \Q u=-i\Q\tilde{v}+\Q \bigg [n \bigg(u_0+\int_0^t\tilde{v}(s)+U(\a s)\P(i\a\Delta u_0)\,ds \bigg) \bigg].$$ We now estimate the irrotational and solenoidal parts of $\Delta u$ separately. Let us start with $\Q\Delta u:$ by Hölder inequality and Sobolev embedding we obtain $$\begin{aligned}
\|\Delta\Q u\|_{L^\infty_tL^2_x}&\lesssim\|\tilde{v}\|_{L^\infty_tL^2_x}+\|n\|_{L^\infty_tH^1_x}\|u_0\|_{H^2}+T^{1/2}\|n\|_{L^\infty_tH^1_x}\|\tilde v\|_{L^{2}_tL^6_x}\\
&\phantom{\quad\,}+T^{1/2}\|n\|_{L^\infty_tH^1_x}\|U(\alpha t)\P(i\alpha\Delta u_0)\|_{L^2_tL^6_x}.
\end{aligned}$$ To estimate the last term, we use the Strichartz estimate in ; let us notice that by choosing the admissible exponents $(q, r)=(2, 6)$ we obtain a factor $\alpha^{-1}$ in the estimate, which balances the term $\alpha$ appearing above. We thus have $$\|\Delta\Q u\|_{L^\infty_tL^2_x}\lesssim(\|u_0\|_{H^2}+1)M+M^2.$$ By similar calculations, we also obtain an estimate for $\P\Delta u$, $$\|\P\Delta u\|_{L^\infty_tL^2_x}\lesssim \|u_0\|_{H^2}^2+\|u_0\|_{H^2}M+M^2.$$ We then sum up the contributions given by the irrotational and solenoidal parts to get $$\label{eq:3.15}
\|u\|_{L^\infty_tH^2_x}\lesssim \|u_0\|_{H^2}^2+\|u_0\|_{H^2}M+M^2\leq C(\|u_0\|_{H^2})\big(1+M^2\big).$$
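For the reader's convenience, the Hölder–Sobolev step behind the factors $T^{1/2}\|n\|_{L^\infty_tH^1_x}\|\tilde v\|_{L^{2}_tL^6_x}$ appearing above can be spelled out; this is a standard computation in $\R^3$:

```latex
% Hölder in space with exponents (3,6), Hölder in time with (2,2),
% then the Sobolev embedding H^1(\R^3) \hookrightarrow L^3(\R^3):
\|n\tilde v\|_{L^1_tL^2_x}
  \leq \int_0^T \|n(t)\|_{L^3_x}\,\|\tilde v(t)\|_{L^6_x}\,dt
  \leq T^{1/2}\,\|n\|_{L^\infty_tL^3_x}\,\|\tilde v\|_{L^2_tL^6_x}
  \lesssim T^{1/2}\,\|n\|_{L^\infty_tH^1_x}\,\|\tilde v\|_{L^2_tL^6_x}.
```

The same pattern, with $H^1\hookrightarrow L^3$ replaced by the algebra property of $H^2(\R^3)$ where needed, gives all the bilinear bounds used in this proof.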
Similar calculations also give $$\begin{aligned}
\|u-u'\|_{L^\infty(I;H^2)}&\lesssim\|\tilde v-\tilde v'\|_{L^{\infty}_tL^2_x}+\|n-n'\|_{L^\infty_tH^1_x}\\
&\phantom{\quad\,}+M(\|n-n'\|_{L^\infty_tH^1_x}+\|\tilde v-\tilde v'\|_{L^{2}_tL^6_x})\\
&\leq C(1+M)\|(\tilde v, n)-(\tilde v', n')\|_X.
\end{aligned}$$
Given $(\tilde v, n)\in X$ we define the map $\Phi:X\to X$, $\Phi(\tilde v, n)=(\Phi_S, \Phi_W)(\tilde v, n)$ by $$\begin{aligned}
\Phi_S&=U(\alpha t)\P(-in_0u_0)+U(t)\Q(i\Delta u_0-in_0u_0)\label{eq:fixs}\\
&\phantom{\quad\,}-i\int_0^tU(\alpha(t-s))\P(\tilde vn+nU(\alpha\cdot)\P(i\alpha\Delta u_0)+\d_tnu)(s)\,ds\notag\\
&\phantom{\quad\,}-i\int_0^tU(t-s)\Q\left(n\tilde v+nU(\alpha\cdot)(i\alpha\Delta u_0)+\d_tnu\right)(s)\,ds\notag\\
\Phi_W&=\cos(t|\nabla|)n_0+\frac{\sin(t|\nabla|)}{|\nabla|}n_1+\int_0^t\frac{\sin((t-s)|\nabla|)}{|\nabla|}\Delta|u|^2(s)\,ds,\label{eq:fixw}\end{aligned}$$ where $u$ in the formulas above is given by and its $L^\infty_tH^2_x$ norm is bounded in . Let us first prove that, by choosing $T$ and $M$ properly, $\Phi$ maps $X$ into itself.
Let us first analyze the Schrödinger part . By the Strichartz estimates in , Hölder inequality and Sobolev embedding we have $$\|U(\alpha t)\P(-in_0u_0)+U(t)\Q(i\Delta u_0-in_0u_0)\|_{L^{q}L^r}\lesssim\|u_0\|_{H^2}+\|n_0\|_{H^1}\|u_0\|_{H^2}.$$ We treat the inhomogeneous part similarly, $$\begin{aligned}
\bigg\|\int_0^tU_Z(t-s)\left(n\tilde v+nU(\alpha s)(i\alpha\P \Delta u_0)\right)(s)\,ds\bigg\|_{L^q_tL^r_x}&\lesssim\|n\tilde v+nU(\alpha \cdot)(i\alpha\Delta\P u_0)\|_{L^1_tL^2_x}\\
\lesssim T^{1/2}\|n\|_{L^\infty_tH^1_x}(\|\tilde v\|_{L^2_tL^6_x}+\|U(\alpha t)\P(i\alpha\Delta u_0)\|_{L^2_tL^6_x})&\lesssim T^{1/2}M(M+\|u_0\|_{H^2}).
\end{aligned}$$ where in the last inequality we again used with $(2, 6)$ as admissible pair. Similarly, $$\begin{aligned}
\bigg\|\int_0^tU_Z(t-s)\left(\d_tnu\right)(s)\,ds\bigg\|_{L^q_tL^r_x}&\lesssim T\|\d_tn\|_{L^\infty_tL^2_x}\|u\|_{L^\infty_tH^2_x}\\
&\lesssim C(\|u_0\|_{H^2})TM\big(1+M^2\big),
\end{aligned}$$ where in the last line we use the bound . Collecting these estimates we get $$\label{eq:201}
\|\Phi_S(\tilde v, n)\|_{L^q_tL^r_x}\leq C(\|u_0\|_{H^2}, \|n_0\|_{L^2})+CT^{1/2}M(1+M).$$
For the wave component we use formula and Hölder inequality to obtain $$\begin{aligned}
\|\Phi_W(\tilde v, n)\|_{\mathcal W^1(I)}&\leq C(1+T)\|n_0\|_{H^1}+\|n_1\|_{L^2}+\|\Delta|u|^2\|_{L^1_tL^2_x}\\
&\leq C\left(\|n_0\|_{H^1}+\|n_1\|_{L^2}\right)+T\|u\|^2_{L^\infty_tH^2_x},
\end{aligned}$$ where we used the fact that $H^2(\R^3)$ is an algebra. From we infer $$\label{eq:202}
\|\Phi_W(\tilde v, n)\|_{\mathcal W^1(I)}\leq C(\|n_0\|_{H^1}, \|n_1\|_{L^2})+T\big(M+M^4\big).
$$ The bounds and together yield $$\|\Phi(\tilde v, n)\|_{X}\leq C(\|(u_0, n_0, n_1)\|_{\mathcal H_2})+CT^{1/2}M(1+M^3).$$ Let us choose $M$ such that $$\frac{M}{2}=C(\|(u_0, n_0, n_1)\|_{\mathcal H_2})$$ and $T$ such that $$CT^{1/2}(1+M^3)<\frac12,$$ we then obtain $\|\Phi(\tilde v, n)\|_X\leq M$. Hence $\Phi$ maps $X$ into itself. It thus remains to prove that $\Phi$ is a contraction. Arguing similarly to what we did before we obtain $$\begin{aligned}
\|\Phi_S(\tilde{v}, n)-\Phi_S(\tilde{v}', n')\|_{L^{q}_tL^r_x}&\leq C T^{1/2}(1+M)\|(\tilde v,n)-(\tilde v',n')\|_{L^{q}_tL^r_x}\\
\|\Phi_W(\tilde{v}, n)-\Phi_W(\tilde{v}', n')\|_{\mathcal W^1(I)}&\leq C T\big(1+M^3\big)\|(\tilde v,n)-(\tilde v',n')\|_{\mathcal W^1(I)}.
\end{aligned}$$ By possibly choosing a smaller $T>0$ such that $C T^{1/2}(1+M^3)<1$ we see that $\Phi:X\to X$ is a contraction and consequently there exists a unique $(\tilde v, n)\in X$ which is a fixed point for $\Phi$. Let us notice that the time $T$ depends only on $M$, hence $T=T(\|(u_0, n_0, n_1)\|_{\mathcal H_2})$. Furthermore from the definition of $\tilde v$ it follows that $(u, v, n)$ is a solution to , where $v=\tilde v+U(\alpha t)\P(i\alpha\Delta u_0)$. From we also see that the $L^\infty_tH^2_x$ norm of $u$ is uniformly bounded in $\alpha$.
Finally, from standard arguments we extend the solution on a maximal time interval, on which the standard blow-up alternative holds true and we can also infer the continuous dependence on the initial data.
Convergence of solutions. {#sect:conv}
=========================
Given the well-posedness results of the previous Section, we are now ready to study the electrostatic limit for the vectorial Zakharov system . In order to understand the effective dynamics we consider the system in its integral formulation, by splitting the Schrödinger linear propagator into its fast and slow dynamics, i.e. $U_Z(t)=U(\alpha t)\P+U(t)\Q$. In particular for $u^\alpha$ we have $$u^\alpha(t)=U(\alpha t)\P u_0+U(t)\Q u_0-i\int_0^tU(\alpha(t-s))\P(nu)(s)\,ds-i\int_0^tU(t-s)\Q(nu)(s)\,ds.$$ Due to fast oscillations, we expect the terms of the form $U(\alpha t)f$ to go weakly to zero as $\alpha\to\infty$. This fact can be seen quantitatively by using the Strichartz estimates in . However, while for the third term we can choose $(\gamma, \rho)$ in a suitable way such that it converges to zero in every Strichartz space, by the unitarity of $U(\alpha t)$ we see that $\|U(\alpha t)\P u_0\|_{L^\infty_tL^2_x}$ cannot converge to zero, while $\|U(\alpha t)\P u_0\|_{L^q_tL^r_x}\to0$ for any admissible pair $(q, r)\neq(\infty, 2)$.
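This dichotomy can be made explicit by a change of variables. As an illustration — assuming here that $U(\tau)=e^{i\tau\Delta}$ is the free Schrödinger flow, so the precise power of $\alpha$ depends on the normalization of the Strichartz estimates invoked above — the substitution $s=\alpha t$ gives

```latex
% Substituting s = \alpha t, ds = \alpha\,dt, in the time integral:
\|U(\alpha t)\P u_0\|_{L^q_tL^r_x}
  = \Big(\int_{\R}\|e^{i\alpha t\Delta}\P u_0\|_{L^r_x}^q\,dt\Big)^{1/q}
  = \alpha^{-1/q}\,\|e^{is\Delta}\P u_0\|_{L^q_sL^r_x}
  \lesssim \alpha^{-1/q}\,\|\P u_0\|_{L^2},
```

so every Strichartz norm with $q<\infty$ gains a negative power of $\alpha$ and vanishes as $\alpha\to\infty$, while the pair $(\infty, 2)$ gains nothing, in accordance with the unitarity obstruction just described.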
This is indeed due to the presence of an initial layer for the electrostatic limit for when dealing with “ill-prepared” initial data. In general, for arbitrary initial data, the right convergence should be given by $$\tilde u^\alpha(t):=u^\alpha(t)-U(\alpha t)\P u_0\to u^\infty$$ in all Strichartz spaces, where $u^\infty$ is the solution to . Let us notice that $\tilde u^\alpha$ is related to the auxiliary variable $\tilde v^\alpha$ defined in and used to prove the local well-posedness results in , since we have $\tilde v^\alpha=\d_t\tilde u^\alpha$.
Our strategy to prove the electrostatic limit goes through studying the convergence of $(v^\alpha, n^\alpha, u^\alpha)$, studied in the previous Section, towards solutions to $$\label{eq4.5}
\left\{ \begin{array}{lll}
i\partial_tv^{\infty}+\Delta v^{\infty}=\Q(n^{\infty}v^{\infty}+\partial_tn^{\infty} u^{\infty}) \\
\partial_{tt}n^{\infty}-\Delta n^{\infty}=\Delta|u^{\infty}|^2 \\
iv^{\infty}+\Delta u^{\infty}=\Q(n^{\infty}u^{\infty}),
\end{array} \right.$$ which is the auxiliary system associated to . Again, we exploit such auxiliary formulations in order to overcome the difficulty generated by the loss of derivatives on the terms $|u^\alpha|^2$ and $|u^\infty|^2$.
Unfortunately our strategy is not suitable to study the limit in the presence of an initial layer. Indeed for ill-prepared data we should consider $\tilde u^\alpha$, and consequently $\tilde v^\alpha$ defined in , for the auxiliary system. This means that when studying the auxiliary variable $v^\alpha$ the initial layer itself becomes singular. For this reason here we restrict ourselves to studying the limit with well-prepared data. More specifically, we consider $(u_0^\alpha, n_0^\alpha, n_1^\alpha)\in \mathcal{H}_2$ such that $$\label{eq:conv_id}
\|(u_0^\alpha, n_0^\alpha, n_1^\alpha)-(u_0^\infty, n_0^\infty, n_1^\infty)\|_{\mathcal{H}_2}\to0$$ for some $(u_0^\infty, n_0^\infty, n_1^\infty)\in \mathcal{H}_2$ and $$\label{eq:wp}
\|\P u_0^\alpha\|_{H^2}\to0.$$ This clearly implies that the initial datum for the limit equation is irrotational, i.e. $\P u_0^\infty=0$.
In view of the above discussion, it is reasonable to study the initial layer by considering the Cauchy problem for the Zakharov system in low regularity spaces, by exploiting recent results in [@BC; @GTV; @BH]. However this goes beyond the scope of our paper and could be the subject of future investigation.
To prove the convergence result stated in we will study the convergence from to . The main result of this Section is the following.
\[mainth\] Let $\alpha\geq1$ and let $(u_0^\alpha, n_0^\alpha, n_1^\alpha)$, $(u_0^\infty, n_0^\infty, n_1^\infty)\in \mathcal{H}_2$ be initial data such that and hold true. Let $(u^\alpha, v^\alpha, n^\alpha)$ be the maximal solution to with Cauchy data $(u_0^\alpha, n_0^\alpha, n_1^\alpha)$ given by and analogously let $(u^\infty, v^\infty, n^\infty)$ be the maximal solution to in the interval $[0, T^\infty_{max})$ according to . Then for any $0<T<T^\infty_{max}$ we have $$\lim_{\alpha\to\infty}\|(u^\alpha, v^\alpha, n^\alpha)-(u^\infty, v^\infty, n^\infty)\|_{L^\infty(0, T;\mathcal H_2)}=0.$$
The proof of the Theorem above is divided into two main steps. First we prove in that, as long as the $\mathcal{H}_2$ norm of $(u^\alpha(T), n^\alpha(T),\d_tn^\alpha(T))$ is bounded, the convergence holds true in $[0, T]$. The second step consists of proving that the $\mathcal{H}_2$ bound on $(u^\alpha(T), n^\alpha(T), \d_tn^\alpha(T))$ holds true for any $0<T<T^\infty_{max}$. A similar strategy of proof has already been exploited in the literature to study the asymptotic behavior of time oscillating nonlinearities, see for example [@CS] where the authors consider a time oscillating nonlinearity, or [@AW] where, in a system of two nonlinear Schrödinger equations, a rapidly varying linear coupling term averages out the effect of the nonlinearities. We also mention [@CPS] where a similar strategy is used to study a time-oscillating critical Korteweg-de Vries equation.
\[lemma4.2\] Let $(u^\alpha, v^\alpha, n^\alpha)$, $(u^\infty, v^\infty, n^\infty)$ be defined as in the statement of and let us assume that for some $0<T_1<T^\infty_{max}$ we have $$\sup_{\alpha\geq1}\|(u^\alpha, n^\alpha, \d_tn^\alpha)\|_{L^\infty(0, T_1;\mathcal H_2)}<\infty.$$ It follows that $$\lim_{\alpha\to\infty}\left(\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}+\|v^\alpha-v^\infty\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1}\right)=0,$$ where all the norms are taken in the space-time slab $[0, T_1]\times\R^3$. In particular we have $$\lim_{\alpha\to\infty}\|(u^\alpha, n^\alpha, \d_tn^\alpha)-(u^\infty, n^\infty, \d_tn^\infty)\|_{L^\infty(0, T_1;\mathcal H_2)}=0.$$
We assume for the moment that holds true; we first show how this implies .
Let $0<T<T_{max}^\infty$ be fixed and let us define $$N:=2\|(u^\infty, n^\infty, \d_tn^\infty)\|_{L^\infty(0, T;\mathcal H_2)}.$$ From the local well-posedness theory, see , there exists $\tau=\tau(N)$ such that the solution $(u^\alpha, n^\alpha, \d_tn^\alpha)$ to exists on $[0, \tau]$ and we have $$\|(u^\alpha, n^\alpha, \d_tn^\alpha)\|_{L^\infty(0, T_1;\mathcal H_2)}<\infty.$$ We observe that, because of what we said before, the choice $T_1=\tau$ is always possible. By the we infer that $$\lim_{\alpha\to\infty}\|(u^\alpha, n^\alpha, \d_tn^\alpha)-(u^\infty, n^\infty, \d_tn^\infty)\|_{L^\infty(0, T_1;\mathcal H_2)}=0.$$ On the other hand by the definition of $N$ we have that, for $\alpha$ large enough, $$\begin{aligned}
\|(u^\alpha, n^\alpha, \d_tn^\alpha)(T_1)\|_{\mathcal{H}_2}&\leq\|(u^\alpha, n^\alpha, \d_tn^\alpha)(T_1)-(u^\infty, n^\infty, \d_tn^\infty)(T_1)\|_{\mathcal{H}_2}\\
&\phantom{\quad\,}+\|(u^\infty, n^\infty, \d_tn^\infty)(T_1)\|_{\mathcal{H}_2}\leq N.
\end{aligned}$$ Consequently we can apply to infer that $(u^\alpha, n^\alpha)$ exists on a larger time interval $[0, T_1+\tau]$, provided $T_1+\tau\leq T$, and again $$\|(u^\alpha, n^\alpha, \d_tn^\alpha)\|_{L^\infty(0, T_1+\tau;\mathcal H_2)}\leq 2N.$$ We can repeat the argument iteratively on the whole interval $[0, T]$ to infer $$\|(u^\alpha, n^\alpha, \d_tn^\alpha)\|_{L^\infty(0, T;\mathcal H_2)}\leq 2N.$$ By using this proves the Theorem.
It only remains now to prove .
Let us fix $$M:=\sup_{\alpha}\sup_{[0, T_1]}\|(u^\alpha, n^\alpha, \d_tn^\alpha)(t)\|_{\mathcal{H}_2}.$$ By using the integral formulation for and we have $$\begin{aligned}
v^{\alpha}(t)-v^{\infty}(t)&=U(\alpha t)\P(\alpha\Delta u^{\alpha}_0-iu^{\alpha}_0n^{\alpha}_0)+U(t)\Q(v^{\alpha}_0-v^{\infty}_0)\\
&\phantom{\quad\,}-i\int_{0}^t U(\alpha(t-s))[\P (\partial_t(n^{\alpha}u^{\alpha}))](s)\,ds\\
&\phantom{\quad\,}-i\int_{0}^t U(t-s)[\Q (\partial_t(n^{\alpha}u^{\alpha})-\partial_t(n^{\infty}u^{\infty}))](s)\,ds.
\end{aligned}$$ Now we use the Strichartz estimates in to get $$\begin{aligned}
\|v^\alpha-v^\infty\|_{L^2_tL^6_x}&\lesssim\|\P u_0^\alpha\|_{H^2}+\alpha^{-1}\|n_0^\alpha\|_{H^1}\|u_0^\alpha\|_{H^2}+\|v^\alpha_0-v^\infty_0\|_{L^2}\\
&\phantom{\quad\,}+\alpha^{-1/2}\|n^\alpha v^\alpha+\d_tn^\alpha u^\alpha\|_{L^1_tL^2_x}\\
&\phantom{\quad\,}+\|n^\alpha v^\alpha-n^\infty v^\infty\|_{L^1_tL^2_x}+\|\d_tn^\alpha u^\alpha-\d_tn^\infty u^\infty\|_{L^1_tL^2_x}.
\end{aligned}$$ It is straightforward to check that, by Hölder inequality and Sobolev embedding, $$\begin{aligned}
\|n^\alpha v^\alpha+\d_tn^\alpha u^\alpha\|_{L^1_tL^2_x}&\leq C(T, M),\\
\|n^\alpha v^\alpha-n^\infty v^\infty\|_{L^1_tL^2_x}&\lesssim T^{1/2}(\|n^\alpha-n^\infty\|_{L^\infty_tH^1_x}+\|v^\alpha-v^\infty\|_{L^2_tL^6_x}),\\
\|\d_tn^\alpha u^\alpha-\d_tn^\infty u^\infty\|_{L^1_tL^2_x}&\lesssim T\left(\|\partial_tn^\alpha-\d_tn^\infty\|_{L^\infty_tL^2_x}+\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}\right).
\end{aligned}$$ By putting all the estimates together we obtain $$\begin{aligned}
\|v^\alpha-v^\infty\|_{L^2_tL^6_x}&\lesssim\|\P u_0^\alpha\|_{H^2}+\alpha^{-1}\|n_0^\alpha\|_{H^1}\|u_0^\alpha\|_{H^2}+\|u_0^\alpha-u_0^\infty\|_{H^2}+\alpha^{-1/2}+\|n_0^\alpha-n_0^\infty\|_{H^1}\\
&\phantom{\quad\,}+T^{1/2}(\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}+\|v^\alpha-v^\infty\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1}).
\end{aligned}$$ To estimate the wave part in and , we write $$\begin{aligned}
n^{\alpha}-n^{\infty}&=\cos(t|\nabla|)(n^{\alpha}_0-n^{\infty}_0)+\frac{\sin(t|\nabla|)}{|\nabla|}(n^{\alpha}_1-n^{\infty}_1)\\
&\phantom{\quad\,}+\int_0^t\frac{\sin((t-s)|\nabla|)}{|\nabla|}\Delta(|u^{\alpha}|^2-|u^{\infty}|^2)\,ds,
\end{aligned}$$ whence, by using again that $H^2(\R^3)$ is an algebra, $$\|n^\alpha-n^\infty\|_{\mathcal W^1}\lesssim\|n_0^\alpha-n_0^\infty\|_{H^1}+\|n_1^\alpha-n_1^\infty\|_{L^2}+T\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}.$$ The estimate for the difference $u^\alpha-u^\infty$ is more delicate. From the third equations in and we have $$-\alpha\nabla\times\nabla\times u^\alpha+\nabla\operatorname{div}(u^\alpha-u^\infty)=i(v^\alpha-v^\infty)-n^\alpha u^\alpha+\Q(n^\infty u^\infty).$$ Again, here we estimate separately the irrotational and solenoidal parts of the difference. For the solenoidal part we obtain $$\alpha\|\P\Delta u^\alpha\|_{L^\infty_tL^2_x}\lesssim\|v^\alpha\|_{L^\infty_tL^2_x}+\|n^\alpha u^\alpha\|_{L^\infty_tL^2_x}.$$ To estimate the $L^\infty_tL^2_x$ norm of $v^\alpha$ on the right hand side we use and Strichartz estimates to infer $$\|v^\alpha\|_{L^\infty_tL^2_x}\lesssim\alpha\|\P u_0^\alpha\|_{H^2}+\|u_0^\alpha\|_{H^2}\|n_0^\alpha\|_{H^1}+1.$$ Hence $$\alpha\|\P\Delta u^\alpha\|_{L^\infty_tL^2_x}\lesssim\alpha\|\P u^\alpha_0\|_{H^2}+\|u_0^\alpha\|_{H^2}\|n_0^\alpha\|_{H^1}+1.$$ For the irrotational part $$\label{eq:u_diff}
\|\Q\Delta(u^\alpha-u^\infty)\|_{L^\infty_tL^2_x}\lesssim\|\Q(v^\alpha-v^\infty)\|_{L^\infty_tL^2_x}+\|n^\alpha u^\alpha-n^\infty u^\infty\|_{L^\infty_tL^2_x}.$$ By using , the analogous integral formulation for $v^\infty$ and by applying the Helmholtz projection operator $\Q$ to their difference we have that the first term on the right hand side is bounded by $$\begin{aligned}
\|\Q(v^\alpha-v^\infty)\|_{L^\infty_tL^2_x}&\lesssim\|u_0^\alpha-u_0^\infty\|_{H^2}+\|n_0^\alpha-n_0^\infty\|_{H^1}\\
&\phantom{\quad\,}+T^{1/2}\left(\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}+\|v^\alpha-v^\infty\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1}\right).
\end{aligned}$$ The second term on the right hand side of is estimated by $$\begin{aligned}
\|n^\alpha u^\alpha-n^\infty u^\infty\|_{L^\infty_tL^2_x}&\lesssim\|n^\alpha-n^\infty\|_{L^\infty_tL^2_x}\|u^\alpha\|_{L^\infty_tH^2_x}\\
&\phantom{\quad\,}+\|n^\infty(u_0^\alpha-u_0^\infty)\|_{L^\infty_tL^2_x}+\bigg\|n^\infty\int_0^t(v^\alpha-v^\infty)\bigg\|_{L^\infty_tL^2_x}\\
&\lesssim\left(\|n_0^\alpha-n_0^\infty\|_{L^2}+T\|\d_tn^\alpha-\d_tn^\infty\|_{L^\infty_tL^2_x}\right)M\\
&\phantom{\quad\,}+\|n^\infty\|_{L^\infty_tL^2_x}\|u_0^\alpha-u_0^\infty\|_{H^2_x}\\
&\phantom{\quad\,}+T^{1/2}\|n^\infty\|_{L^\infty_tH^1_x}\|v^\alpha-v^\infty\|_{L^2_tL^6_x}.
\end{aligned}$$ By summing up the two contributions in we then get $$\begin{aligned}
\|\Q\Delta(u^\alpha-u^\infty)\|_{L^\infty_tL^2_x}\lesssim &\|u_0^\alpha-u_0^\infty\|_{H^2}+\|n_0^\alpha-n_0^\infty\|_{H^1}\\
&+T^{1/2}\left(\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}+\|v^\alpha-v^\infty\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1}\right).
\end{aligned}$$ Finally, we notice that, by using the Schrödinger equations in and , we have $$\|u^\alpha-u^\infty\|_{L^\infty_tL^2_x}\lesssim T\left(\|n^\alpha-n^\infty\|_{L^\infty_tH^1_x}+\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}\right),$$ so that $$\begin{aligned}
\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}&\lesssim\|u_0^\alpha-u_0^\infty\|_{H^2}+\|n_0^\alpha-n_0^\infty\|_{H^1}+\|\P u^\alpha_0\|_{H^2}+\alpha^{-1}\\
&\phantom{\quad\,}+T^{1/2}\left(\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}+\|v^\alpha-v^\infty\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1} \right).
\end{aligned}$$ Putting everything together, we finally obtain $$\begin{aligned}
\|v^\alpha-v^\infty&\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1}+\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}\lesssim\\
&\lesssim\|\P u_0^\alpha\|_{H^2}+\alpha^{-1}+\|u_0^\alpha-u_0^\infty\|_{H^2}+\|n_0^\alpha-n_0^\infty\|_{H^1}+\|n_1^\alpha-n_1^\infty\|_{L^2}\\
&\phantom{\quad\,}+T^{1/2}\left(\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}+\|v^\alpha-v^\infty\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1}\right).
\end{aligned}$$ By choosing $T$ small enough depending on $M$ we can infer $$\begin{aligned}
\|v^\alpha-v^\infty&\|_{L^2_tL^6_x}+\|n^\alpha-n^\infty\|_{\mathcal W^1}+\|u^\alpha-u^\infty\|_{L^\infty_tH^2_x}\lesssim\\
&\lesssim\|\P u_0^\alpha\|_{H^2}+\alpha^{-1}+\|u_0^\alpha-u_0^\infty\|_{H^2}+\|n_0^\alpha-n_0^\infty\|_{H^1}+\|n_1^\alpha-n_1^\infty\|_{L^2}.
\end{aligned}$$ This proves the convergence in the time interval $[0, T]$, for $T>0$ small enough. Let now $T_1>0$ be as in the statement of the Lemma; we can divide $[0, T_1]$ into finitely many subintervals of length at most $T$, so that the convergence holds on each subinterval. By gluing them together we prove the Lemma.
<span style="font-variant:small-caps;">Antonelli P., Saut J.-C., Sparber C.,</span> *Well-Posedness and averaging of NLS with time-periodic dispersion management*, Adv. Diff. Eqns. 18, no. 1/2 (2013), 49–68.
<span style="font-variant:small-caps;">Antonelli P., Weishaeupl R.,</span> *Asymptotic Behavior of Nonlinear Schrödinger Systems with Linear Coupling*, JHDE 11, no. 1 (2014), 159–183.
<span style="font-variant:small-caps;">Bourgain, J., Colliander, J.,</span> *On wellposedness of the Zakharov system*, International Mathematics Research Notices, 1996, no. 11.
<span style="font-variant:small-caps;">Bejenaru, I. Herr, S.,</span> *Convolutions of singular measures and applications to the Zakharov system,* J. Funct. Anal. 261 (2011), no. 2, 478-506.
<span style="font-variant:small-caps;">Carvajal X., Panthee M., Scialom M.,</span> *On the critical KdV equation with time-oscillating nonlinearity*, Diff. Int. Eqns. 24, no. 5/6 (2011), 541–567.
<span style="font-variant:small-caps;">Cazenave, T.,</span> *Semilinear Schrödinger equations*, Courant Lecture Notes in Mathematics, 10. American Mathematical Society, Providence, RI, 2003.
<span style="font-variant:small-caps;">Cazenave T., Scialom M.</span> *A Schrödinger equation with time-oscillating nonlinearity*, Rev. Mat. Comp. 23, no. 2 (2010), 321–339.
<span style="font-variant:small-caps;">Glangetas, L., Merle, F.,</span> *Existence of self-similar blowup solutions for Zakharov equations in dimension 2. Part I and II*, Commun. Math. Phys. 160, 173-215 (1994).
<span style="font-variant:small-caps;">Ginibre, J., Tsutsumi, Y., Velo, G.,</span> *On the Cauchy problem for the Zakharov system* J. Funct. Anal. 151 (1997), 384-436.
<span style="font-variant:small-caps;">Ginibre, J., Velo, G.,</span> *The global Cauchy problem for the nonlinear Schrödinger equation revisited,* Ann. Inst. H. Poincaré, Anal. Non Linéaire 2: 309-327 (1985).
<span style="font-variant:small-caps;">Kenig, C. E., Ponce, G. and Vega, L.,</span> *On the Zakharov and Zakharov-Schulman systems,* J. Funct. Anal. 127 (1995) no. 1, 204-234.
<span style="font-variant:small-caps;">Keel, M., Tao, T.,</span> *Endpoint Strichartz estimates*, Amer. Jour. Math. 120 (1998), 955-980.
<span style="font-variant:small-caps;">Merle, F.,</span> *Blow-up results of virial type for Zakharov system*, Comm. Math. Phys. 175 (1996), 433-455.
<span style="font-variant:small-caps;">Masmoudi, N., Nakanishi, K.,</span> *Energy convergence for singular limits of Zakharov type systems,* Invent. Math. 172 (2008), no. 3, 535-583.
<span style="font-variant:small-caps;">Ozawa, T., Tsutsumi, Y.,</span> *Existence of smoothing effect of solutions for the Zakharov equations,* Publ. Res. Inst. Math. Sci. 28 (1992), no. 3, 329-361.
<span style="font-variant:small-caps;">Ozawa, T., Tsutsumi, Y.,</span> *Global existence and asymptotic behavior of solutions for the Zakharov equations in three space dimensions,* Adv. Math. Sci. Appl. 3 (1993/94), Special Issue, 301-334.
<span style="font-variant:small-caps;">Schochet, S.H., Weinstein, M.I.,</span> *The nonlinear Schrödinger limit of the Zakharov equations governing Langmuir turbulence,* Comm. Math. Phys., 106 (1986), 569-580.
<span style="font-variant:small-caps;">Sulem, C., Sulem, P.L.,</span> *The Nonlinear Schrödinger Equation: Self-Focusing and Wave Collapse,* Applied Mathematical Sciences, 139. Springer-Verlag, New York, 1999.
<span style="font-variant:small-caps;">Sulem, C., Sulem, P.L.,</span> *Quelques résultats de régularité pour les équations de la turbulence de Langmuir*, C. R. Acad. Sci. Paris [**289**]{} (1979), 173–176.
<span style="font-variant:small-caps;">Thornhill S.G., ter Haar D.,</span> *Langmuir Turbulence and Modulational Instability*, Phys. Reports 43 (1978), 43-99.
<span style="font-variant:small-caps;">Yajima, K.,</span> *Existence of solutions for Schrödinger evolution equations,* Comm. Math. Phys. 110 (1987), no. 3, 415-426.
<span style="font-variant:small-caps;">Zakharov, V.E.,</span> *Collapse of Langmuir waves,* Sov. Phys. JETP, 35 (1972), 908-914.
---
abstract: 'Largely motivated by the development of highly sensitive gravitational-wave detectors, our understanding of merging compact binaries and the gravitational waves they generate has improved dramatically in recent years. Breakthroughs in numerical relativity now allow us to model the coalescence of two black holes with no approximations or simplifications. There has also been outstanding progress in our analytical understanding of binaries. We review these developments, examining merging binaries using black hole perturbation theory, post-Newtonian expansions, and direct numerical integration of the field equations. We summarize these approaches and what they have taught us about gravitational waves from compact binaries. We place these results in the context of gravitational-wave generating systems, analyzing the impact gravitational wave emission has on their sources, as well as what we can learn about them from direct gravitational-wave measurements.'
author:
- 'Scott A. Hughes'
bibliography:
- 'araa.bib'
title: |
Gravitational waves\
from merging compact binaries
---
gravitational waves, compact objects, relativistic binaries
Introduction {#sec:intro}
============
History and motivation
----------------------
Most physics students learn to solve the Newtonian gravitational two-body problem early in their studies. An exact solution for point masses requires only a few equations and an elliptic integral. Coupling this simple solution to perturbation theory lets us include the effect of non-spherical mass distributions and the impact of additional bodies. We can then accurately model an enormous range of astrophysically interesting and important systems.
By contrast, no exact analytic solution describes binaries in general relativity (GR). GR is nonlinear (making analytic solutions difficult to find except for problems that are highly symmetric) and includes radiation (so that any bound solution will evolve as waves carry off energy and angular momentum). Indeed, for systems containing black holes, GR doesn’t so much have a “two-body” problem as it has a “one-spacetime” problem: One cannot even delineate where the boundaries (the event horizons) of the “bodies” are until the entire spacetime describing the binary has been constructed.
Such difficulties in describing binary systems in GR bedeviled the theoretical development of this topic. Many early discussions centered on the even more fundamental question of which motions would generate radiation and which would not. A particularly clear formulation of the confusion is expressed in attempts to answer the following question: [*If a charge falls freely in the Earth’s gravitational field, does it radiate?*]{} On one hand, in this non-relativistic limit, we should expect that our usual intuition regarding accelerating charges would hold, and a falling charge should radiate exactly as described in [[@jackson]]{} with an acceleration $\vec a = -g \vec e_r$. On the other hand, in GR a falling charge simply follows a geodesic of the Earth’s curved spacetime; it is not “accelerating” relative to freely falling frames, and so is not accelerating in a meaningful sense. The charge just follows the trajectory geometry demands it follows. In this picture, the falling charge appears to [*not*]{} radiate. John Wheeler once asked a group of relativity theorists to vote on whether the falling charge radiates or not; their responses were split almost precisely down the middle (@kennefick, p. 157)[^1].
Similar conceptual issues affect the general two-body problem in relativity. As recently as 1976, it was pointed out [[@ergh76]]{} that there had not yet been a fully self-consistent derivation for the energy loss from a binary due to gravitational-wave (GW) backreaction. A major motivation for this criticism was the discovery of the binary pulsar PSR 1913+16 [[@ht75]]{}. It was clear that this system would be a powerful laboratory for testing theories of gravity, including the role of GW emission. However, as Ehlers et al. spelled out, the theoretical framework needed for such tests was not in good shape. Various approaches to understanding the evolution of binary systems tended to be inconsistent in the nature of the approximations they made, often treating the motion of the members of the binary at a different level of rigor than they treated the solution for the spacetime. These discrepancies were most notable when the members of the binary were strongly self gravitating; a weak-field approach is ill-suited to a binary containing neutron stars or black holes. These calculations generally predicted that the system would, at leading order, lose energy at a rate related to the third time derivative of the source’s “quadrupole moment.” However, the precise definition of this moment for strong-field sources was not always clear.
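For weak-field, slowly moving sources, the formula in question takes the standard textbook form, with $\mathcal I_{jk}$ the trace-free ("reduced") mass quadrupole moment and angle brackets denoting an average over several wave periods:

```latex
% Leading-order ("quadrupole formula") energy loss; overdots denote
% time derivatives and repeated spatial indices are summed.
\frac{dE}{dt} = -\frac{G}{5c^5}
  \left\langle \dddot{\mathcal I}_{jk}\,\dddot{\mathcal I}_{jk}\right\rangle,
\qquad
\mathcal I_{jk} = \int \rho\left(x_j x_k - \tfrac{1}{3}\delta_{jk}r^2\right) d^3x .
```

This expression is unambiguous for weak-field sources, but, as the discussion above makes clear, defining $\mathcal I_{jk}$ for strongly self-gravitating bodies requires the more careful treatments developed in response to the Ehlers et al. critique.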
The Ehlers et al. criticism served as a wake-up call, motivating the formulation of methods for modeling binary dynamics in a mathematically rigorous and self-consistent manner. Several independent approaches were developed; a concise and cogent summary of the principles and the leading results is given by [[@damour83]]{}. For the purpose of this review, a major lesson of the theoretical developments from this body of work is that the so-called “quadrupole formula” describing the loss of orbital energy and angular momentum due to GW emission is valid. Somewhat amazingly, one finds that the equations of motion are [*insensitive*]{} to much of the detailed structure of a binary’s members. Features such as the members’ size and compactness can be absorbed into the definition of their masses. This “principle of effacement” [[@damour_dt83]]{} tells us that the motions of two bodies with masses $m_1$ and $m_2$ can be predicted independent of whether those bodies are stars, neutron stars, or black holes[^2]. Over 30 years of study have since found extraordinary agreement between prediction and observation for the evolution of PSR 1913+16’s orbit [[@wt05]]{}. Additional inspiraling systems have been discovered; in all cases for which we have enough data to discern period evolution, the data agree with theory to within measurement precision (@stairs98, @nice05, @jacoby06, @kramerstairs08, @bhat08). At least one additional recently discovered system is likely to show a measurable inspiral in the next few years [[@kasian08]]{}. These measurements have validated our theory of GW generation, and are among our most powerful tests of GR.
Measuring GWs with the new generations of detectors will require even more complete models for their waveforms, and hence complete models of a binary’s evolution. Motivated by this, our theoretical understanding of merging compact binaries and their GWs has grown tremendously. The purpose of this review is to summarize what we have learned about these systems and their waves, focusing on theory but connecting it to current and planned observations. We examine the analytic and numerical toolkits that have been developed to model these binaries, discuss the different regimes in which these tools can be used, and summarize what has been learned about their evolution and waves. We begin by first examining compact binaries as astrophysical objects, turning next to how they are treated within the theory of GR.
Compact binaries: The astrophysical view {#sec:astroview}
----------------------------------------
From the standpoint of GR, all compact binary systems are largely the same until their members come into contact, modulo the value of parameters such as the members’ masses and spins and the orbital period at a given moment. This means that we only need one theoretical framework to model any compact binary that we encounter in nature. From the standpoint of astrophysics, though, all compact binary systems are [*not*]{} the same: A $1.4\,M_\odot - 1.4\,M_\odot$ neutron star binary forms through very different processes than those which form a $10^6\,M_\odot - 10^6\,M_\odot$ black hole binary. In this section, we summarize the astrophysical wisdom regarding the various compact binary systems that should be strong generators of GWs.
Compact binaries are organized most naturally by their masses. At the low end we have [*stellar-mass*]{} binaries, which include the binary pulsars discussed in the previous section. The data on binaries at this end are quite solid, thanks to the ability to tie models for the birth and evolution of these systems to observations. At least some fraction of short gamma-ray bursts are likely to be associated with the mergers of neutron star-neutron star (NS-NS) or black hole-neutron star (BH-NS) systems (@eichler89; @fox05). Gamma-ray telescopes may already be telling us about compact binary mergers many times per year [[@nakar06]]{}.
There is also evidence that nature produces [*supermassive*]{} binaries, in which the members are black holes with $M \sim 10^6 -
10^8\,M_\odot$ such as are found at the centers of galaxies. As described in more detail below, theoretical arguments combining hierarchical galaxy growth scenarios with the hypothesis that most galaxies host black holes generically predict the formation of such binaries. We have now identified quite a few systems with properties indicating that they may host such binaries. The evidence includes active galaxies with double cores [[@komossa03; @maness04; @rodriguez06]]{}; systems with doubly-peaked emission lines ([@zhou04]{}, [@gerke07]{}); helical radio jets, interpreted as the precession or modulation of a jet due to binarity ([@bbr80]{}, [@cw95]{}, [@lr05]{}); and systems that appear to be periodic or semi-periodic, such as the blazar OJ287 [[@valtonen08]]{}. There are also sources which suggest the system hosted a binary that recently merged, leading to the spin flip of a radio jet [[@me02]]{} or to the interruption and later restarting of accretion activity [[@liu03]]{}. As surveys go deeper and resolution improves, we may expect the catalog of candidate supermassive black hole binaries to expand.
Turn now from the observational evidence to theoretical models. If we assume that our galaxy is typical, and that the inferred density of NS-NS systems in the Milky Way should carry over to similar galaxies (correcting for factors such as typical stellar age and the proportion of stars that form neutron stars), then we can estimate the rate at which binary systems merge in the universe. [@nps91]{} and [@phinney91]{} first performed such estimates, finding a “middle-of-the-road” estimate that about 3 binaries per year merge to a distance of 200 Mpc. More recent calculations based on later surveys and observations of NS-NS systems have amended this number somewhat; the total number expected to be measured by advanced detectors is in the range of several tens per year (see, e.g., [@kalogera07]{} for a detailed discussion of methodology, and [@kim06]{} for a summary).
Population synthesis gives us a second way to model the distribution of stellar mass compact binaries. Such calculations combine data on the observed distribution of stellar binaries with models for how stars evolve. Begin with a pair of main sequence stars. The more massive star evolves into a giant, transfers mass onto its companion, and then goes supernova, leaving a neutron star or black hole. After some time, the companion also evolves into a giant and transfers mass onto its compact companion (and may be observable as a high-mass x-ray binary). In almost all cases, the compact companion is swallowed by the envelope of the giant star, continuing to orbit the giant’s core. The orbiting compact body can then unbind the envelope, leaving the giant’s core behind to undergo a supernova explosion and form a compact remnant. See [[@tvdh03]]{}, especially Fig. 16.12, for further discussion.
An advantage of population synthesis is that one can estimate the rate of formation and merger for systems which we cannot at present observe, such as stellar mass black hole-black hole (BH-BH) binaries, or for which we have only circumstantial evidence, such as neutron star-black hole (NS-BH) binaries (which presumably form some fraction of short gamma ray bursts). A disadvantage is that the models of stellar evolution in binaries have many uncertainties. There are multiple branch points in the scenario described, such as whether the binary remains bound following each supernova, and whether the binary survives common envelope evolution. As a consequence, the predictions of calculations based on population synthesis can be quite diverse. Though different groups generally agree well with the rates for NS-NS systems (by design), their predictions for NS-BH and BH-BH systems differ by quite a bit (@ypz98, @pp99). New data are needed to clear the theoretical cobwebs.
Binaries can also form dynamically through many-body interactions in dense environments, such as globular clusters. In such a cluster, the most massive bodies will tend to sink through mass segregation [[@spitzer69]]{}; as such, the core of the cluster will become populated with the heaviest bodies, either stars which will evolve into compact objects, or the compact objects themselves. As those objects interact with one another, they will tend to form massive binaries; calculations show that the production of BH-BH binaries is particularly favored. It is thus likely that globular clusters will act as “engines” for the production of massive compact binaries (@pzm00, @oor07, @mwdg08).
As mentioned above, the hierarchical growth scenario for galaxies, coupled with the hypothesis that most galactic bulges host large black holes (@kg01, @ferrarese02), generically predicts the formation of supermassive binaries, especially at high redshifts when mergers were common. The first careful discussion of this scenario was by [@bbr80]{}. In order for the black holes to get close enough to merge with one another due to GW emission, the black holes hosted by the merging galaxies must sink, via dynamical friction, into the center of the newly merged object. The binary thus formed will typically be very widely separated, and only harden through interactions with field stars (ejecting them from the center). For some time, it was unclear whether there would be enough stars to bring the holes close enough that they would be strong GW emitters. It is now thought that, at least on the low end of the black hole mass function ($M \lesssim 10^6-10^7\,M_\odot$), this so-called “last parsec problem” is not such a problem. Quite a few mechanisms have been found to carry the binary into the regime where GW emission can drive it to merger (@an02, @mm05). It is now fairly common to assume that some mechanism will bring a binary’s members into the regime where they can merge.
Much theoretical activity in recent years has thus focused on the coevolution of black holes and galaxies in hierarchical scenarios (@mhn01, @yt02, @vhm03). Galaxy mergers appear to be a natural mechanism to bring “fuel” to one or both black holes, igniting quasar activity; the formation of a binary may thus be associated with the duty cycle of quasars (@hco04, @hckh08, @dcshs08). Such scenarios typically find that most black hole mergers come at fairly high redshift ($z \gtrsim
3$ or so), and that the bulk of a given black hole’s mass is due to gas it has accreted over its growth.
A subset of binaries in the supermassive range are of particular interest to the relativity theorist. These binaries form by the capture of a “small” ($1 - 100\,M_\odot$) compact object onto an orbit around the black hole in a galactic center. Such binaries form dynamically through stellar interactions in the core (@sr97, @ha05); the formation rate predicted by most models is typically $\sim 10^{-7}$ extreme mass ratio binaries per galaxy per year [@ha05]. If the inspiraling object is a white dwarf or star, it could tidally disrupt as it comes close to the massive black hole, producing an x-ray or gamma-ray flare (@rbb05, @mhk08, @rrh08). If the inspiraling object is a neutron star or black hole, it will be swallowed whole by the large black hole. As such, it will almost certainly be electromagnetically quiet; however, as will be discussed in more detail below and in Sec., its GW signature will be loud, and is a particularly interesting target for GW observers.
Compact binaries: The relativity view {#sec:relview}
-------------------------------------
Despite the diverse astrophysical paths to forming a compact binary, the end result always looks more-or-less the same from the standpoint of gravity. We now briefly outline the general features of binary evolution in GR. As described near the beginning of Sec., in GR a binary is not so much described by “two bodies” as by “one spacetime.” The methods used to describe this spacetime depend on the extent to which the two-body description is useful.
Although it is something of an oversimplification, it is useful to divide the evolution of a binary into two or three broad epochs, following [@fh98]{}. First is the binary’s [*inspiral*]{}, in which its members are widely separated and can be readily defined as a pair of distinct bodies. During the inspiral, the binary’s mean separation gradually decreases due to the secular evolution of its orbital energy and angular momentum by the backreaction of gravitational radiation. The bodies eventually come together, merging into a single highly dynamical and asymmetric object. We call the [*merger*]{} the final transition from two bodies into one. If the final state of the system is a black hole, then its last dynamics are given by a [*ringdown*]{} as that black hole settles down from the highly distorted post-merger state into a Kerr black hole as required by the “no hair” theorems of GR [[@carter71; @robinson75]]{}.
How we solve the equations of GR to model a binary and its GWs depends upon the epoch that describes it. When the binary is widely separated, the [*post-Newtonian*]{} (pN) expansion of GR works very well. In this case, the Newtonian potential $\phi \equiv GM/rc^2$ (where $M = m_1 + m_2$ is the total mass of a binary, and $r$ is the orbital separation) can be taken to be a small parameter. Indeed, we must [*always*]{} have $r \gtrsim (\mbox{a few}) \times GM/c^2$: The closest the members of the binary can come is set by their physical size, which has a lower bound given by the radius they would have if they were black holes. The pN expansion is what we get by iterating GR’s field equations from the Newtonian limit to higher order in $\phi$. We review the basic principles of the pN expansion and summarize important results in Sec. [\[sec:pn\]]{}.
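To get a feel for the size of this expansion parameter, the following sketch (in Python; the binary mass and separations are illustrative choices) evaluates $\phi$ for a double neutron star system, along with the characteristic orbital speed, which scales as $v/c \sim \sqrt{\phi}$:

```python
# Rough size of the pN parameter phi = GM/(r c^2) for an illustrative
# 1.4 + 1.4 Msun binary at a few separations.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs
M = 2.8 * Msun
GM_c2 = G*M/c**2            # gravitational length scale, ~4 km here
for r_km in (1e5, 1e3, 1e2):
    r = r_km * 1e5          # km -> cm
    phi = GM_c2 / r
    print(f"r = {r_km:7.0f} km  phi = {phi:.2e}  v/c ~ {phi**0.5:.2e}")
```

Even at the last moments before contact ($r \sim 100$ km), $\phi$ is only a few percent, which is why the pN expansion describes most of the inspiral so well.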
Some binaries, such as the extreme mass ratio captures described in Sec. [\[sec:astroview\]]{}, will have $m_1 \ll m_2$. For these cases, the reduced mass ratio $\eta \equiv \mu/M = m_1 m_2/(m_1 +
m_2)^2$ can define a perturbative expansion. In this limit, one can treat the binary’s spacetime as an exact background solution with mass $M$ (e.g., that of a black hole) perturbed by a smaller mass $\mu$. By expanding GR’s field equations around the exact background, one can typically derive tractable equations describing the perturbation and its evolution; those perturbations encode the dynamical evolution of the binary and its radiation. We discuss perturbative approaches to binary modeling in Sec. [\[sec:pert\]]{}.
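A quick numerical illustration (the mass pairings are arbitrary examples) shows how small $\eta$ becomes in the extreme mass ratio limit:

```python
# eta = mu/M = m1 m2 / (m1 + m2)^2 ranges from 1/4 (equal masses)
# down to ~m1/m2 for extreme mass ratios.  Masses are sample values.
def eta(m1, m2):
    return m1*m2 / (m1 + m2)**2

print(eta(1.4, 1.4))      # equal-mass NS-NS: 0.25
print(eta(10.0, 1.0e6))   # EMRI: ~1e-5, a good perturbative parameter
```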
For some binaries, [*no*]{} approximation scheme is useful. Consider, for example, the last moments of two black holes coming together and fusing into a single body. In these moments, the spacetime can be highly dynamical and asymmetric; no obvious small parameter organizes our thinking about the spacetime. Instead, one must simply solve Einstein’s field equations as exactly as possible using numerical methods. The essential question this field of [*numerical relativity*]{} asks is how one can take a “slice” of spacetime (that is, a single 3-dimensional moment of time) and use the field equations to understand how that spacetime evolves into the future. This requires us to explicitly split “spacetime” into “space” and “time.” Progress in numerical relativity has exploded in recent years. Since 2005, practitioners have moved from being able to just barely model a single binary orbit for a highly symmetric system to modeling multiple orbits and the final coalescence for nearly arbitrary binary masses and spins. We summarize the major principles of this field and review the explosion of recent activity in Sec..
Roughly speaking, for comparable mass binaries, pN methods describe the inspiral, and numerical relativity describes the merger. The line dividing these regimes is fuzzy. A technique called the [*effective one-body*]{} (EOB) approximation blurs it even further by making it possible to extend pN techniques beyond their naive range of validity [[@damour_eob08]]{}, at least when both members of the binary are black holes. \[When at least one of the members is a neutron star, at some point the nature of the neutron star fluid will have an impact. Detailed numerical modeling will then surely be critical; see [@su06], [@etienne08], [@bgr08], and [@skyt09] for examples of recent progress.\] Detailed tests show that using EOB methods greatly extends the domain for which analytical waveform models can accurately model numerical relativity waveforms (@betal07, @dnhhb08). A brief discussion of these techniques is included in Secs. [\[sec:pn\]]{} and [\[sec:numrel\]]{}.
Finally, it’s worth noting that the ringdown waves that come from the last dynamics of a newly-born black hole can also be modeled using perturbation theory. The spacetime is accurately modeled as a black hole plus a small deviation. Perturbation theory teaches us that black holes “ring” in a series of modes, with frequencies and damping times that are controlled by the mass and the spin of the black hole [[@leaver85]]{}. Any deviation from an exact black hole solution is carried away by such modes, enforcing the black hole no-hair theorems (@price72a [@price72b]). We will not say much about the ringdown in this review, except to note that the last waves which come from numerical relativity simulations have been shown to agree excellently with these perturbative calculations.
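As a rough numerical sketch, the analytic fitting formulas of Echeverria (1989) for the fundamental $l = m = 2$ mode give the ringdown frequency and quality factor as functions of the hole’s mass $M$ and dimensionless spin $a$; the example mass below is an arbitrary choice:

```python
# Approximate fit (Echeverria 1989) for the fundamental l = m = 2
# quasi-normal mode of a Kerr black hole with spin a = cJ/GM^2.
import math
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs

def ringdown(M_solar, a):
    M = M_solar * Msun
    f = (1 - 0.63*(1 - a)**0.3) * c**3/(2*math.pi*G*M)   # Hz
    Q = 2.0 * (1 - a)**(-0.45)                           # quality factor
    return f, Q

f, Q = ringdown(10.0, 0.0)   # non-spinning 10 Msun example
print(f, Q)  # roughly 1.2 kHz with Q ~ 2: only a few cycles of ringing
```

The low quality factor is why the ringdown is so brief: the distorted hole sheds its “hair” in just a few oscillations.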
Notation and conventions {#sec:notation}
------------------------
The underlying theory of GWs is general relativity (GR); we review its most crucial concepts in Sec. [\[sec:gr\]]{}. Because multiple conventions are often used in the GR literature, we first describe our notation and conventions.
Throughout this review, Greek indices denote [*spacetime*]{} components of tensors and vectors. These indices run over $(0,1,2,3)$, with $0$ generally denoting time $t$, and $(1,2,3)$ denoting spatial directions. Spacetime vectors are sometimes written with an overarrow: $$\vec A = \{A^\mu\} \doteq (A^0, A^1, A^2, A^3)\;.
\label{eq:vector_notation}$$ Equation (\[eq:vector\_notation\]) should be read as “The vector $\vec A$ has components $A^\mu$ whose values in a specified coordinate system are $A^0$, $A^1$, $A^2$, and $A^3$.” Lowercase Latin indices denote [*spatial*]{} components. Spatial vectors are written boldface: $${\bf a} = \{a^i\} \doteq (a^1, a^2, a^3)\;.
\label{eq:spatial_vector_notation}$$ We use the Einstein summation convention throughout, meaning that repeated adjacent indices in superscript and subscript positions are to be summed: $$A^\mu B_\mu \equiv \sum_{\mu = 0}^3 A^\mu B_\mu\;.
\label{eq:einstein_summation}$$ Indices are raised and lowered using the metric of spacetime (discussed in more detail in Sec. [\[sec:gr\]]{}) as a raising or lowering operator: $$A^\mu B_\mu = g_{\mu\nu} A^\mu B^\nu = A_\nu B^\nu = g^{\mu\nu} A_\mu
B_\nu\;.
\label{eq:raiselower}$$
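A few lines of Python (with arbitrary sample components) make the index bookkeeping of Eqs. (\[eq:einstein\_summation\]) and (\[eq:raiselower\]) concrete:

```python
# Lowering an index and contracting with the flat metric
# eta = diag(-1, 1, 1, 1); the vector components are sample values.
import numpy as np
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

A = np.array([2.0, 1.0, 0.0, 0.0])   # components A^mu
B = np.array([1.0, 3.0, 0.0, 0.0])   # components B^nu

A_lower = eta @ A                    # A_mu = eta_{mu nu} A^nu
inner = A_lower @ B                  # A_mu B^mu
print(inner)                         # -2*1 + 1*3 = 1.0
```

The same contraction can be written in one step as `np.einsum('mn,m,n->', eta, A, B)`, which mirrors the summation convention directly.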
When we discuss the linearized limit of GR (particularly in Sec.), it is useful to work in coordinates such that the spacetime metric can be written as that of special relativity plus a small perturbation: $$g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}\;,$$ where $\eta_{\mu\nu} = {\rm diag}(-1,1,1,1)$. This means that the spatial part of the background metric is the Kronecker delta $\delta_{ij}$. For certain calculations in linearized theory, it is useful to abuse the Einstein summation convention and sum over repeated adjacent [*spatial*]{} indices regardless of position: $$\sum_{i = 1}^3 a_i b^i = a_i b^i = a_i b_i = a^i b^i\;.$$ This is allowable because using the Kronecker delta for the spatial part of the metric means $a^i = a_i$ to this order.
Throughout this review, we abbreviate the partial derivative $$\frac{\partial}{\partial x^\mu} \equiv \partial_\mu\;.$$ With this notation defined, we can write $\partial^\mu =
g^{\mu\nu}\partial_\nu$.
Finally, it is common in GR research to use units in which the gravitational constant $G$ and the speed of light $c$ are set to 1. This has the advantage that mass, space, and time have the same units, but can be confusing when applied to astrophysical systems. In deference to the astronomical audience of this review, we have put $G$s and $c$s back into the relativity formulas. An exception to the rule that $G = c = 1$ everywhere is [@shapteuk]{}, especially Chap. 15.
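For reference, the conversion factors for one solar mass are easy to compute: with $G = c = 1$, a mass $M$ corresponds to a length $GM/c^2$ and a time $GM/c^3$.

```python
# Geometric-unit conversions for one solar mass (cgs constants).
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
length_km = G*Msun/c**2 / 1e5   # GM/c^2 in km
time_s = G*Msun/c**3            # GM/c^3 in seconds
print(length_km)  # ~1.48 km
print(time_s)     # ~4.9e-6 s
```

These scales recur constantly below: e.g., horizon radii and wave periods are naturally quoted as multiples of $GM/c^2$ and $GM/c^3$.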
Synopsis of general relativity {#sec:gr}
==============================
GR describes gravity as geometry. The foundation of this is the [*metric*]{}, which provides a notion of spacetime distance. Suppose event A occurs at coordinate $x^\alpha$, and event B at $x^\alpha +
dx^\alpha$. The proper spacetime separation, $ds$, between A and B is given by $$ds^2 = g_{\alpha\beta} dx^\alpha dx^\beta\;.
\label{eq:metric}$$ The metric $g_{\alpha\beta}$ translates the information in coordinates, which can be arbitrary, into a “proper” quantity, which can be measured. In the limit of special relativity, $g_{\alpha\beta}$ becomes $\eta_{\alpha\beta}$ (defined in Sec.). The general spacetime metric is determined by the distribution of mass and energy; we describe how to compute it below. It will sometimes be useful to work with the inverse metric $g^{\alpha\beta}$, defined by $$g^{\alpha\beta}g_{\beta\gamma} = {\delta^\alpha}_\gamma\;.
\label{eq:metric_inverse}$$ The metric also takes inner products between vectors and tensors: $$\vec A\cdot \vec B = g_{\alpha\beta} A^\alpha B^\beta\;.$$ $\vec A$ is [*timelike*]{} if $\vec A\cdot\vec A < 0$, [*spacelike*]{} if $\vec A\cdot\vec A > 0$, and [*lightlike*]{} or [*null*]{} if $\vec
A\cdot\vec A = 0$.
Consider a [*worldline*]{} or spacetime trajectory $z^\mu(\tau)$, where $\tau$ is “proper time” (time as measured by an observer on that worldline). The vector $u^\mu \equiv dz^\mu/d\tau$ is the tangent to the worldline. If $u^\mu$ is timelike, it is the 4-velocity of an observer following the worldline, and is normalized $u^\mu u_\mu = -1$. Suppose the worldline extends from A to B. The total spacetime separation between these points is $$s = \int_{\rm A}^{\rm B} d\tau \sqrt{g_{\alpha\beta} u^\alpha u^\beta}\;.$$ We now extremize $s$: fix the endpoints, allow quantities under the integral to vary, but require the variation to be stationary (so that $\delta s = 0$). The $u^\alpha$ which extremizes $s$ is given by the [*geodesic equation*]{}: $$\frac{du^\alpha}{d\tau} + {\Gamma^\alpha}_{\beta\gamma} u^\beta
u^\gamma = 0\;.
\label{eq:geodesic}$$ We have introduced the “connection” ${\Gamma^\alpha}_{\beta\gamma}$; it is built from the metric by $${\Gamma^\alpha}_{\beta\gamma} = \frac{1}{2} g^{\alpha\mu}\left(
\partial_\gamma g_{\mu\beta} + \partial_\beta g_{\gamma\mu} -
\partial_\mu g_{\beta\gamma}\right)\;.
\label{eq:connection}$$ Geodesics are important for our discussion because [*freely falling bodies follow geodesics of spacetime in GR.*]{} Geodesics express the rule that “spacetime tells bodies how to move.”
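Equation (\[eq:connection\]) is straightforward to evaluate by machine. As an illustrative sketch, the following Python/sympy fragment builds the connection for the unit 2-sphere, $ds^2 = d\theta^2 + \sin^2\theta\,d\phi^2$ (a toy metric chosen only for this example):

```python
# Connection coefficients from Eq. (connection) for the unit 2-sphere.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()

def Gamma(a, b, c):
    """Gamma^a_{bc} = (1/2) g^{am} (d_c g_{mb} + d_b g_{cm} - d_m g_{bc})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, m] *
        (sp.diff(g[m, b], x[c]) + sp.diff(g[c, m], x[b])
         - sp.diff(g[b, c], x[m]))
        for m in range(2)))

print(Gamma(0, 1, 1))   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(Gamma(1, 0, 1))   # Gamma^phi_{theta phi} = cos(theta)/sin(theta)
```

Feeding these coefficients into Eq. (\[eq:geodesic\]) recovers the familiar result that great circles are the geodesics of the sphere.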
Timelike geodesics describe material bodies. [*Null*]{} geodesics, for which $u^\mu u_\mu = 0$, describe massless bodies or light rays. Our discussion above describes null geodesics, with one modification: We cannot parameterize a null worldline with $\tau$, as proper time is not meaningful for a “speed of light” trajectory. Instead, one uses an [*affine parameter*]{} $\lambda$ which uniformly “ticks” along that trajectory. A convenient choice is to set $u^\alpha \equiv
dx^\alpha/d\lambda$ to be the 4-momentum of our radiation or massless body. With this choice, our discussion describes null trajectories just as well as timelike ones.
The connection also defines the [*covariant derivative*]{} of a vector or tensor: $$\begin{aligned}
\nabla_\alpha A^\beta &=& \partial_\alpha A^\beta + A^\mu
{\Gamma^\beta}_{\alpha\mu}\;,
\nonumber\\
\nabla_\alpha A_\beta &=& \partial_\alpha A_\beta - A_\mu
{\Gamma^\mu}_{\alpha\beta}\;,
\nonumber\\
\nabla_\alpha {A^\beta}_\gamma &=& \partial_\alpha {A^\beta}_\gamma +
{A^\mu}_\gamma {\Gamma^\beta}_{\alpha\mu} -
{A^\beta}_\mu {\Gamma^\mu}_{\alpha\gamma}\;.
\label{eq:covar}\end{aligned}$$ The pattern continues as we add indices. This derivative follows by comparing vectors and tensors that are slightly separated by [*parallel transporting*]{} them together to make the comparison; the connection encodes the twists of our curved geometry. Using the covariant derivative, the geodesic equation can be written $$u^\beta \nabla_\beta u^\alpha = 0\;.
\label{eq:geodesic2}$$
In curved spacetime, nearby geodesics diverge. Because a geodesic describes a freely-falling body, the rate at which geodesics diverge describes [*tides*]{}. Let $\xi^\alpha$ be the displacement between geodesics. Then the rate of divergence is given by $$\frac{D^2\xi^\alpha}{d\tau^2} = {R^\alpha}_{\beta\gamma\delta}u^\beta
u^\gamma \xi^\delta\;.$$ We have introduced the [*Riemann curvature tensor*]{}, $${R^\alpha}_{\beta\gamma\delta} =
\partial_\gamma{\Gamma^\alpha}_{\beta\delta} -
\partial_\delta{\Gamma^\alpha}_{\beta\gamma} +
{\Gamma^\alpha}_{\mu\gamma}{\Gamma^\mu}_{\beta\delta} -
{\Gamma^\alpha}_{\mu\delta}{\Gamma^\mu}_{\beta\gamma}\;.
\label{eq:Riemann}$$ Some variants of Riemann are important. First, there is the Ricci curvature: $$R_{\alpha\beta} = {R^\mu}_{\alpha\mu\beta}\;.
\label{eq:ricci}$$ Ricci is the trace of Riemann. Taking a further trace gives us the Ricci scalar, $$R = {R^\mu}_{\mu}\;.$$ The Ricci tensor and Ricci scalar combine to produce the Einstein curvature: $$G_{\alpha\beta} = R_{\alpha\beta} - \frac{1}{2}g_{\alpha\beta}R\;.
\label{eq:einstein_curve}$$
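As a concrete check of Eqs. (\[eq:Riemann\]) and (\[eq:ricci\]), the following self-contained Python/sympy sketch assembles the curvature of a 2-sphere of radius $a$ (a toy metric chosen only for this example), for which the Ricci scalar is $2/a^2$:

```python
# Riemann, Ricci, and the Ricci scalar for a sphere of radius a,
# assembled directly from the definitions in the text.
import sympy as sp

th, ph, a = sp.symbols('theta phi a', positive=True)
x = [th, ph]
g = sp.Matrix([[a**2, 0], [0, a**2*sp.sin(th)**2]])
ginv = g.inv()
n = 2

def Gam(al, b, c):
    return sum(sp.Rational(1, 2)*ginv[al, m] *
               (sp.diff(g[m, b], x[c]) + sp.diff(g[c, m], x[b])
                - sp.diff(g[b, c], x[m])) for m in range(n))

def Riem(al, b, c, d):
    expr = sp.diff(Gam(al, b, d), x[c]) - sp.diff(Gam(al, b, c), x[d])
    expr += sum(Gam(al, m, c)*Gam(m, b, d) - Gam(al, m, d)*Gam(m, b, c)
                for m in range(n))
    return sp.simplify(expr)

Ricci = lambda b, d: sp.simplify(sum(Riem(m, b, m, d) for m in range(n)))
R_scalar = sp.simplify(sum(ginv[b, d]*Ricci(b, d)
                           for b in range(n) for d in range(n)))
print(R_scalar)   # 2/a**2
```

The positive, constant scalar curvature confirms the sphere's uniform curvature; the same machinery, applied to a vacuum metric, returns a vanishing Ricci tensor.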
The Riemann curvature satisfies the [*Bianchi identity*]{}, $$\nabla_\gamma R_{\alpha\beta\mu\nu} +
\nabla_\beta R_{\gamma\alpha\mu\nu} +
\nabla_\alpha R_{\beta\gamma\mu\nu} = 0\;.
\label{eq:bianchi}$$ By tracing over certain combinations of indices, the Bianchi identity implies $$\nabla^\alpha G_{\alpha\beta} = 0\;,
\label{eq:bianchi_contr}$$ a result sometimes called the “contracted” Bianchi identity.
So far, we have mostly described the mathematics of curved geometry. We must also introduce tools to describe matter and fields. The most important tool is the stress-energy tensor: $$T^{\mu\nu} \equiv \mbox{Flux of momentum $p^\mu$ in the $x^\nu$
direction.}
\label{eq:Tmunu_def}$$ An observer who uses coordinates $(t,x^i)$ to make local measurements interprets the components of this tensor as $$\begin{aligned}
T^{tt} &\equiv& \mbox{Local energy density}
\label{eq:energy_density}
\\
T^{ti} &\equiv& \mbox{Local energy flux (times $c$)}
\label{eq:energy_flux}
\\
T^{it} &\equiv& \mbox{Local momentum density (times $c$)}
\label{eq:momentum_density}
\\
T^{ij} &\equiv& \mbox{Local momentum flux (times $c^2$); $T^{ii}$
acts as pressure.}
\label{eq:momentum_flux}\end{aligned}$$ \[The factors of $c$ in Eqs. (\[eq:energy\_flux\]) – (\[eq:momentum\_flux\]) ensure that the components of $T^{\mu\nu}$ have the same dimension.\] Local conservation of energy and momentum is expressed by $$\nabla_\mu T^{\mu\nu} = 0\;.
\label{eq:local_en_cons}$$ In GR, we generally lose the notion of [*global*]{} energy conservation: We cannot integrate Eq. (\[eq:local\_en\_cons\]) over an extended region to “add up” the total energy and momentum. This is because $\nabla_\mu T^{\mu\nu}$ is a spacetime vector, and in curved spacetime one cannot unambiguously compare widely separated vectors.
Einstein’s hypothesis is that stress energy is the source of spacetime curvature. If $T^{\mu\nu}$ is our source, then the curvature must likewise be divergence free. The contracted Bianchi identity (\[eq:bianchi\_contr\]) shows us that the Einstein tensor is the curvature we need. This logic yields the [*Einstein field equation*]{}: $$G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}\;.
\label{eq:einstein}$$ The factor $8\pi G/c^4$ guarantees that this equation reproduces Newtonian gravity in an appropriate limit. Note its value: $$\frac{G}{c^4} = 8.26\times 10^{-50}\frac{{\rm cm}^{-2}}{{\rm erg/cm}^3}\;.$$ It takes an [*enormous*]{} amount of energy density to produce spacetime curvature (measured in inverse length squared). Note that the reciprocal of this quantity, times $c$, has the dimensions of power: $$\frac{c^5}{G} = 3.63 \times 10^{59}\,{\rm erg/sec}\;.$$ This is the scale for the power that is generated by a GW source.
Gravitational-wave basics {#sec:gwbasics}
=========================
We now give a brief description of how gravitational waves arise in GR. Our purpose is to introduce the main ideas of this field, and also provide some results against which the more complete calculations we discuss later can be compared.
Leading waveform {#sec:waveform}
----------------
Begin with “weak” gravity, so that spacetime is nearly that of special relativity, $$g_{\alpha\beta} = \eta_{\alpha\beta} + h_{\alpha\beta}\;.$$ Take the correction to flat spacetime to be small, so that we can linearize about $\eta_{\alpha\beta}$. Consider, for example, raising and lowering indices: $$h^{\alpha\beta} \equiv g^{\alpha\mu}g^{\beta\nu} h_{\mu\nu} =
\eta^{\alpha\mu}\eta^{\beta\nu} h_{\mu\nu} + {\cal O}(h^2)\;.$$ Because we only keep quantities to first order in $h$, we will consistently use the flat metric to raise and lower indices for quantities related to the geometry.
Applying this logic repeatedly, we build the linearized Einstein tensor: $$G_{\alpha\beta} = \frac{1}{2} \left(\partial_\alpha\partial^\mu
h_{\mu\beta} + \partial_\beta\partial^\mu h_{\mu\alpha} -
\partial_\alpha\partial_\beta h - \Box h_{\alpha\beta} +
\eta_{\alpha\beta}\Box h - \eta_{\alpha\beta}\partial^\mu\partial^\nu
h_{\mu\nu}\right)\;,
\label{eq:lin_einstein1}$$ where $h \equiv \eta^{\alpha\beta}h_{\alpha\beta}$ is the trace of $h_{\alpha\beta}$, and $\Box \equiv \eta^{\alpha\beta}\partial_\alpha
\partial_\beta$ is the flat spacetime wave operator.
Equation (\[eq:lin\_einstein1\]) is rather messy. We clean it up in two steps. The first is pure sleight of hand: We introduce the [*trace-reversed*]{} metric perturbation $${\bar h}_{\alpha\beta} \equiv h_{\alpha\beta} -
\frac{1}{2}\eta_{\alpha\beta} h\;.$$ With this definition, Eq. (\[eq:lin\_einstein1\]) becomes $$G_{\alpha\beta} = \frac{1}{2} \left(\partial_\alpha\partial^\mu \bar
h_{\mu\beta} + \partial_\beta\partial^\mu \bar h_{\mu\alpha} - \Box
\bar h_{\alpha\beta} - \eta_{\alpha\beta}\partial^\mu\partial^\nu
\bar h_{\mu\nu}\right)\;.
\label{eq:lin_einstein2}$$ Next, we take advantage of the [*gauge-freedom*]{} of linearized gravity. Recall that in electrodynamics, if one adjusts the potential by the gradient of a scalar, $A_\mu \to A_\mu - \partial_\mu \Lambda$, then the field tensor $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu
A_\mu$ is unchanged. In linearized GR, a similar operation follows by adjusting one’s coordinates: If one changes coordinates $x^\alpha \to
x^\alpha + \xi^\alpha$ (requiring $\partial_\mu\xi^\alpha \ll 1$), then $$h_{\mu\nu} \to h_{\mu\nu} - \partial_\mu\xi_\nu - \partial_\nu\xi_\mu\;.
\label{eq:gauge_lingrav}$$ One can easily show \[see, e.g., [@carroll], Sec. 7.1\] that changing gauge leaves the Riemann tensor (and thus all tensors derived from it) unchanged.
We take advantage of our gauge freedom to choose $\xi^\alpha$ so that $$\partial^\mu {\bar h}_{\mu\nu} = 0\;.$$ This is called “Lorenz gauge” in analogy with the electrodynamic Lorenz gauge condition $\partial^\mu A_\mu = 0$. This simplifies our Einstein tensor considerably, yielding $$G_{\alpha\beta} = -\frac{1}{2}\Box{\bar h}_{\alpha\beta}\;.
\label{eq:lin_einstein}$$ The Einstein equation for linearized gravity thus takes the simple form $$\Box{\bar h}_{\alpha\beta} = -\frac{16\pi G}{c^4} T_{\alpha\beta}\;.
\label{eq:lin_efe}$$
Any linear equation of this form can be solved using a radiative Green’s function \[e.g., [@jackson], Sec. 12.11\]. Doing so yields $${\bar h}_{\alpha\beta}({\bf x}, t) =
\frac{4G}{c^4}\int\frac{T_{\alpha\beta}({\bf x}', t - |{\bf x} -
{\bf x}'|/c)}{|{\bf x} - {\bf x}'|} d^3x'\;.
\label{eq:lin_soln}$$ In this equation, ${\bf x}$ is the “field point,” where $\bar
h_{\alpha\beta}$ is evaluated, and ${\bf x}'$ is a “source point,” the coordinate that we integrate over the source’s spatial extent. Notice that the solution at $t$ depends on what happens to the source at [*retarded time*]{} $t - |{\bf x} - {\bf x'}|/c$. Information must causally propagate from ${\bf x'}$ to ${\bf x}$.
Equation (\[eq:lin\_soln\]) is formally an exact solution to the linearized Einstein field equation. However, it has a serious problem: It gives the impression that [*every component*]{} of the metric perturbation is radiative. This is an unfortunate consequence of our gauge. Just as one can choose a gauge such that an isolated point charge has an oscillatory potential, the Lorenz gauge we have used makes [*all*]{} components of the metric appear radiative, even if they are static[^3].
Fortunately, it is not difficult to see that only a subset of the metric represents the truly radiative degrees of freedom in [*all*]{} gauges. We will only quote the result here; interested readers can find the full calculation in @fh05, Sec. 2.2: [*Given a solution $h_{\alpha\beta}$ to the linearized Einstein field equations, only the [**spatial**]{}, [**transverse**]{}, and [**traceless**]{} components $h^{\rm TT}_{ij}$ describe the spacetime’s gravitational radiation in a gauge-invariant manner.*]{} (The other components can be regarded as “longitudinal” degrees of freedom, much like the Coulomb potential of electrodynamics.) Traceless means $$\delta_{ij} h^{\rm TT}_{ij} = 0\;;
\label{eq:traceless}$$ “transverse” means $$\partial_i h^{\rm TT}_{ij} = 0\;.
\label{eq:transverse}$$ This condition tells us that $h^{\rm TT}_{ij}$ is orthogonal to the direction of the wave’s propagation. Expanding $h^{\rm TT}_{ij}$ in Fourier modes shows that Eq. (\[eq:transverse\]) requires $h^{\rm
TT}_{ij}$ to be orthogonal (in space) to each mode’s wave vector ${\bf
k}$.
Conditions (\[eq:traceless\]) and (\[eq:transverse\]) make it simple to construct $h^{\rm TT}_{ij}$ given some solution $h_{ij}$ to the linearized field equations. Let $n_i$ denote components of the unit vector along the wave’s direction of propagation. The tensor $$P_{ij} = \delta_{ij} - n_in_j$$ projects spatial components orthogonal to ${\bf n}$. It is then simple to verify that $$h^{\rm TT}_{ij} = h_{kl}\left(P_{ki}P_{lj} -
\frac{1}{2}P_{kl}P_{ij}\right)
\label{eq:projected_hTT}$$ represents the transverse and traceless metric perturbation.
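This projection is easy to check numerically. The following sketch (ours, in Python with NumPy; the name `tt_project` is illustrative, not from the text) applies Eq. (\[eq:projected\_hTT\]) to a random symmetric perturbation and verifies the transverse and traceless conditions:

```python
import numpy as np

def tt_project(h, n):
    """Apply Eq. (projected_hTT): h^TT_ij = h_kl (P_ki P_lj - (1/2) P_kl P_ij),
    with P_ij = delta_ij - n_i n_j built from the unit propagation vector n."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    P = np.eye(3) - np.outer(n, n)
    return P @ h @ P - 0.5 * P * np.trace(P @ h)

rng = np.random.default_rng(0)
h = rng.normal(size=(3, 3))
h = 0.5 * (h + h.T)                # a symmetric "metric perturbation"
n = np.array([1.0, 2.0, 2.0])      # arbitrary (un-normalized) direction
hTT = tt_project(h, n)

# Verify the defining conditions: transverse (n_i h^TT_ij = 0) and traceless.
assert np.allclose(hTT @ (n / np.linalg.norm(n)), 0.0)
assert abs(np.trace(hTT)) < 1e-12
print("transverse and traceless: OK")
```

Because $P_{ij}$ acts as the identity on the plane orthogonal to ${\bf n}$, the projection is also idempotent: applying it twice changes nothing.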
The simplest example application of Eq. (\[eq:projected\_hTT\]) can be built by going back to Eq. (\[eq:lin\_soln\]) and focusing on the spatial components: $${\bar h}_{ij} = \frac{4G}{c^4}\int\frac{T_{ij}({\bf x}', t - |{\bf x}
- {\bf x}'|/c)}{|{\bf x} - {\bf x}'|} d^3x'\;.$$ Consider a very distant source, putting $|{\bf x} - {\bf x}'| \simeq
R$: $${\bar h}_{ij} \simeq \frac{4}{R}\frac{G}{c^4}\int T_{ij}({\bf x}', t -
R/c) d^3x'\;.$$ To proceed, we invoke an identity. Using the fact that $\nabla^\mu
T_{\mu\nu} = 0$ goes to $\partial^\mu T_{\mu\nu} = 0$ in linearized theory and in our chosen coordinates, we have $$\partial^t T_{tt} + \partial^j T_{jt} = 0\;,\qquad
\partial^t T_{tj} + \partial^i T_{ij} = 0\;.$$ Combine these identities with the fact that $\partial^t =
-\partial_t$; use integration by parts to convert volume integrals to surface integrals; discard those integrals by taking the surface outside our sources. We then find $$\int T_{ij}({\bf x'}, t)\,d^3x' = \frac{1}{2}\frac{d^2}{dt^2}\int
x^{i'}x^{j'} T_{tt}({\bf x}', t)\,d^3x' \equiv \frac{1}{2}
\frac{d^2}{dt^2} I_{ij}(t)\;.$$ We have introduced here the [*quadrupole moment*]{} $I_{ij}$. This allows us to at last write the transverse-traceless waveform as $$h^{\rm TT}_{ij} =
\frac{2}{R}\frac{G}{c^4}\frac{d^2I_{kl}}{dt^2}\left(P_{ik}P_{jl} -
\frac{1}{2}P_{kl}P_{ij}\right)\;.
\label{eq:quadrupole1}$$ It is straightforward to show that the trace $I \equiv I_{ii}$ does not contribute to Eq. (\[eq:quadrupole1\]), so it is common to use the “reduced” quadrupole moment, $${\cal I}_{ij} = I_{ij} - \frac{1}{3}\delta_{ij}I\;.$$ The waveform then takes the form in which it is usually presented, $$h^{\rm TT}_{ij} =
\frac{2}{R}\frac{G}{c^4}\frac{d^2{\cal I}_{kl}}{dt^2}\left(P_{ik}P_{jl} -
\frac{1}{2}P_{kl}P_{ij}\right)\;,
\label{eq:quadrupole}$$ the [*quadrupole formula*]{} for GW emission.
A more accurate approximation than $|{\bf x} - {\bf x}'| \simeq R$ is $$|{\bf x} - {\bf x}'| \simeq R - {\bf n}\cdot{\bf x}'\;,$$ where ${\bf n}$ is the unit vector pointing from ${\bf x}'$ to ${\bf
x}$. Revisiting the calculation, we find that Eq.(\[eq:quadrupole\]) is the first term in a [*multipolar expansion*]{} for the radiation. Detailed formulae and notation can be found in [@thorne80]{}, Sec. IVA. Schematically, the resulting waveform can be written $$h^{\rm TT} = \frac{1}{R}\frac{G}{c^2}\sum_{l = 2}^\infty \left\{
\frac{{\cal A}_l}{c^l}\frac{d^l{\cal I}_l}{dt^l} + \frac{{\cal
B}_l}{c^{l+1}}\frac{d^l{\cal S}_l}{dt^l}\right\}^{\rm STF}\;.
\label{eq:multipolar_form}$$ We have hidden various factorials of $l$ in the coefficients ${\cal
A}_l$ and ${\cal B}_l$; the superscript “STF” means to symmetrize the result on any free indices and remove the trace.
The symbol ${\cal I}_l$ stands for the $l$th [*mass moment*]{} of the source. @thorne80 precisely defines ${\cal I}_l$; for our purposes, it is enough to note that it represents an integral of $l$ powers of length over the source’s mass density $\rho$: $${\cal I}_l \sim \int \rho(x') (x')^l\,d^3x'\;.$$ The mass moment plays a role in gravitational radiation similar to that played by the electric charge moment in electrodynamics. The symbol ${\cal S}_l$ describes the $l$th [*mass-current moment*]{}; it represents an integral of $l$ powers of length over the source’s mass-current ${\bf J} = \rho{\bf v}$, where ${\bf v}$ describes a source’s internal motions: $${\cal S}_l \sim \int \rho(x') v(x') (x')^l\,d^3x'\;.$$ ${\cal S}_l$ plays a role similar to the magnetic moment.
The similarity of the multipolar expansion (\[eq:multipolar\_form\]) for GWs to that for electromagnetic radiation should not be a surprise; after all, our linearized field equation (\[eq:lin\_efe\]) is very similar to Maxwell’s equation for the potential $A_\mu$ (modulo an extra index and a factor of four). One should be cautious about taking this analogy too far. Though electromagnetic intuition applied to linearized theory works well to compute the GWs from a source, one must go beyond this linearized form to understand deeper aspects of this theory. In particular, one must go to higher order in perturbation theory to see how energy and angular momentum are carried from a radiating source. We now sketch how this works in GR.
Leading energy loss {#sec:energyloss}
-------------------
Electromagnetic radiation carries a flux of energy and momentum described by the Poynting vector, ${\bf S} = (c/4\pi){\bf E}\times{\bf
B}$. Likewise, electromagnetic fields generate stresses proportional to $(|{\bf E}|^2 + |{\bf B}|^2)/8\pi$. The lesson to take from this is that the energy content of radiation should be [*quadratic*]{} in wave amplitude. Properly describing the energy content of radiation requires second-order perturbation theory. In this section, we will discuss the key concepts and ideas in this analysis, which was first given by [[@isaacson68]]{}.
Begin by writing the spacetime $$g_{\alpha\beta} = {\hat g}_{\alpha\beta} + \epsilon h_{\alpha\beta}
+ \epsilon^2 j_{\alpha\beta}\;.$$ We have introduced a parameter $\epsilon$ whose formal value is 1; we use it to gather terms that are of the same order. Note that now we do not restrict the background to be flat. This introduces a conceptual difficulty: Measurements directly probe only the [*total*]{} spacetime $g_{\alpha\beta}$ (or a derived surrogate like curvature), so how do we distinguish background $\hat g_{\alpha\beta}$ from perturbation? The answer is to use [*separation of lengthscales*]{}. Our background will only vary on “long” lengthscales and timescales ${\cal L}, {\cal T}$; our perturbation varies on “short” lengthscales and timescales $\lambda, \tau$. We require ${\cal L} \gg \lambda$ and ${\cal T} \gg \tau$. Let $\langle f
\rangle$ denote a quantity $f$ averaged over long scales; this averaging is well-defined even for tensors up to errors ${\cal
O}(\lambda^2/{\cal L}^2)$. Then, to first order in $\epsilon$, $$\begin{aligned}
\hat g_{\alpha\beta} &=& \langle g_{\alpha\beta} \rangle
\\
h_{\alpha\beta} &=& g_{\alpha\beta} - \hat g_{\alpha\beta}\;.\end{aligned}$$ The second-order contribution will be of order $h^2$, and (as we’ll see below) will have contributions on both long and short scales.
Begin by expanding the Einstein tensor in $\epsilon$. The result can be written $$G_{\mu\nu}(g_{\alpha\beta}) = G^0_{\mu\nu}({\hat g}_{\alpha\beta}) +
\epsilon G^1_{\mu\nu}(h_{\alpha\beta}) + \epsilon^2
[G^2_{\mu\nu}(h_{\alpha\beta}) + G^1_{\mu\nu}(j_{\alpha\beta})]\;.
\label{eq:einstein_expanded}$$ For simplicity, take the spacetime to be vacuum — there are no non-gravitational sources of stress and energy in the problem. We then require Einstein’s equation, $G_{\mu\nu} = 0$, to hold at each order. The zeroth order result, $$G^0_{\mu\nu}({\hat g}_{\alpha\beta}) = 0\;,$$ is just a statement that we assume our background to be a vacuum solution.
Expanding $G^1_{\mu\nu}(h_{\alpha\beta}) = 0$, we find $$-\frac{1}{2}{\hat\Box}{\bar h}_{\alpha\beta} - {\hat
R}_{\alpha\mu\beta\nu}{\bar h}^{\mu\nu} = 0\;,
\label{eq:waveeqn_curvedbackground}$$ where $\hat\Box \equiv {\hat g}^{\mu\nu} {\hat\nabla}_\mu
{\hat\nabla}_\nu$ is the wave operator for ${\hat g}_{\mu\nu}$ (with $\hat\nabla_\mu$ the covariant derivative on $\hat g_{\mu\nu}$), ${\hat R}_{\alpha\mu\beta\nu}$ is the Riemann curvature built from $\hat g_{\mu\nu}$, and ${\bar h}_{\mu\nu} = h_{\mu\nu} - (1/2){\hat
g}_{\mu\nu} h$. Equation (\[eq:waveeqn\_curvedbackground\]) is just the wave equation for radiation propagating on a curved background. The coupling between $h$ and the background Riemann tensor describes a correction to the “usual” geometric optics limit.
Next, consider second order: $$G^1_{\mu\nu}(j_{\alpha\beta}) = -G^2_{\mu\nu}(h_{\alpha\beta})\;.$$ To make sense of this, invoke separation of scales. The second-order perturbation has contributions on both scales: $$j_{\alpha\beta} = \langle j_{\alpha\beta} \rangle + \delta
j_{\alpha\beta}\;,$$ where $\langle j_{\alpha\beta}\rangle$ varies on long scales and $\delta j_{\alpha\beta}$ is oscillatory and varies on short scales. The second-order metric can now be written $$g_{\alpha\beta} = g^{\cal L}_{\alpha\beta} + \epsilon h_{\alpha\beta}
+ \epsilon^2 \delta j_{\alpha\beta}\;,$$ where $$g^{\cal L}_{\alpha\beta} \equiv \hat g_{\alpha\beta} + \epsilon^2
\langle j_{\alpha\beta} \rangle\;,$$ is a “corrected” background which includes all pieces that vary on long scales.
Now return to the Einstein equation (\[eq:einstein\_expanded\]), but consider its [*averaged*]{} value. Thanks to linearity, we can take the averaging inside the operator $G^1$: $$\langle G^1_{\mu\nu}(h_{\alpha\beta})\rangle = G^1_{\mu\nu}(\langle
h_{\alpha\beta}\rangle) = 0\;.$$ We have used here $\langle h_{\alpha\beta} \rangle = 0$. Putting all this together, we find $$G^0_{\mu\nu}({\hat g}_{\alpha\beta}) + \epsilon^2 [\langle
G^2_{\mu\nu}(h_{\alpha\beta})\rangle + G^1_{\mu\nu}(\langle
j_{\alpha\beta}\rangle )] = 0\;.$$ Let us rewrite this as $$G_{\mu\nu}({\hat g}_{\alpha\beta} + \epsilon^2 \langle j_{\alpha\beta}
\rangle) = -\epsilon^2 \langle
G^2_{\mu\nu}(h_{\alpha\beta})\rangle\;,$$ or, putting $\epsilon = 1$, $$G_{\mu\nu}(g^{\cal L}_{\alpha\beta}) = -\langle
G^2_{\mu\nu}(h_{\alpha\beta})\rangle\;.
\label{eq:2ndorder_averaged_einstein}$$ Equation (\[eq:2ndorder\_averaged\_einstein\]) says that the second-order averaged Einstein tensor acts as a source for the long lengthscale background spacetime. This motivates the definition $$T_{\mu\nu}^{\rm GW} = -\frac{c^4}{8\pi G}
\left\langle G^2_{\mu\nu}(h_{\alpha\beta})\right\rangle\;.$$ Choosing a gauge so that $\hat\nabla^\mu\bar h_{\mu\nu} = 0$, $T_{\mu\nu}^{\rm GW}$ takes a very simple form: $$T_{\mu\nu}^{\rm GW} = \frac{c^4}{32\pi G} \langle \hat\nabla_\mu
h_{\alpha\beta} \hat\nabla_\nu h^{\alpha\beta} \rangle\;.
\label{eq:isaacson_tmunu}$$ This quantity is known as the Isaacson stress-energy tensor [@isaacson68].
The “Newtonian, quadrupole” waveform {#sec:newtonian_waves}
------------------------------------
A useful exercise is to consider a binary with Newtonian orbital dynamics that radiates GWs according to Eq. (\[eq:quadrupole\]). Further, allow the binary to evolve slowly as energy and angular momentum are carried off in accordance with Eq. (\[eq:isaacson\_tmunu\]).
Begin by considering such a binary with its members in circular orbit of separation $R$. This binary is characterized by orbital energy $$E^{\rm orb} = \frac{1}{2}m_1 v_1^2 + \frac{1}{2}m_2 v_2^2 -
\frac{Gm_1m_2}{R} = -\frac{G\mu M}{2R}\;,$$ (where $M = m_1 + m_2$ and $\mu = m_1 m_2/M$) and orbital frequency $$\Omega_{\rm orb} = \sqrt{\frac{GM}{R^3}}\;.$$
Next consider the energy that GWs carry from the binary to distant observers. When evaluated far from a source, Eq.(\[eq:isaacson\_tmunu\]) gives a simple result for energy flux: $$\frac{dE}{dAdt} = \frac{c^4}{32\pi G}\langle \partial_t h^{\rm TT}_{ij}
\partial_k h^{\rm TT}_{ij}\rangle n^k\;.$$ Plugging in Eq. (\[eq:quadrupole\]) and integrating over the sphere, we find $$\frac{dE}{dt}^{\rm GW} = \int dA\,\frac{dE}{dAdt} =
\frac{G}{5c^5}\left\langle \frac{d^3{\cal I}_{ij}}{dt^3} \frac{d^3{\cal
I}_{ij}}{dt^3}\right\rangle\;.
\label{eq:quadrupole_edot}$$ For the Newtonian binary, $${\cal I}_{ij} = \mu \left(x_i x_j -
\frac{1}{3}R^2\delta_{ij}\right)\;;$$ we choose coordinates for this binary such that the components of the separation vector are $x_1 = R\cos\Omega_{\rm orb} t$, $x_2 =
R\sin\Omega_{\rm orb} t$, $x_3 = 0$. Inserting into Eq.(\[eq:quadrupole\_edot\]), we find $$\frac{dE}{dt}^{\rm GW} = \frac{32}{5}\frac{G}{c^5} \mu^2 R^4
\Omega^6\;.
\label{eq:circbin_edot}$$ We now assert that the binary evolves quasi-statically, meaning that any radiation carried off by GWs is accounted for by the evolution of its orbital energy: $$\frac{dE}{dt}^{\rm orb} + \frac{dE}{dt}^{\rm GW} = 0\;.
\label{eq:energy_balance}$$ We evaluate $dE^{\rm orb}/dt$ by allowing the orbital radius to slowly change in time, $$\frac{dE}{dt}^{\rm orb} = \frac{dE}{dR}^{\rm orb}\frac{dR}{dt}\;.
\label{eq:eorb_dot}$$ Combining Eqs. (\[eq:circbin\_edot\]), (\[eq:energy\_balance\]), and (\[eq:eorb\_dot\]), we find $$R(t) = \left[\frac{256 G^3 \mu M^2(t_c - t)}{5c^5}\right]^{1/4}\;.
\label{eq:rorb_of_time}$$ This in turn tells us that the orbital frequency changes according to $$\Omega_{\rm orb}(t) = \left[\frac{5c^5}{256(G{\cal M})^{5/3}(t_c -
t)}\right]^{3/8}\;.
\label{eq:Newt_quad}$$ We have introduced the binary’s [*chirp mass*]{} ${\cal M} \equiv
\mu^{3/5}M^{2/5}$, so called because it sets the rate at which the binary sweeps upward in frequency, or “chirps.” We have also introduced the coalescence time $t_c$, which formally describes when the separation goes to zero (equivalently, when the frequency goes to infinity). By rearranging Eq. (\[eq:rorb\_of\_time\]), we find the time remaining for a circular binary of radius $R$ to coalesce due to GW emission: $$\begin{aligned}
T_{\rm remaining} &=& \frac{5}{256}\frac{c^5}{G^3}\frac{R^4}{\mu M^2}
\nonumber\\
&=& 3\times10^7\,{\rm years}\left(\frac{2.8\,M_\odot}{M}\right)^3
\left(\frac{R}{R_\odot}\right)^4
\nonumber\\
&=& 2\,{\rm months}\left(\frac{2\times10^6\,M_\odot}{M}\right)^3
\left(\frac{R}{{\rm AU}}\right)^4
\nonumber\\
&=& 3\times10^8\,{\rm years}\left(\frac{2\times10^6\,M_\odot}{M}\right)^3
\left(\frac{R}{0.001\,{\rm pc}}\right)^4\;.
\label{eq:coal_time}\end{aligned}$$ The fiducial numbers we show here are for equal mass binaries, so $\mu
= M/4$.
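These fiducial numbers are easy to reproduce by evaluating the first line of Eq. (\[eq:coal\_time\]) directly. The following sketch (ours, with rounded cgs constants; variable names are illustrative) does so for the three systems quoted above:

```python
# Evaluate T_remaining = (5/256) c^5 R^4 / (G^3 mu M^2) for equal masses
# (mu = M/4), using rounded cgs constants.
G, c = 6.674e-8, 2.998e10
Msun, Rsun = 1.989e33, 6.957e10
AU, pc, yr = 1.496e13, 3.086e18, 3.156e7

def t_coalesce(M, R):
    """Seconds until coalescence for an equal-mass circular binary."""
    mu = M / 4.0
    return (5.0 / 256.0) * c**5 * R**4 / (G**3 * mu * M**2)

T1 = t_coalesce(2.8 * Msun, Rsun) / yr        # two neutron stars at R_sun: ~3e7 yr
T2 = t_coalesce(2e6 * Msun, AU) / 86400.0     # 2e6 Msun binary at 1 AU: ~2 months
T3 = t_coalesce(2e6 * Msun, 1e-3 * pc) / yr   # same masses at 0.001 pc: ~3e8 yr
print(f"{T1:.2e} yr, {T2:.0f} days, {T3:.2e} yr")
```

The steep $R^4$ scaling is the main lesson: halving the separation shortens the remaining lifetime sixteenfold.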
Given the substantial eccentricity of many binaries, restriction to circular orbits may not seem particularly realistic. Including eccentricity means that our binary will have two evolving parameters (semi-major axis $a$ and eccentricity $e$) rather than just one (orbital radius $R$). To track their evolution, we must separately compute the radiated energy and angular momentum (@pm63 [@peters64]): $$\frac{dE}{dt}^{\rm GW} = \frac{G}{5c^5}\langle\frac{d^3{\cal
I}_{ij}}{dt^3} \frac{d^3{\cal I}_{ij}}{dt^3}\rangle =
\frac{32}{5}\frac{G}{c^5} \mu^2 a^4 \Omega^6 f(e)\;,
\label{eq:edot_eccentric}$$ $$\frac{dL_z}{dt}^{\rm GW} =
\frac{2G}{5c^5}\epsilon_{zjk}\langle\frac{d^2{\cal I}_{jm}}{dt^2}
\frac{d^3{\cal I}_{km}}{dt^3}\rangle =
\frac{32}{5}\frac{G}{c^5} \mu^2 a^4 \Omega^5 g(e)\;.
\label{eq:ldot_eccentric}$$ Because the binary is in the $x-y$ plane, the angular momentum is purely along the $z$ axis. The eccentricity corrections $f(e)$ and $g(e)$ are given by $$f(e) = \frac{1 + \frac{73}{24}e^2 + \frac{37}{96}e^4}{(1 - e^2)^{7/2}}\;,
\qquad
g(e) = \frac{1 + \frac{7}{8}e^2}{(1 - e^2)^2}\;.$$ Using standard definitions relating the semi-major axis and eccentricity to the orbit’s energy $E$ and angular momentum $L_z$, Eqs. (\[eq:edot\_eccentric\]) and (\[eq:ldot\_eccentric\]) imply [@peters64] $$\frac{da}{dt} = -\frac{64}{5}\frac{G^3}{c^5}\frac{\mu M^2}{a^3} f(e)\;,
\label{eq:adot}$$ $$\frac{de}{dt} = -\frac{304}{15}e\frac{G^3}{c^5}\frac{\mu M^2}{a^4}
\frac{1 + \frac{121}{304}e^2}{(1 - e^2)^{5/2}}\;.
\label{eq:edot}$$ It is then simple to compute the rate at which an eccentric binary’s orbital period changes due to GW emission; this result is compared with data in studies of GW-generating binary pulsars. Because eccentricity enhances a system’s energy and angular momentum loss, an eccentric binary merges sooner than a circular binary with the same semi-major axis: the timescales given in Eq. (\[eq:coal\_time\]) can be significant [*over*]{}estimates of a binary’s true coalescence time.
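The coupled system (\[eq:adot\])–(\[eq:edot\]) is straightforward to integrate numerically. The sketch below (ours; time is rescaled so that $G^3\mu M^2/c^5 = 1$, and a simple adaptive Euler step is used) shows the eccentricity decaying as the orbit shrinks:

```python
def peters_rhs(a, e, K=1.0):
    """Right-hand sides of the Peters equations, Eqs. (adot) and (edot),
    with K = G^3 mu M^2 / c^5 (here rescaled to K = 1)."""
    f = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2)**3.5
    da = -(64 / 5) * K * f / a**3
    de = -(304 / 15) * e * K * (1 + (121 / 304) * e**2) / (a**4 * (1 - e**2)**2.5)
    return da, de

# Euler-integrate an eccentric binary inward from a = 100 to a = 10
# (in units of K); the step size shrinks a by ~0.1% per step.
a, e = 100.0, 0.7
while a > 10.0:
    da, de = peters_rhs(a, e)
    dt = 1e-3 * a / abs(da)
    a, e = a + da * dt, e + de * dt

# GWs circularize: e has decayed substantially by the time a has shrunk 10x.
print(f"a = {a:.2f}, e = {e:.3f}")
```

Starting from $e = 0.7$, a tenfold decay in semi-major axis leaves the orbit nearly circular, consistent with the discussion that follows.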
Notice that a binary’s eccentricity decreases: GWs tend to circularize orbits. Many binaries are expected to be essentially circular by the time their waves enter the sensitive band of many GW detectors; the circular limit is thus quite useful. Exceptions are binaries which form, through capture processes, very close to merging. The extreme mass ratio inspirals discussed in Sec. [\[sec:pert\]]{} are a particularly interesting example of this.
We conclude this section by writing the gravitational waveform predicted for quadrupole emission from the Newtonian, circular binary. Evaluating Eq. (\[eq:quadrupole\]), we find that $h_{ij}$ has two polarizations. These are traditionally labeled “plus” and “cross” from the lines of force associated with their tidal stretch and squeeze. Taking our binary to be a distance $D$ from Earth, its waveform is written $$\begin{aligned}
h_+ &=& -\frac{2G{\cal M}}{c^2D}\left(\frac{\pi G{\cal
M}f}{c^3}\right)^{2/3}(1 + \cos^2\iota) \cos2\Phi_N(t)\;,
\nonumber\\
h_\times &=& -\frac{4G{\cal M}}{c^2D}\left(\frac{\pi G{\cal
M}f}{c^3}\right)^{2/3}\cos\iota \sin2\Phi_N(t)\;,
\label{eq:h_NQ}\end{aligned}$$ where the phase $$\Phi_N(t) = \int \Omega_{\rm orb}\,dt = \Phi_c - \left[\frac{c^3(t_c
- t)}{5G{\cal M}}\right]^{5/8}\;,
\label{eq:phi_NQ}$$ and where $f = (1/\pi)d\Phi_N/dt$ is the GW frequency. The system’s inclination $\iota$ is just the projection of its orbital angular momentum, ${\bf L}$, to the wave’s direction of propagation ${\bf n}$: $\cos\iota = \hat{\bf L}\cdot{\bf n}$ (where $\hat{\bf L} = {\bf
L}/|{\bf L}|$). We show fiducial values for the GW amplitudes when we briefly describe GW measurement in Sec. [\[sec:gwastro\]]{}.
In later sections, we will use Eq. (\[eq:h\_NQ\]) as a reference to calibrate how effects we have neglected so far change the waves. Note that $h_+$ and $h_\times$ depend on, and thus encode, the chirp mass, distance, the position on the sky (via the direction vector ${\bf
n}$), and the orientation of the binary’s orbital plane (via $\hat{\bf
L}$).
Nonlinear description of waves {#sec:nonlinear}
------------------------------
The nonlinear nature of GR is one of its most important defining characteristics. By linearizing, perhaps we are throwing out the baby with the bathwater, failing to characterize important aspects of gravitational radiation. Fortunately, we can derive a wave equation that fully encodes all nonlinear features of GR. This was apparently first derived by [[@penrose60]]{}; [[@ryan74]]{} gives a very nice discussion of this equation’s history and derivation. Begin by taking an additional derivative $\nabla^\gamma$ of the Bianchi identity (\[eq:bianchi\]), obtaining $$\Box_g R_{\alpha\beta\mu\nu} = -\nabla^\gamma \nabla_\beta
R_{\gamma\alpha\mu\nu} - \nabla^\gamma \nabla_\alpha
R_{\beta\gamma\mu\nu} \;,
\label{eq:nonlinwave1}$$ where $\Box_g \equiv g^{\gamma\delta}\nabla_\gamma\nabla_\delta$ is a covariant wave operator. Next, use the fact that the commutator of covariant derivatives generates a Riemann: $$\left[\nabla_\gamma,\nabla_\delta\right]p_\mu \equiv
\left(\nabla_\gamma\nabla_\delta -
\nabla_\delta\nabla_\gamma\right)p_\mu =
-{R^\sigma}_{\mu\gamma\delta}p_{\sigma}\;,$$ $$\left[\nabla_\gamma,\nabla_\delta\right]p_{\mu\nu} =
-{R^\sigma}_{\mu\gamma\delta}p_{\sigma\nu}
-{R^\sigma}_{\nu\gamma\delta}p_{\mu\sigma}
\;.$$ Extension to further indices is hopefully obvious. Manipulating (\[eq:nonlinwave1\]) yields a wave equation for Riemann in which $(\mbox{Riemann})^2$ acts as the source term. This is the Penrose wave equation; see [@ryan74] for more details.
If spacetime is vacuum \[$T_{\mu\nu} = 0$, so that via the Einstein equation (\[eq:einstein\]) $R_{\mu\nu} = 0$\], the Penrose wave equation simplifies quite a bit, yielding $$\Box_g R_{\alpha\beta\mu\nu} =
2R_{\mu\sigma\beta\tau} {{{R_\nu}^\sigma}_\alpha}^\tau -
2R_{\mu\sigma\alpha\tau} {{{R_\nu}^\sigma}_\beta}^\tau +
R_{\mu\nu\sigma\tau} {R^{\sigma\tau}}_{\alpha\beta}\;.
\label{eq:nonlinwave}$$ A variant of Eq. (\[eq:nonlinwave\]) underlies much of black hole perturbation theory, our next topic.
Perturbation theory {#sec:pert}
===================
Perturbation theory is the first technique we will discuss for modeling strong-field merging compact binaries. The basic concept is to assume that the spacetime is an exact solution perturbed by a small orbiting body, and expand to first order in the binary’s mass ratio. Some of the most interesting strong-field binaries have black holes, so we will focus on black hole perturbation theory. Perturbation theory analysis of binaries has two important applications. First, it can be a limiting case of the pN expansion: Perturbation theory for binaries with separations $r \gg GM/c^2$ should give the same result as pN theory in the limit $\mu \ll M$. We return to this point in Sec. [\[sec:pn\]]{}. Second, perturbation theory is an ideal tool for extreme mass ratio captures, binaries created by the scattering of a stellar mass ($m \sim 1 - 100\,M_\odot$) body onto a strong-field orbit of a massive ($M \sim 10^5 - 10^7\,M_\odot$) black hole.
Basic concepts and overview of formalism {#sec:perturb_formalism}
----------------------------------------
At its most basic, black hole perturbation theory is developed much like the weak gravity limit described in Sec. [\[sec:waveform\]]{}, replacing the flat spacetime metric $\eta_{\alpha\beta}$ with the spacetime of a black hole: $$g_{\mu\nu} = g_{\mu\nu}^{\rm BH} + h_{\mu\nu}\;.$$ For astrophysical scenarios, one uses the Schwarzschild (non-rotating black hole) or Kerr (rotating) solutions for $g_{\mu\nu}^{\rm BH}$. It is straightforward (though somewhat tedious) to then develop the Einstein tensor for this spacetime, keeping terms only to first order in the perturbation $h$.
This approach works very well when the background is non-rotating, $$(ds^2)^{\rm BH} = g_{\mu\nu}^{\rm BH} dx^\mu dx^\nu = -\left(1 -
\frac{2\hat M}{r}\right)dt^2 + \frac{dr^2}{\left(1 - 2\hat M/r\right)} +
r^2d\Omega^2\;,$$ where $d\Omega^2 = d\theta^2 + \sin^2\theta d\phi^2$ and $\hat M =
GM/c^2$. We consider this special case in detail; our discussion is adapted from [[@rezzolla03]]{}. Because the background is spherically symmetric, it is useful to decompose the perturbation into spherical harmonics. For example, under rotations in $\theta$ and $\phi$, $h_{00}$ should transform as a scalar. We thus put $$h_{00} = \sum_{lm} a_{lm}(t,r) Y_{lm}(\theta,\phi)\;.$$ The components $h_{0i}$ transform like components of a 3-vector under rotations, and can be expanded in vector harmonics; $h_{ij}$ can be expanded in tensor harmonics. One can decompose further with parity: Even harmonics acquire a factor $(-1)^l$ when $(\theta,\phi) \to (\pi
- \theta, \phi + \pi)$; odd harmonics acquire a factor $(-1)^{l+1}$.
By imposing these decompositions, choosing a particular gauge, and requiring that the spacetime satisfy the vacuum Einstein equation $G_{\mu\nu} = 0$, we find an equation that governs the perturbations. Somewhat remarkably, the $t$ and $r$ dependence for all components of $h_{\mu\nu}$ for given spherical harmonic indices $(l,m)$ can be constructed from a function $Q(t,r)$ governed by the simple equation $$\frac{\partial^2 Q}{\partial t^2} - \frac{\partial^2 Q}{\partial
r_*^2} - V(r)Q = 0\;,
\label{eq:schwarz_pert}$$ where $r_* = r + 2 \hat M \ln(r/2\hat M - 1)$. The potential $V(r)$ depends on whether we consider even or odd perturbations: $$V_{\rm even}(r) = \left(1 - \frac{2\hat M}{r}\right) \left[\frac{2q^2(q +
1)r^3 + 6q^2\hat M r^2 + 18 q\hat M^2 r + 18\hat M^3} {r^3\left(qr
+ 3\hat M\right)^2}\right]\;,
\label{eq:zerilli_pot}$$ where $q = (l - 1)(l + 2)/2$; and $$V_{\rm odd}(r) = \left(1 - \frac{2\hat M}{r}\right)
\left[\frac{l(l+1)}{r^2} - \frac{6\hat M}{r^3}\right]\;.
\label{eq:reggewheeler_pot}$$ For even parity, Eq. (\[eq:schwarz\_pert\]) is the [*Zerilli equation*]{} [[@zerilli70]]{}; for odd, it is the [*Regge-Wheeler equation*]{} [[@rw57]]{}. For further discussion, including how gauge is chosen and how to construct $h_{\mu\nu}$ from $Q$, see [@rezzolla03]{}. Finally, note that when the spacetime perturbation is due to a body orbiting the black hole, these equations acquire a source term. One can construct the full solution for the waves from an orbiting body by using the source-free equation to build a Green’s function, and then integrating over that source.
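The odd-parity potential (\[eq:reggewheeler\_pot\]) is easy to explore numerically. The sketch below (ours) evaluates it for $l = 2$ and locates the peak of the curvature barrier off which waves scatter, near $r \simeq 3.3\hat M$:

```python
import numpy as np

def V_odd(r, Mhat, l=2):
    """Regge-Wheeler potential, Eq. (reggewheeler_pot), with r and Mhat
    both in length units (Mhat = GM/c^2)."""
    return (1 - 2 * Mhat / r) * (l * (l + 1) / r**2 - 6 * Mhat / r**3)

Mhat = 1.0
r = np.linspace(2.001, 60.0, 20000)
V = V_odd(r, Mhat)

# The potential vanishes at the horizon (r = 2 Mhat) and as r -> infinity,
# with a single barrier peaked at r = (9 + sqrt(17))/4 ~ 3.28 Mhat for l = 2.
r_peak = r[np.argmax(V)]
print(f"l = 2 barrier peaks at r = {r_peak:.2f} Mhat")
```

This barrier is what filters radiation near the hole, and its shape sets the frequencies of the quasi-normal ringing seen in perturbed black holes.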
How does this procedure fare for rotating holes? The background spacetime, $$\begin{aligned}
(ds^2)^{\rm BH} &=& -\left(1 - \frac{2\hat Mr}{\rho^2}\right)dt^2 -
\frac{4 a\hat M r\sin^2\theta}{\rho^2}dt d\phi +
\frac{\rho^2}{\Delta}dr^2 + \rho^2d\theta^2
\nonumber\\
& & + \left(r^2 + a^2 + \frac{2\hat Mr
a^2\sin^2\theta}{\rho^2}\right)\sin^2\theta\,d\phi^2\;,
\label{eq:kerr_metric}\end{aligned}$$ where $$a = \frac{|\vec S|}{c M}\;,\qquad
\rho^2 = r^2 + a^2\cos^2\theta\;,\qquad
\Delta = r^2 - 2\hat M r + a^2\;,$$ is now markedly nonspherical. \[We have used “Boyer-Lindquist” coordinates, but the nonspherical nature is independent of coordinate choice. The spin parameter $a$ has the dimension of length (as does $\hat M = GM/c^2$), and must satisfy $a \le \hat M$ in order for Eq. (\[eq:kerr\_metric\]) to represent a black hole.\] So, the decomposition into spherical harmonics is not useful. One could in principle simply expand $G_{\mu\nu} = 0$ to first order in $h_{\mu\nu}$ and obtain a partial differential equation in $t$, $r$, and $\theta$. (The metric is axially symmetric, so we can easily separate the $\phi$ dependence.) This author is unaware of such a formulation[^4]. One issue is that the gauge used for the perturbation must be specified; this may be complicated in the general case. More important historically, the equations so developed do not appear to separate. As we’ll see in a moment, a different approach [*does*]{} yield separable equations, which were preferred for much of the history of this field.
Rather than expanding the metric of the black hole, [[@teuk73]]{} examined perturbations of its curvature: $$R_{\alpha\mu\beta\nu} =
R^{\rm BH}_{\alpha\mu\beta\nu} +
\delta R_{\alpha\mu\beta\nu}\;.$$ The curvature tensor is invariant to first-order gauge transformations, an attractive feature. This tensor also obeys the nonlinear wave equation (\[eq:nonlinwave\]). By expanding that equation to linear order in $\delta R_{\alpha\mu\beta\nu}$, Teukolsky showed that perturbations to Kerr black holes are governed by the equation $$\begin{aligned}
&&
\!\!\!\!\!\!\!
\left[\frac{(r^2 + a^2)^2 }{\Delta} - a^2\sin^2\theta\right]
\partial^2_{t}\Psi - 4\left[r + ia\cos\theta - \frac{\hat M(r^2 -
a^2)}{\Delta}\right]\partial_t\Psi
\nonumber\\
&&
\!\!\!\!\!\!\!
+\frac{4i \hat M a m r}{\Delta}\partial_t\Psi -
\Delta^{2}\partial_r\left(\Delta^{-1}\partial_r\Psi\right)
- \frac{1}{\sin\theta}\partial_\theta
\left(\sin\theta\partial_\theta\Psi\right)
\nonumber\\
&&
\!\!\!\!\!\!\!
- \left[\frac{a^2}{\Delta} - \frac{1}{\sin^2\theta}\right]m^2 \Psi +
4im \left[\frac{a (r - \hat M)}{\Delta} + \frac{i
\cos\theta}{\sin^2\theta} \right]\Psi - \left(4\cot^2\theta +
2\right) \Psi = {\cal T}\;. \nonumber\\
\label{eq:teukolsky}\end{aligned}$$ $\Psi$ is a complex quantity built from a combination of components of $\delta R_{\alpha\mu\beta\nu}$, and describes spacetime’s radiation; see [[@teuk73]]{} for details. (We have assumed $\Psi \propto
e^{im\phi}$.) Likewise, ${\cal T}$ describes a source function built from the stress-energy tensor describing a small body orbiting the black hole. Somewhat amazingly, Eq. (\[eq:teukolsky\]) separates: putting $$\Psi = \int d\omega \sum_{lm} R_{lm}(r)S_{lm}(\theta)e^{im\phi -
i\omega t}
\label{eq:teuk_decomp}$$ and applying a similar decomposition to the source ${\cal T}$, we find that $S_{lm}(\theta)$ is a “spin-weighted spheroidal harmonic” (a basis for tensor functions in a non-spherical background), and that $R_{lm}(r)$ is governed by a simple ordinary differential equation. $\Psi$ characterizes Kerr perturbations in much the same way that $Q$ \[cf. Eq. (\[eq:schwarz\_pert\])\] characterizes them for Schwarzschild. It’s worth noting that, although the perturbation equations are often solved numerically, analytic solutions are known (@mst96, @fiziev09), and can be used to dramatically improve one’s scheme for solving for black hole perturbations.
Whether one uses this separation or solves Eq. (\[eq:teukolsky\]) directly, solving for perturbations of black holes is now a well-understood enterprise. We now discuss how one uses these solutions to model compact binaries.
Binary evolution in perturbation theory {#sec:evolve_perturbation}
---------------------------------------
How do we describe the motion of a small body about a black hole? The most rigorous approach is to enforce $\nabla^\mu T_{\mu\nu} = 0$, where $T_{\mu\nu}$ describes the small body in the spacetime of the large black hole. If we neglect the small body’s perturbation to the spacetime, this exercise produces the geodesic equation $u^\mu
\nabla_\mu u^\nu = 0$, where $u^\mu$ is the small body’s 4-velocity. Geodesic black hole orbits have been studied extensively; see, for example, [[@mtw]]{}, Chapter 33. A key feature of these orbits is that they are characterized (up to initial conditions) by three conserved constants: energy $E$, axial angular momentum $L_z$, and “Carter’s constant” $Q$. If the black hole does not rotate, Carter’s constant is related to the orbit’s total angular momentum: $Q(a = 0) = {\bf L}\cdot{\bf L} - L_z^2$. When the black hole rotates rapidly, $Q$ is not so easy to interpret, but the idea that it is essentially the rest of the orbit’s angular momentum can be useful.
Now take into account perturbations from the small body. Enforcing $\nabla^\mu T_{\mu\nu} = 0$, we find that the small body follows a “forced” geodesic, $$u^\mu \hat\nabla_\mu u^\nu = f^\nu\;,
\label{eq:selfforce_eom}$$ where $\hat\nabla_\mu$ is the covariant derivative in the background black-hole spacetime. The novel feature of Eq.(\[eq:selfforce\_eom\]) is the [*self force*]{} $f^\nu$, a correction to the motion of order the small body’s spacetime perturbation. The self force is so named because it arises from the body’s interaction with its own spacetime correction.
Self forces have a long pedigree. @dirac38 showed that a self force arises from the backreaction of an electromagnetic charge on itself, and causes radiative damping. Computing the gravitational self force near a black hole is an active area of current research. It is useful to break the self force into a [*dissipative*]{} piece, $f^\nu_{\rm diss}$, which is asymmetric under time reversal, and a [*conservative*]{} piece, $f^\nu_{\rm cons}$, which is symmetric. These contributions have very different impact on the orbit. Dissipation causes the “conserved” quantities $(E, L_z, Q)$ to decay, driving inspiral of the small body. [[@qw99]]{} have shown that the rate at which $E$ and $L_z$ change due to $f^\nu_{\rm diss}$ is identical to what is found when one computes the fluxes of energy and angular momentum encoded by the Isaacson tensor (\[eq:isaacson\_tmunu\]).
The conservative self force, by contrast, does not cause orbit decay. “Conserved” constants are still conserved when we include this force; but, the orbits are different from background geodesics. This reflects the fact that, even neglecting dissipation, the small body’s motion is determined by the full spacetime, not just the background black hole. When conservative effects are taken into account, one finds that the orbital frequencies are shifted by an amount $$\delta\Omega_x \sim \Omega_x \times (\mu/M)$$ \[where $x \in (\phi,\theta,r)$\]. Because the GWs have spectral support at harmonics of the orbital frequencies, these small but non-negligible frequency shifts are directly encoded in the waves that the binary generates. Good discussion and a toy model can be found in [[@ppn05]]{}.
To date, relatively little work has been published on self forces and conservative effects for Kerr orbits. There has, however, been enormous progress for the case of orbits around non-rotating holes. [[@bs07]]{} have completed an analysis of the full self force for circular orbits about a Schwarzschild black hole; generalization to eccentric orbits is in progress (L. Barack, private communication). An independent approach developed by [[@det08]]{} has been found to agree with Barack and Sago extremely well; see [[@sbd08]]{} for detailed discussion of this comparison.
Gravitational waves from extreme mass ratio binaries {#sec:emri_waves}
----------------------------------------------------
In this section, we discuss the properties of GWs and GW sources as calculated using perturbation theory. As discussed in Sec. [\[sec:relview\]]{}, these waves most naturally describe extreme mass ratio capture sources. There is also an important overlap with the pN results discussed in Sec. [\[sec:pn\]]{}: By specializing to circular, equatorial orbits and considering the limit $r \gg GM/c^2$, results from perturbation theory agree with pN results for $\mu/M \ll 1$.
Our goal here is to highlight features of the generic Kerr inspiral waveform. As such, we will neglect the conservative self force, which is not yet understood for the Kerr case well enough to be applied to these waves. When conservative effects are neglected, the binary can be regarded as evolving through a sequence of geodesics, with the sequence determined by the rates at which GWs change the “constants” $E$, $L_z$, and $Q$. Modeling compact binaries in this limit takes three ingredients: First, a description of black hole orbits; second, an algorithm to compute GWs from the orbits, and to infer how the waves’ backreaction evolves us from orbit to orbit; and third, a method to integrate along the orbital sequence to build the full waveform. A description of this method is given in [[@hetal05]]{}; we summarize the main results of these three ingredients here.
### Black hole orbits. {#sec:bhorbits}
Motion in the vicinity of a black hole can be conveniently written in the Boyer-Lindquist coordinates of Eq. (\[eq:kerr\_metric\]) as $r(t)$, $\theta(t)$, and $\phi(t)$. Because $t$ corresponds to time far from the black hole, this gives a useful description of the motion as measured by distant observers. [*Bound*]{} black hole orbits are confined to a region near the hole. They have $r_{\rm min} \le r(t)
\le r_{\rm max}$ and $\theta_{\rm min} \le \theta(t) \le \pi -
\theta_{\rm min}$. Bound orbits thus occupy a torus in the 3-space near the hole’s event horizon; an example is shown in Fig. [\[fig:torus\]]{}, taken from [[@dh06]]{}. Selecting the orbital constants $E$, $L_z$, and $Q$ fully determines $r_{\rm min/max}$ and $\theta_{\rm min}$. It is useful for some discussions to reparameterize the radial motion, defining an eccentricity $e$ and a semi-latus rectum $p$ via $$r_{\rm min} = \frac{p}{1 + e}\;,\qquad
r_{\rm max} = \frac{p}{1 - e}\;.$$ For many bound black hole orbits, $r(t)$, $\theta(t)$, and $\phi(t)$ are periodic ([@schmidt02]{}; see also [@dh04]{}). (Exceptions are orbits which plunge into the black hole; we discuss these below.) Near the hole, the time to cover the full range of $r$ becomes distinct from the time to cover the $\theta$ range, which becomes distinct from the time to cover $2\pi$ radians of azimuth. One can say that spacetime curvature splits the Keplerian orbital frequency $\Omega$ into $\Omega_r$, $\Omega_\theta$, and $\Omega_\phi$. Figure [\[fig:freqs\]]{} shows these three frequencies, plotted as functions of semi-major axis $A$ for fixed values of $e$ and $\theta_{\rm min}$. Notice that all three approach $\Omega \propto A^{-3/2}$ for large $A$.
![The geometry of a generic Kerr black hole orbit \[taken from [[@dh06]]{}\]. This orbit is about a black hole with spin parameter $a = 0.998M$ (recall $a \le M$, so this represents a nearly maximally spinning black hole). The range of its radial motion is determined by $p = 7GM/c^2$ ($G$ and $c$ are set to 1 in the figure) and $e = 1/3$; $\theta$ ranges from $60^\circ$ to $120^\circ$. The left panel shows the region of coordinate space that this torus occupies. The right panel illustrates how a generic orbit ergodically fills this torus.[]{data-label="fig:torus"}](torus.eps){width="5.3in"}
![Orbital frequencies for generic Kerr black hole orbits. We vary the orbits’ semilatus rectum $p$, but fix eccentricity $e = 0.5$ and inclination parameter $\theta_{\rm min} = 75^\circ$. Our results are plotted as a function of semimajor axis $A = p/\sqrt{1 - e^2}$. All three frequencies asymptote to the Keplerian value $\Omega =
\sqrt{GM/A^3}$ in the weak field, but differ significantly from each other in the strong field.[]{data-label="fig:freqs"}](freqs.eps){width="5in"}
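For generic Kerr orbits these frequencies must be computed numerically (e.g., following [@schmidt02]{}), but the frequency splitting is already visible in the simplest special case. The sketch below is our own illustration, not a calculation from the text: it uses the standard closed forms for circular Schwarzschild orbits, $\Omega_\phi = \sqrt{GM/r^3}$ (exact in Schwarzschild coordinates) and the radial epicyclic frequency $\Omega_r = \Omega_\phi\sqrt{1 - 6GM/rc^2}$, which approach a common Keplerian value far from the hole and split strongly near it.

```python
import math

G = c = 1.0   # geometric units
M = 1.0

def freqs_schw_circular(r):
    """Orbital and radial frequencies for a circular orbit of
    Schwarzschild radius r about a nonrotating (a = 0) hole."""
    omega_phi = math.sqrt(G * M / r**3)   # exact; same form as Kepler
    omega_r = omega_phi * math.sqrt(1.0 - 6.0 * G * M / (r * c**2))
    return omega_phi, omega_r

# Weak field: the two frequencies nearly coincide (Keplerian limit).
op_weak, or_weak = freqs_schw_circular(1.0e4)
# Strong field: Omega_r -> 0 at the last stable orbit, r = 6 GM/c^2.
op_isco, or_isco = freqs_schw_circular(6.0)
```

The mismatch between $\Omega_r$ and $\Omega_\phi$ is what drives periastron precession; its vanishing at $r = 6GM/c^2$ marks the last stable circular orbit.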
### Gravitational radiation from orbits. {#sec:orbitwaves}
Because their orbits are periodic, GWs from a body orbiting a black hole will have support at harmonics of the orbital frequencies. One can write the two polarizations $$h_+ + i h_\times = \sum H_{mkn}
e^{i\omega_{mkn}t}\;,\qquad\mbox{where}
\label{eq:fd_waveform}$$ $$\omega_{mkn} = m\Omega_\phi + k\Omega_\theta + n\Omega_r\;.
\label{eq:harmonics}$$ The amplitude $H_{mkn}$ can be found by solving the Teukolsky equation (\[eq:teukolsky\]) using the decomposition (\[eq:teuk\_decomp\]); details for the general case can be found in [@dh06]{}. An example of a wave from a geodesic orbit is shown in Fig. [\[fig:genwave\]]{}. Note the different timescales apparent in this wave; they are due to the three distinct frequencies of the underlying geodesic orbit (and their harmonics).
![Waveform arising from a generic geodesic black hole orbit, neglecting orbital evolution due to backreaction. This orbit has $p =
8GM/c^2$ and $e = 0.5$, corresponding to motion in the range $16GM/3c^2 \le r(t) \le 16GM/c^2$; it also has $\theta_{\rm min} =
60^\circ$. The large black hole has a spin parameter $a = 0.9M$. Note that the wave has structure at several timescales, corresponding to the three frequencies $\Omega_r$, $\Omega_\theta$, and $\Omega_\phi$ (cf. Fig. [\[fig:freqs\]]{}).[]{data-label="fig:genwave"}](genwave.eps){width="5in"}
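The structure of the expansion (\[eq:fd\_waveform\]) can be illustrated with a toy signal. In the sketch below, the frequencies and the handful of amplitudes $H_{mkn}$ are invented purely for illustration (a real calculation obtains them from the Teukolsky equation); the point is simply that the resulting $h_+$ has spectral support only at the discrete frequencies $\omega_{mkn} = m\Omega_\phi + k\Omega_\theta + n\Omega_r$.

```python
import numpy as np

# Hypothetical orbital frequencies (geometric units); the amplitudes
# H_mkn below are made up purely for illustration.
Omega_phi, Omega_theta, Omega_r = 0.030, 0.027, 0.019

modes = {          # (m, k, n) -> complex amplitude H_mkn
    (2, 0, 0): 1.0 + 0.0j,
    (2, 0, 1): 0.4 + 0.1j,
    (2, 1, 0): 0.3 - 0.2j,
}

def omega(m, k, n):
    return m * Omega_phi + k * Omega_theta + n * Omega_r

t = np.arange(0, 40000.0, 1.0)
h = sum(H * np.exp(1j * omega(m, k, n) * t) for (m, k, n), H in modes.items())
h_plus, h_cross = h.real, h.imag

# The spectrum of h_+ has peaks only at the discrete omega_mkn; the
# strongest peak sits at the frequency of the largest amplitude.
spec = np.abs(np.fft.rfft(h_plus))
f = 2.0 * np.pi * np.fft.rfftfreq(len(t), d=1.0)   # angular frequencies
peak = f[np.argmax(spec)]
```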
The expansion (\[eq:fd\_waveform\]) does not work well for orbits that plunge into the black hole; those orbits are not periodic, and cannot be expanded using a set of real frequencies. A better way to calculate those waves is to solve the Teukolsky equation (\[eq:teukolsky\]) [*without*]{} introducing the decomposition (\[eq:teuk\_decomp\]). Results for waves from plunging orbits in the language of perturbation theory were first given by [[@ndt07]]{}; [[@sundararajan08]]{} has recently extended the cases that we can model to full generality.
As mentioned in Sec. [\[sec:evolve\_perturbation\]]{}, it is fairly simple to compute the flux of energy $\dot E$ and angular momentum $\dot L_z$ from the Isaacson tensor, Eq. (\[eq:isaacson\_tmunu\]), once the waves are known. Recent work [[@ganz07]]{} has shown that a similar result describes $\dot Q$. Once $\dot E$, $\dot L_z$, and $\dot Q$ are known, it is straightforward to evolve the orbital elements $r_{\rm min/max}$ and $\theta_{\rm min}$, specifying the sequence of orbits through which gravitational radiation drives the system. Figure [\[fig:circ\_evol\]]{} gives an example of how orbits evolve when their eccentricity is zero.
![The evolution of circular orbits ($e = 0$) about a black hole with $a = 0.8M$; taken from [[@hughes00]]{}. The inclination angle $\iota$ is given by $\iota \simeq \pi/2 - \theta_{\rm min}$; the equality is exact for $\theta_{\rm min} = 0$ and for $a = 0$. In the general case, this relation misestimates $\iota$ by $\lesssim 3\%$; see [[@hughes00]]{} for detailed discussion. The dotted line is this hole’s “last stable orbit”; orbits to the left are unstable to small perturbations, those to the right are stable. Each arrow shows how radiation tends to evolve an orbit; length indicates how strongly it is driven. These orbits are driven to smaller radius and to (very slightly) larger inclination. The extremely long arrow at $\iota
\simeq 120^\circ$, $r = 7 GM/c^2$ lies very close to the last stable orbit. As such, a small push from radiation has a large impact.[]{data-label="fig:circ_evol"}](circ_evol.eps){width="5in"}
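A much-simplified stand-in for this flux-balance evolution is the extreme weak-field limit: a circular orbit with Newtonian energy $E = -GM\mu/2r$ losing energy at the leading quadrupole rate. Equating $dE/dt$ along the orbit to the GW flux gives $\dot r = -(64/5)\,G^3 m_1 m_2 M/(c^5 r^3)$, the classic Peters (1964) result. This is not the relativistic Teukolsky-based scheme discussed above, but it shows the same logic: the flux fixes the rate of change of an orbital element, and integrating that rate yields the sequence of orbits.

```python
# Geometric units G = c = 1; masses and starting radius are illustrative.
m1 = m2 = 0.5
M = m1 + m2
beta_p = (64.0 / 5.0) * m1 * m2 * M       # dr/dt = -beta_p / r^3

def r_analytic(t, r0):
    """Closed-form decay of the radius: r^4 = r0^4 - 4 beta_p t."""
    return (r0**4 - 4.0 * beta_p * t) ** 0.25

r, dt, steps = 100.0, 10.0, 100000
for _ in range(steps):
    rmid = r - 0.5 * dt * beta_p / r**3   # midpoint (RK2) step
    r -= dt * beta_p / rmid**3

# fractional disagreement with the closed-form solution
err = abs(r - r_analytic(dt * steps, 100.0)) / r
```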
### Evolving through an orbital sequence. {#sec:evolve}
It is not too difficult to compute the sequence of orbits \[parameterized as $E(t)$, $L_z(t)$, $Q(t)$ or $r_{\rm min/max}(t)$, $\theta_{\rm min}(t)$\] that an inspiraling body passes through before finally plunging into its companion black hole. Once these are known, it is straightforward to build the worldline that a small body follows as it spirals into the black hole. From the worldline, we can build a source function ${\cal T}(t)$ for Eq. (\[eq:teukolsky\]) and compute the evolving inspiral waves. Figure [\[fig:td\_teuk\]]{} gives an example of a wave arising from the inspiral of a small body into a black hole; see [[@pkhd08]]{} for details of how these waves are computed.
![Plus polarization of wave generated by a small body spiraling into a massive black hole; this figure is adapted from [[@pkhd08]]{}. The amplitude is scaled to the source’s distance $D$ and the small body’s mass $\mu$; time is measured in units of $GM/c^3$. For this calculation, the binary’s initial parameters are $p =
10GM/c^2$, $e = 0.5$, and $\theta_{\rm min} \simeq 61^\circ$; the binary’s mass ratio is fixed to $\mu/M = 0.016$, and the larger black hole’s spin parameter is $a = 0.5M$. The insets show spans of length $\Delta t \sim 1000 GM/c^3$ early and late in the inspiral. Note the substantial evolution of the wave’s frequencies as the orbit shrinks.[]{data-label="fig:td_teuk"}](td_teuk.eps){width="5.3in"}
Post-Newtonian theory {#sec:pn}
=====================
Suppose we cannot use mass ratio as an expansion parameter. For instance, if the members of the binary are of equal mass, then $\eta
\equiv m_1 m_2/(m_1 + m_2)^2 = 0.25$. This is large enough that neglect of ${\cal O}(\eta^2)$ and higher terms is problematic. The techniques discussed in Sec. [\[sec:pert\]]{} will not be appropriate.
If the mass ratio is not a good expansion parameter, the potential $\phi \equiv GM/rc^2$ may be. The [*post-Newtonian*]{} (pN) expansion of GR results when we use $\phi$ as our expansion parameter. We now summarize the main concepts which underlie the pN formalism, turning next to a discussion of the pN waveform and its interesting features. Much of our discussion is based on [[@blanchet06]]{}.
Basic concepts and overview of formalism {#sec:pn_formal}
----------------------------------------
One typically begins the pN expansion by examining the Einstein field equations in [*harmonic*]{} or de Donder coordinates (e.g., [@weinberg72]{}, Sec. 7.4). In these coordinates, one defines $$h^{\mu\nu} \equiv \sqrt{-g}g^{\mu\nu} - \eta^{\mu\nu}\;,
\label{eq:h_harmonic}$$ where $g$ is the determinant of $g_{\mu\nu}$. This looks similar to the flat spacetime perturbation defined in Sec. [\[sec:waveform\]]{}; however, we do not assume that $h$ is small. We next impose the gauge condition $$\partial_\alpha h^{\alpha\beta} = 0\;.$$ With these definitions, the [*exact*]{} Einstein field equations are $$\Box h^{\alpha\beta} = \frac{16\pi G}{c^4}\tau^{\alpha\beta}\;,
\label{eq:pn_efe}$$ where $\Box = \eta^{\alpha\beta}\partial_\alpha\partial_\beta$ is the [*flat*]{} spacetime wave operator. The form of Eq.(\[eq:pn\_efe\]) means that the radiative Green’s function we used to derive Eq. (\[eq:lin\_soln\]) can be applied here as well; the solution is simply $$h^{\alpha\beta} = -\frac{4G}{c^4}\int \frac{\tau_{\alpha\beta}({\bf x}',
t - |{\bf x} - {\bf x}'|/c)}{|{\bf x} - {\bf x}'|}d^3x'\;.
\label{eq:pn_formal_soln}$$
Formally, Eq. (\[eq:pn\_formal\_soln\]) is exact. We have swept some crucial details under the rug, however. In particular, we never defined the source $\tau^{\alpha\beta}$. It is given by $$\tau^{\alpha\beta} = (-g)T^{\alpha\beta} +
\frac{c^4\Lambda^{\alpha\beta}}{16\pi G}\;,
\label{eq:pn_tau_def}$$ where $T^{\alpha\beta}$ is the usual stress energy tensor, and $\Lambda^{\alpha\beta}$ encodes much of the nonlinear structure of the Einstein field equations: $$\begin{aligned}
\Lambda^{\alpha\beta} &\equiv& 16\pi(-g)t^{\alpha\beta}_{\rm LL} +
\partial_\nu h^{\alpha\mu}\partial_\mu h^{\beta\nu} -
\partial_\mu \partial_\nu h^{\alpha\beta} h^{\mu\nu}
\\
&=& N^{\alpha\beta}[h,h] + M^{\alpha\beta}[h,h,h]
+ L^{\alpha\beta}[h,h,h,h] + {\cal O}(h^5)\;.\end{aligned}$$ On the first line, $t^{\alpha\beta}_{\rm LL}$ is the Landau-Lifshitz pseudotensor, a quantity which (in certain gauges) allows us to describe how GWs carry energy through spacetime (@ll75, Sec.96). On the second line, the term $N^{\alpha\beta}[h,h]$ means a collection of terms quadratic in $h$ and its derivatives, $M^{\alpha\beta}[h,h,h]$ is a cubic term, etc. Our solution $h^{\alpha\beta}$ appears on both the left- and right-hand sides of Eq. (\[eq:pn\_formal\_soln\]). Such a structure can be handled very well [*iteratively*]{}. We write $$h^{\alpha\beta} = \sum_{n = 1}^\infty G^n h_n^{\alpha\beta}\;.
\label{eq:iteration}$$ The $n = 1$ term is essentially the linearized solution we derived earlier. To go higher, let $\Lambda_n^{\alpha\beta}$ denote the contribution of $\Lambda^{\alpha\beta}$ to the solution $h_n^{\alpha\beta}$. We find $$\Lambda_2^{\alpha\beta} = N^{\alpha\beta}[h_1,h_1]\;,$$ $$\Lambda_3^{\alpha\beta} = M^{\alpha\beta}[h_1,h_1,h_1] +
N^{\alpha\beta}[h_2,h_1] + N^{\alpha\beta}[h_1,h_2]\;,$$ etc.; higher contributions to $\Lambda^{\alpha\beta}$ can be found by expanding its definition and gathering terms. By solving the equations which result from this procedure, it becomes possible to build the spacetime metric and describe the motion of the members of a binary and the radiation that they emit.
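The flavor of this iteration can be captured by a scalar toy model (our own illustration, not from the pN literature): replace the wave equation by $h = s + g\,h^2$, where $s$ plays the role of the linearized source and the quadratic term stands in for $N^{\alpha\beta}[h,h]$. Expanding $h = \sum_n g^n h_n$ and gathering powers of $g$ reproduces exactly the bookkeeping above: each $h_n$ is sourced by products of lower-order pieces.

```python
# Toy scalar analogue of the iteration (eq:iteration): solve
#     h = s + g * h^2
# order by order in g, with N[a, b] -> a * b. Numbers are illustrative.
s, g, N = 0.1, 1.0, 8

h = [0.0, s]                    # h[n] is the coefficient of g^n; h_1 = s
for n in range(2, N + 1):
    # "Lambda_n": every product h[a] * h[b] with a + b = n
    h.append(sum(h[a] * h[n - a] for a in range(1, n)))

h_series = sum(h[n] * g**n for n in range(1, N + 1))
# exact root of g h^2 - h + s = 0, for comparison
h_exact = (1.0 - (1.0 - 4.0 * g * s) ** 0.5) / (2.0 * g)
```

The first few coefficients, $h_2 = s^2$ and $h_3 = 2s^3$, mirror $\Lambda_2 = N[h_1,h_1]$ and $\Lambda_3 = N[h_2,h_1] + N[h_1,h_2]$, and the partial sum converges to the exact solution.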
Features of the post-Newtonian binary waveform {#sec:pn_waveform}
----------------------------------------------
The features of the pN binary waveform are most naturally understood by first considering how we describe the motion of the members of the binary. Take those members to have masses $m_1$ and $m_2$, let their separation be $r$, and let $\mathbf{\hat r}$ point to body 1 from body 2. Then, in the harmonic gauge used for pN theory, the acceleration of body 1 due to the gravity of body 2 is $${\bf a} = {\bf a}_0 + {\bf a}_2 + {\bf a}_4 + {\bf a}_5 + {\bf a}_6 +
{\bf a}_7 \ldots \;.
\label{eq:pNorbitaccel}$$ The zeroth term, $${\bf a}_0 = -\frac{G m_2}{r^2} \mathbf{\hat r},$$ is just the usual Newtonian gravitational acceleration. Each ${\bf
a}_n$ is a pN correction of order $(v/c)^n$. The first such correction is $${\bf a}_2 = \left[\frac{5G^2m_1m_2}{r^3} + \frac{4G^2m_2^2}{r^3} +
\frac{Gm_2}{r^2} \left(\frac{3}{2}({\mathbf{\hat r}}\cdot{\bf v_2})^2
- v_1^2 + 4{\bf v_1}\cdot{\bf v_2} -
2v_2^2\right)\right]\frac{\mathbf{\hat r}}{c^2}\;.
\label{eq:pN_a2}$$ For the acceleration of body 2 due to body 1, exchange labels 1 and 2 and replace $\mathbf{\hat r}$ with $-\mathbf{\hat r}$. Note that ${\bf a}_2$ changes how the acceleration depends on orbital separation. It also shows that the acceleration of body 1 depends on its mass $m_1$. This is a pN manifestation of the “self force” discussed in Sec. [\[sec:evolve\_perturbation\]]{}. So far, the pN acceleration has been computed to order $(v/c)^7$. As we go to high order, the expressions for ${\bf a}_n$ become quite lengthy. An excellent summary is given in [[@blanchet06]]{}, Eq. (131) and surrounding text. (Note that the expression for ${\bf a}_6$ fills over two pages in that paper!)
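A direct transcription makes the structure concrete. The sketch below implements ${\bf a}_0$ and the ${\bf a}_2$ term exactly as quoted in Eq. (\[eq:pN\_a2\]); note that the full 1pN acceleration also contains a velocity-directed piece, while only the $\mathbf{\hat r}$-directed part is written out above. The numbers in the usage example are illustrative only.

```python
import numpy as np

G = c = 1.0

def a_newt(m2, r_vec):
    """Newtonian acceleration of body 1, a_0 = -G m2 rhat / r^2."""
    r = np.linalg.norm(r_vec)
    return -G * m2 / r**2 * (r_vec / r)

def a_1pn(m1, m2, r_vec, v1, v2):
    """The O(v/c)^2 correction, transcribing the rhat-directed
    bracket quoted in Eq. (eq:pN_a2)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    bracket = (5.0 * G**2 * m1 * m2 / r**3
               + 4.0 * G**2 * m2**2 / r**3
               + (G * m2 / r**2) * (1.5 * np.dot(rhat, v2)**2
                                    - np.dot(v1, v1)
                                    + 4.0 * np.dot(v1, v2)
                                    - 2.0 * np.dot(v2, v2)))
    return bracket * rhat / c**2

# illustrative, circular-orbit-like configuration
m1 = m2 = 0.5
r_vec = np.array([100.0, 0.0, 0.0])   # points from body 2 to body 1
v1 = np.array([0.0, 0.05, 0.0])
v2 = -v1

a_total = a_newt(m2, r_vec) + a_1pn(m1, m2, r_vec, v1, v2)
```

For these weak-field numbers the correction is a few percent of the Newtonian term, consistent with its $(v/c)^2 \sim GM/rc^2$ scaling.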
At higher order, we also find a distinctly non-Newtonian element to binary dynamics: its members’ spins [*precess*]{} due to their motion in the binary’s curved spacetime. If the spins are ${\bf S}_1$ and ${\bf S}_2$, one finds [[@th85]]{} $$\frac{d{\bf S}_1}{dt} = \frac{G}{c^2r^3}\left[\left(2 +
\frac{3}{2}\frac{m_2}{m_1}\right)\mu\sqrt{M r}\hat{\bf L}\right]
\times{\bf S}_1 + \frac{G}{c^2r^3}\left[\frac{1}{2}{\bf S}_2 -
\frac{3}{2}({\bf S}_2\cdot\hat{\bf L})\hat{\bf L}\right] \times{\bf
S}_1\;,
\label{eq:dS1dt}$$ $$\frac{d{\bf S}_2}{dt} = \frac{G}{c^2r^3}\left[\left(2 +
\frac{3}{2}\frac{m_1}{m_2}\right)\mu\sqrt{M r}\hat{\bf L}\right]
\times{\bf S}_2 + \frac{G}{c^2r^3}\left[\frac{1}{2}{\bf S}_1 -
\frac{3}{2}({\bf S}_1\cdot\hat{\bf L})\hat{\bf L}\right] \times{\bf
S}_2\;.
\label{eq:dS2dt}$$
We now discuss the ways in which aspects of pN binary dynamics color a system’s waves.
### Gravitational-wave amplitudes. {#sec:pn_amplitude}
Although a binary’s [*dominant*]{} waves come from variations in its mass quadrupole moment, Eq. (\[eq:multipolar\_form\]) shows us that higher moments also generate GWs. In the pN framework, these moments contribute to the amplitude of a binary’s waves beyond the quadrupole form, Eq. (\[eq:h\_NQ\]). Write the gravitational waveform from a source as $$h_{+,\times} = \frac{2G{\cal M}}{c^2D}\left(\frac{\pi G{\cal
M}f}{c^3}\right)^{2/3} \left[H^0_{+,\times} +
v^{1/2}H^{1/2}_{+,\times} + v H^1_{+,\times} + \ldots\right] \;,
\label{eq:hpn_sum}$$ where $v \equiv (\pi G M f/c^3)^{1/3}$ is roughly the orbital speed of the binary’s members (normalized to $c$). The contributions $H^0_{+,\times}$ reproduce the waveform presented in Eq.(\[eq:h\_NQ\]). The higher-order terms $H^{1/2}_{+,\times}$ and $H^1_{+,\times}$ can be found in [@blanchet06], his Eqs. (237) through (241). For our purposes, the key point to note is that these higher-order terms introduce new dependences on the binary’s orbital inclination and its masses. As such, measurement of these terms can provide additional constraints to help us understand the system’s characteristics. Figure [\[fig:hpn\]]{} illustrates the three contributions $H_0$, $H_{1/2}$, and $H_1$ to a binary’s GWs.
![The first three contributions to the $+$ GW polarization, and their sum. In all panels, we plot $(c^2D/G\mu)h_+$ versus $c^3t/GM$. The upper left panel gives the sum \[cf. Eq. (\[eq:hpn\_sum\])\] arising from $H^0_+$, $H^{1/2}_+$, and $H^1_+$; the other panels show the individual contributions from those $H^n_+$. Although subdominant, the terms other than $H^0_+$ make a substantial contribution to the total, especially at the end of inspiral (here normalized to $c^3t/GM = 1$).[]{data-label="fig:hpn"}](hpn.eps){width="5.3in"}
### Orbital phase. {#sec:pn_phase}
The motion of a binary’s members about each other determines the orbital phase. Specializing to circular orbits, we can determine the orbital frequency from the acceleration of the binary’s members; integrating up this frequency, we define the binary’s phase $\Phi(t)$. The first few terms of this phase are given by [[@bdiww95]]{} $$\begin{aligned}
\Phi &=& \Phi_c - \left[\frac{c^3(t_c - t)}{5G{\cal M}}\right]^{5/8}
\left[1 + \left(\frac{3715}{8064} +
\frac{55}{96}\frac{\mu}{M}\right)\Theta^{-1/4} -\frac{3}{16}\left[4\pi
- \beta(t)\right]\Theta^{-3/8} \right. \nonumber\\ & & \left.+
\left(\frac{9275495}{14450688} + \frac{284875}{258048}\frac{\mu}{M} +
\frac{1855}{2048}\frac{\mu^2}{M^2} + \frac{15}{64}\sigma(t)\right)
\Theta^{-1/2}\right]\;,
\label{eq:2pnPhase}\end{aligned}$$ where $$\Theta = \frac{c^3\eta}{5 G M}(t_c - t)\;.$$ Notice that the leading term is just the Newtonian quadrupole phase, Eq. (\[eq:phi\_NQ\]). Each power of $\Theta$ corresponds to a higher order in the pN expansion; Eq. (\[eq:2pnPhase\]) is taken to “second post-Newtonian” order, which means that corrections of $(v/c)^4$ are included. Corrections to order $(v/c)^6$ are summarized in [[@blanchet06]]{}. In addition to the chirp mass ${\cal M}$, the reduced mass $\mu$ enters $\Phi$ when higher order terms are included. These higher pN terms encode additional information about the binary’s masses. At least in principle, including higher pN effects in our wave model makes it possible to determine both chirp mass and reduced mass, fully constraining the binary’s masses.
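A sketch implementation, transcribing Eq. (\[eq:2pnPhase\]) term by term in geometric units ($G = c = 1$, with $\beta$ and $\sigma$ treated as constants rather than the slowly varying functions they become under precession):

```python
import math

def phase_2pn(t, t_c, m1, m2, Phi_c=0.0, beta=0.0, sigma=0.0):
    """Orbital phase of Eq. (eq:2pnPhase); geometric units G = c = 1."""
    M = m1 + m2
    eta = m1 * m2 / M**2                 # equals mu/M
    Mch = eta**0.6 * M                   # chirp mass
    Theta = eta * (t_c - t) / (5.0 * M)
    lead = ((t_c - t) / (5.0 * Mch)) ** 0.625
    bracket = (1.0
               + (3715.0/8064.0 + (55.0/96.0) * eta) * Theta**-0.25
               - (3.0/16.0) * (4.0 * math.pi - beta) * Theta**-0.375
               + (9275495.0/14450688.0 + (284875.0/258048.0) * eta
                  + (1855.0/2048.0) * eta**2
                  + (15.0/64.0) * sigma) * Theta**-0.5)
    return Phi_c - lead * bracket
```

Early in the inspiral ($\Theta \gg 1$) the bracket approaches unity and the Newtonian chirp dominates; the pN corrections become important only as $t \to t_c$.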
Equation (\[eq:2pnPhase\]) also depends on two parameters, $\beta$ and $\sigma$, which come from the binary’s spins and orbit orientation. The “spin-orbit” parameter $\beta$ is $$\beta = \frac{1}{2}\sum_{i = 1}^2\left[113\left(\frac{m_i}{M}\right)^2 +
75\eta\right]\frac{\hat{\bf L}\cdot{\bf S}_i}{m_i^2}\;;
\label{eq:beta_def}$$ the “spin-spin” parameter $\sigma$ is $$\sigma = \frac{\eta}{48m_1^2m_2^2}\left[721(\hat{\bf L}\cdot{\bf S}_1)
(\hat{\bf L}\cdot{\bf S}_2) - 247{\bf S}_1\cdot{\bf S}_2\right]\;
\label{eq:sigma_def}$$ [[@bdiww95]]{}. As we’ll see in Sec. [\[sec:gwastro\]]{}, these parameters encode valuable information, especially when spin precession is taken into account.
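These parameters are simple to evaluate once the spins and the orbital angular momentum direction are specified. The sketch below transcribes Eqs. (\[eq:beta\_def\]) and (\[eq:sigma\_def\]) as written (geometric units; the spin vectors and masses in the test configuration are invented for illustration):

```python
import numpy as np

def beta_sigma(m1, m2, S1, S2, Lhat):
    """Spin-orbit beta and spin-spin sigma, transcribing
    Eqs. (eq:beta_def) and (eq:sigma_def); spins are 3-vectors."""
    M = m1 + m2
    eta = m1 * m2 / M**2
    beta = sum(0.5 * (113.0 * (m_i / M)**2 + 75.0 * eta)
               * np.dot(Lhat, S_i) / m_i**2
               for m_i, S_i in ((m1, S1), (m2, S2)))
    sigma = (eta / (48.0 * m1**2 * m2**2)) * (
        721.0 * np.dot(Lhat, S1) * np.dot(Lhat, S2)
        - 247.0 * np.dot(S1, S2))
    return beta, sigma
```

Flipping both spins flips $\beta$ but leaves $\sigma$ unchanged, reflecting the linear (spin-orbit) versus bilinear (spin-spin) character of the two couplings.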
The $\mu \ll M$ limit of Eq. (\[eq:2pnPhase\]) can be computed with black hole perturbation theory (Sec. [\[sec:pert\]]{}) evaluated for circular orbits with $r \gg GM/c^2$. The orbital phase is found by integrating the orbital frequency. By changing variables, one can relate this to the orbital energy and the rate at which GWs evolve this energy: $$\Phi = \int\Omega^{\rm orb}\,dt = \int \frac{dE^{\rm
orb}/d\Omega}{dE^{\rm GW}/dt} \Omega\,d\Omega\;.$$ The orbital energy $E^{\rm orb}$ is simple to calculate and to express as a function of orbital frequency. For example, for orbits of non-rotating black holes, we have $$E^{\rm orb} = \mu c^2\frac{1 - 2v^2/c^2}{\sqrt{1 -
3v^2/c^2}}\;,$$ where $v \equiv r\Omega$. For circular, equatorial orbits, [[@msstt97]]{} [*analytically*]{} solve the Teukolsky equation as an expansion in $v$, calculating $dE^{\rm GW}/dt$ to ${\cal
O}[(v/c)^{11}]$. This body of work confirms, in a completely independent way, all of the terms which do not depend on the mass ratio $\mu/M$ in Eq. (\[eq:2pnPhase\]). The fact that these terms are known to such high order is an important input to the effective one-body approach described in Sec. [\[sec:eff\_one\_body\]]{}.
### Spin precession. {#sec:pn_precession}
Although the spin vectors ${\bf S}_1$ and ${\bf S}_2$ wiggle around according to the prescription of Eqs. (\[eq:dS1dt\]) and (\[eq:dS2dt\]), the system must preserve a notion of [*global*]{} angular momentum. Neglecting for a moment the secular evolution of the binary’s orbit due to GW emission, pN encodes the notion that the total angular momentum $${\bf J} = {\bf L} + {\bf S}_1 + {\bf S}_2$$ must be conserved. This means ${\bf L}$ must oscillate to compensate for the spins’ dynamics, and guarantees that, when spin precession is accounted for in our evolutionary models, the phase parameters $\beta$ and $\sigma$ become time varying. Likewise, the inclination angle $\iota$ varies with time. Precession thus leads to phase and amplitude modulation of a source’s GWs. Figure [\[fig:prec\]]{} illustrates precession’s impact, showing the late inspiral waves for binaries that are identical aside from spin.
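A quick numerical check of Eqs. (\[eq:dS1dt\])–(\[eq:dS2dt\]) makes one feature obvious: each right-hand side has the form ${\bf \Omega} \times {\bf S}$, so the spin magnitudes are constant even as the directions precess. The sketch below uses illustrative parameters and holds $\hat{\bf L}$ fixed (ignoring the compensating precession of ${\bf L}$ just discussed), integrating the coupled pair with a midpoint rule:

```python
import numpy as np

G = c = 1.0
m1 = m2 = 0.5
M, mu = m1 + m2, m1 * m2 / (m1 + m2)
r = 20.0
Lhat = np.array([0.0, 0.0, 1.0])   # held fixed here (an approximation:
                                   # in the full evolution L precesses too)

def dS(S_self, S_other, m_self, m_other):
    """Right-hand side of Eqs. (eq:dS1dt)/(eq:dS2dt)."""
    pre = G / (c**2 * r**3)
    so = (2.0 + 1.5 * m_other / m_self) * mu * np.sqrt(M * r) * Lhat
    ss = 0.5 * S_other - 1.5 * np.dot(S_other, Lhat) * Lhat
    return pre * np.cross(so + ss, S_self)

S1 = np.array([0.3, 0.0, 0.1])     # illustrative spin vectors
S2 = np.array([0.0, -0.2, 0.2])
n1, n2 = np.linalg.norm(S1), np.linalg.norm(S2)

dt = 0.5
for _ in range(4000):
    Sa = S1 + 0.5 * dt * dS(S1, S2, m1, m2)   # midpoint (RK2) step
    Sb = S2 + 0.5 * dt * dS(S2, S1, m2, m1)
    S1, S2 = S1 + dt * dS(Sa, Sb, m1, m2), S2 + dt * dS(Sb, Sa, m2, m1)

# directions precess substantially; magnitudes do not
drift1 = abs(np.linalg.norm(S1) - n1)
```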
![Illustration of precession’s impact on a binary’s waves. The top panels show $h_+$ and $h_\times$ for a binary that contains nonspinning black holes; the lower panels show the waveforms for a binary with rapidly rotating ($a = 0.9M$) holes. The strong amplitude modulation is readily apparent in this figure. Less obvious, but also included, is the frequency modulation that enters through the spin-dependent orbital phase parameters $\beta$ and $\sigma$ \[cf. Eq. (\[eq:2pnPhase\])\].[]{data-label="fig:prec"}](prec.eps){width="5.3in"}
The effective one-body approach {#sec:eff_one_body}
-------------------------------
Because pN techniques are based on an expansion in $\phi = GM/rc^2$, it had been thought that they would only apply for $r \gtrsim
10GM/c^2$, and that numerical relativity would be needed to cover the inspiral past that radius, through the final plunge and merger. This thinking was radically changed by [[@bd99]]{}, which introduced the [*effective one-body*]{} approach to two-body dynamics. This technique has proved to be an excellent tool for describing the late inspiral, plunge, and merger of two black holes. We now describe the key ideas of this approach; our description owes much to the helpful introductory lectures by [[@damour_eob08]]{}.
As the name suggests, the key observation of this approach is that the motion of two bodies $(m_1, m_2)$ about one another can be regarded as the motion of a single test body of mass $\mu = m_1m_2/(m_1 + m_2)$ in some spacetime. One begins by examining the Hamiltonian which gives the conservative contribution to the equations of motion. Let the binary’s momenta be ${\bf p}_{1,2}$ and its generalized positions ${\bf q}_{1,2}$. If we work in the center of mass frame, then the Hamiltonian can only be a function of the [*relative*]{} position, ${\bf q} \equiv {\bf q}_1 - {\bf q}_2$, and can only depend on the momentum ${\bf p} \equiv {\bf p}_1 = -{\bf p}_2$. For example, the conservative motion can be described to second-post-Newtonian order \[i.e., ${\cal O}(v^4/c^4)$\] with the Hamiltonian $$H({\bf q}, {\bf p}) = H_0({\bf p}, {\bf q}) + \frac{1}{c^2}H_2({\bf
p}, {\bf q}) + \frac{1}{c^4}H_4({\bf p}, {\bf q})\;,
\label{eq:eob_hamiltonian}$$ where $H_0 = |{\bf p}|^2/2\mu + GM\mu/|{\bf q}|$ encodes the Newtonian dynamics, and $H_{2,4}$ describes pN corrections to that motion. A binary’s energy and angular momentum can be found from this Hamiltonian without too much difficulty.
The next step is to write down an effective one-body metric, $$ds^2 = -A(R) c^2 dT^2 + B(R)dR^2 + R^2(d\theta^2 +
\sin^2\theta\,d\phi^2)\;,
\label{eq:eob_metric}$$ where $A(R) = 1 + \alpha_1(GM/Rc^2) + \alpha_2(GM/Rc^2)^2 + \ldots$; a similar expansion describes $B(R)$. The coefficients $\alpha_i$ depend on the reduced mass ratio, $\eta = \mu/M$. The effective problem is then to describe the motion of a test body in the spacetime (\[eq:eob\_metric\]). By asserting a correspondence between certain action variables in the pN framework and in the effective framework, the coefficients $\alpha_i$ are completely fixed. For example, one finds that, as $\eta \to 0$, the metric (\[eq:eob\_metric\]) is simply the Schwarzschild spacetime. The effective problem can thus be regarded as the motion of a test body around a “deformed” black hole, with $\eta$ controlling the deformation. See [[@damour_eob08]]{} and references therein for further discussion.
One must also describe radiation reaction in the effective one-body approach. A key innovation introduced by Damour and coworkers \[see [[@dis98]]{} for the original presentation of this idea\] is to [*re-sum*]{} the pN results for energy loss due to GWs in order to obtain a result that is good into the strong field. In more detail, we put $$\frac{dp_\phi}{dt} = -{\cal F}_\phi\;.$$ The function ${\cal F}_\phi$ is known to rather high order in orbital velocity $v$ by a combination of analyses in both pN theory \[see, e.g., [[@blanchet06]]{} for a review\] and to analytic expansion of results in perturbation theory [[@msstt97]]{}. It can be written $${\cal F}(v) = \frac{32G}{5c^5}\eta r^4\Omega^5 F(v)\;,$$ where $$F(v) = 1 + \sum a_n\left(\frac{v}{c}\right)^{n/2} + \sum b_n
\log(v/c)\left(\frac{v}{c}\right)^{n/2}\;.$$ Post-Newtonian theory allows us to compute $a_n$ including contributions in $\mu/M$, up to $n = 7$, and shows that $b_n \ne 0$ for $n = 6$ \[@blanchet06, Eq. (168)\]. Perturbation theory \[@msstt97, Eq. (4.18)\] gives us the ${\cal O}[(\mu/M)^0]$ contributions for $a_n$ up to $n = 11$, and shows that $b_n \ne 0$ for $n = 8, 9, 10, 11$.
The resummation introduced by Damour, Iyer & Sathyaprakash requires factoring out a pole at $v = \hat v$ in the function $F(v)$ and then reorganizing the remaining terms using a [*Padé approximant*]{}: $$F^{\rm rs}(v) = \left(1 - v/\hat v\right)^{-1}
P\left[(1 - v/\hat v)F(v)\right]\;.
\label{eq:Pade_form}$$ The approximant $P$ converts an $N$-th order polynomial into a ratio of $N/2$-th order polynomials whose small $v$ expansion reproduces the original polynomial: $$P\left[1 + \sum_{n = 1}^N c_n (v/c)^n\right] = \frac{1 + \sum_{n =
1}^{N/2} d_n (v/c)^n}{1 + \sum_{n = 1}^{N/2} e_n (v/c)^n}\;.$$ Using this approach to define the evolution of a system due to GW backreaction, it is not so difficult to compute the waves that a binary generates as its members spiral together. Indeed, by augmenting these waves with the “ringdown” that comes once the spacetime is well-described by a single black hole, the effective one-body approach has recently had great success in matching to the waveforms that are produced by numerical relativity simulations. We defer a discussion of this matching until after we have described numerical relativity in more detail, in order that the effectiveness of this comparison can be made more clear.
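The diagonal Padé step can be sketched concretely. Given the Taylor coefficients of a function, matching orders yields a linear system for the denominator coefficients, and the numerator then follows by multiplication. This is generic Padé machinery, not the specific resummation of Damour, Iyer & Sathyaprakash (which also factors out the pole at $v = \hat v$ first):

```python
import numpy as np

def pade(c, M):
    """[M/M] Pade approximant of 1 + sum_{n>=1} c[n] x^n, given
    coefficients c[0..2M] with c[0] = 1. Returns (d, e) such that
    P(x) = (sum d[n] x^n) / (sum e[n] x^n), with d[0] = e[0] = 1."""
    N = 2 * M
    # matching orders M+1..N fixes the denominator coefficients e[1..M]
    A = np.array([[c[n - j] for j in range(1, M + 1)]
                  for n in range(M + 1, N + 1)], dtype=float)
    b = -np.array([c[n] for n in range(M + 1, N + 1)], dtype=float)
    e = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # numerator: multiply the series by the denominator, keep orders 0..M
    d = np.array([sum(c[n - j] * e[j] for j in range(0, min(n, M) + 1))
                  for n in range(0, M + 1)])
    return d, e
```

For example, feeding in the first five Taylor coefficients of $e^x$ reproduces the classic $[2/2]$ approximant $(1 + x/2 + x^2/12)/(1 - x/2 + x^2/12)$, which is far more accurate at moderate $x$ than the truncated series itself.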
Numerical relativity {#sec:numrel}
====================
Numerical relativity means the direct numerical integration of the Einstein field equations, evolving from an “initial” spacetime to a final state. This requires rethinking some of our ideas about GR. As a prelude, consider Maxwell’s equations, written in somewhat non-standard form: $$\begin{aligned}
\nabla\cdot{\bf E} = 4\pi\rho\;,&\qquad&
\nabla\cdot{\bf B} = 0\;;
\label{eq:div_eqs}\\
\frac{\partial{\bf B}}{\partial t} = -c\nabla\times{\bf E}\;,
&\qquad&
\frac{\partial{\bf E}}{\partial t} = c\nabla\times{\bf B} - 4\pi{\bf J}\;.
\label{eq:curl_eqs}\end{aligned}$$ These equations tell us how ${\bf E}$ and ${\bf B}$ are related throughout spacetime. Notice that Eqs. (\[eq:div\_eqs\]) and (\[eq:curl\_eqs\]) play very different roles here. The divergence equations contain no time derivatives; if we imagine “slicing” spacetime into a stack of constant time slices, then Eq.(\[eq:div\_eqs\]) tells us how ${\bf E}$ and ${\bf B}$ are [*constrained*]{} on each slice. By contrast, the curl equations do include time derivatives, and so tell us how ${\bf E}$ and ${\bf B}$ are related as we [*evolve*]{} from slice to slice. We turn now to developing the Einstein equations into a form appropriate for evolving from initial data; our discussion has been heavily influenced by the nicely pedagogical presentation of [@baumshap03].
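This split can be seen numerically in a toy 2+1-dimensional Maxwell evolution (a TE mode on a periodic grid with $c = 1$; the grid size and initial data are arbitrary choices of ours). The curl equations advance $E_z$, $B_x$, $B_y$ in time, while the constraint $\nabla\cdot{\bf B} = 0$, once imposed, is preserved automatically because the centered difference operators commute. This is precisely the behavior a well-posed constraint/evolution split of the Einstein equations must reproduce.

```python
import numpy as np

n, dx, dt = 64, 1.0, 0.4
x = np.arange(n) * dx
# initial data: a Gaussian pulse in Ez, no magnetic field (div B = 0)
Ez = np.exp(-((x[:, None] - 32.0)**2 + (x[None, :] - 32.0)**2) / 50.0)
Bx = np.zeros((n, n))
By = np.zeros((n, n))

def ddx(f):   # centered difference, periodic boundaries
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * dx)

def ddy(f):
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * dx)

for _ in range(200):
    # the evolution ("curl") half of Maxwell, vacuum, c = 1
    Bx -= dt * ddy(Ez)
    By += dt * ddx(Ez)
    Ez += dt * (ddx(By) - ddy(Bx))

# the constraint: analytically zero for all time, since ddx and ddy commute
div_B = ddx(Bx) + ddy(By)
```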
Overview: From geometric equations to evolution equations {#sec:nr_overview}
---------------------------------------------------------
How do we use Eq. (\[eq:einstein\]) to evolve spacetime from some initial state? The Einstein field equations normally treat space and time democratically — no explicit notion of time is built into Eq.(\[eq:einstein\]). The very question requires us to change our thinking: “evolving” from an “initial” state requires some notion of time.
Suppose that we have chosen a time coordinate, defining a way to slice spacetime into space and time. We must reformulate the Einstein field equations using quantities defined solely on a given time slice. Figure [\[fig:slices\]]{} illustrates how two nearby time slices may be embedded in spacetime. Once time is set, we can freely choose spatial coordinates in each slice; we show $x^i$ on both slices.
![Slicing of “spacetime” into “space” and “time.” The vector $\vec n$ is normal to the slice at $t$. An observer who moves along $\vec n$ experiences an interval of proper time $d\tau =
\alpha\,dt$, where $\alpha$ is the [*lapse*]{} function. The [*shift*]{} $\vec\beta$ describes the translation of spatial coordinates $x^i$ from slice-to-slice as seen by that normal observer.[]{data-label="fig:slices"}](slices.eps){width="5.2in"}
Let $\vec n$ be normal to the bottom slice. The [*lapse*]{} $\alpha
\equiv d\tau/dt$ sets the proper time experienced by an observer who moves along $\vec n$; the [*shift*]{} $\beta^i$ tells us by how much $x^i$ is displaced (“shifted”) on the second slice relative to the normal observer. We will soon see that $\alpha$ and $\beta^i$ are completely unconstrained by Einstein’s equations. They let us set coordinates as conveniently as possible, generalizing the gauge generator $\xi^\mu$ used in linearized theory (cf. Sec. [\[sec:gwbasics\]]{}) to the strong field.
The proper spacetime separation of $x^i$ and $x^i + dx^i$ is then $$ds^2 = -\alpha^2 dt^2 + g_{ij}(dx^i + \beta^i dt)(dx^j + \beta^j
dt)\;.$$ (In this section, we will put $c = 1$; various factors become rather unwieldy otherwise.) We now have a form for the metric of spacetime, a notion of constant time slices, and the normal to a slice $\vec n$. Because we are interested in understanding quantities which “live” in a given slice (i.e., orthogonal to the normal $\vec n$), we build the projection tensor $\gamma_{\mu\nu} = g_{\mu\nu} + n_\mu n_\nu$. This tensor is just the metric for the geometry in each slice. We can choose coordinates so that $\gamma_{tt} = \gamma_{ti} = 0$, and $\gamma_{ij} = g_{ij}$; we will assume these from now on.
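A small numerical check (with an arbitrarily chosen lapse, shift, and spatial metric; the numbers mean nothing physically) confirms the algebra: the normal is unit timelike, and $\gamma_{\mu\nu} = g_{\mu\nu} + n_\mu n_\nu$ annihilates $n^\nu$ while reducing to $\gamma_{ij} = g_{ij}$ in the spatial block.

```python
import numpy as np

# signature (-,+,+,+), index order (t, x, y, z), c = 1
alpha = 1.3
beta_up = np.array([0.2, -0.1, 0.05])      # shift beta^i
gamma3 = np.array([[1.1, 0.1, 0.0],        # spatial metric gamma_ij
                   [0.1, 0.9, 0.2],
                   [0.0, 0.2, 1.4]])

beta_dn = gamma3 @ beta_up                 # beta_i = gamma_ij beta^j
g = np.zeros((4, 4))
g[0, 0] = -(alpha**2 - beta_up @ beta_dn)
g[0, 1:] = g[1:, 0] = beta_dn
g[1:, 1:] = gamma3

n_dn = np.array([-alpha, 0.0, 0.0, 0.0])   # n_mu, normal to the slice
n_up = np.linalg.inv(g) @ n_dn             # n^mu = g^{mu nu} n_nu

proj = g + np.outer(n_dn, n_dn)            # gamma_{mu nu}
```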
We now have enough pieces to see how to build the field equations in this formalism: We take Eq. (\[eq:einstein\]) and project components parallel and orthogonal to $\vec n$. Consider first the component that is completely parallel to $\vec n$: $$G_{\alpha\beta} n^\alpha n^\beta = 8\pi G T_{\alpha\beta} n^\alpha
n^\beta \quad\longrightarrow\quad
R + K^2 - K_{ij} K^{ij} = 16\pi G\rho\;.
\label{eq:hamilton}$$ \[See [[@baumshap03]]{} for a detailed derivation of Eq.(\[eq:hamilton\]).\] In this equation, $R$ is the Ricci scalar for the 3-metric $\gamma_{ij}$, $\rho = T_{\alpha\beta} n^\alpha n^\beta$, and $$\begin{aligned}
K_{ij} &\equiv& -{\gamma_i}^\alpha{\gamma_j}^\beta \nabla_\alpha
n_\beta
\nonumber\\
&=& \frac{1}{2\alpha}\left(-\partial_t\gamma_{ij} + D_i\beta_j +
D_j\beta_i\right)
\label{eq:extrinsic}\end{aligned}$$ is the [*extrinsic curvature*]{}. (The operator $D_i$ is a covariant derivative for the metric $\gamma_{ij}$.) It describes the portion of the curvature which is due to the way that each constant time slice is embedded in the full spacetime. Equation (\[eq:hamilton\]) is known as the [*Hamiltonian constraint*]{}. \[See [[@baumshap03]]{} for details of how to go from the first line of (\[eq:extrinsic\]), which is a definition, to the second line, which is more useful here.\] Notice that it contains no time derivatives of $K_{ij}$. This equation is thus a [*constraint*]{}, relating data on a given time slice.
Next, components parallel to $\vec n$ on one index and orthogonal on the other: $$G_{\alpha\beta} n^\alpha {\gamma_i}^\beta = 8\pi G T_{\alpha\beta}
n^\alpha {\gamma_i}^\beta \quad \longrightarrow \quad D_j{K^j}_i - D_i
K = 8\pi G j_i
\label{eq:momentum}$$ The matter current $j_i = -T_{\alpha\beta}n^\alpha{\gamma_i}^\beta$. Equation (\[eq:momentum\]) is the [*momentum constraint*]{}; notice it also has no time derivatives of $K_{ij}$.
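As a simple consistency check, consider flat spacetime sliced by surfaces of constant Minkowski time: $\alpha = 1$, $\beta^i = 0$, $\gamma_{ij} = \delta_{ij}$. Equation (\[eq:extrinsic\]) then gives $K_{ij} = 0$, the 3-metric is flat so $R = 0$, and with $\rho = 0$ and $j_i = 0$ both the Hamiltonian constraint (\[eq:hamilton\]) and the momentum constraint (\[eq:momentum\]) reduce to $0 = 0$, as they must.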
Finally, project completely orthogonal to $\vec n$: $$\begin{aligned}
G_{\alpha\beta} {\gamma_i}^\alpha {\gamma_j}^\beta &=& 8\pi G
T_{\alpha\beta} {\gamma_i}^\alpha {\gamma_j}^\beta \quad
\longrightarrow
\nonumber\\
\partial_t K_{ij} &=& -D_i D_j\alpha + \alpha\left[R_{ij} - 2K_{ik}{K^k}_j +
K K_{ij} - 8\pi G(\mbox{matter})\right]
\nonumber\\
& + & \beta^k D_k K_{ij} + K_{ik} D_j\beta^k + K_{kj} D_i\beta^k\;.
\label{eq:evolution}\end{aligned}$$ \[We have abbreviated a combination of projections of the stress-energy tensor as “matter.” Interested readers can find more details in [[@baumshap03]]{}.\] Combining Eqs. (\[eq:evolution\]) with (\[eq:extrinsic\]) gives us a full set of [*evolution equations*]{} for the metric and the extrinsic curvature, describing how the geometry changes as we evolve from time slice to time slice.
The field equations sketched here are the [*ADM*]{} equations [[@adm62]]{}. Today, most groups work with modified versions of these equations; a particularly popular version is the [*BSSN*]{} system, developed by [[@bs99]]{}, building on foundational work by [[@sn95]]{}. In BSSN, one rewrites the spatial metric as $${\tilde\gamma}_{ij} = e^{-4\phi}\gamma_{ij}\;,
\label{eq:conformal}$$ where $\phi$ is chosen so that $e^{12\phi} = {\rm det}(\gamma_{ij})$. With this choice, ${\rm det}({\tilde\gamma}_{ij}) = 1$. Roughly speaking, the decomposition (\[eq:conformal\]) splits the geometry into “transverse” and “longitudinal” degrees of freedom (encapsulated by ${\tilde\gamma}_{ij}$ and $\phi$, respectively). One similarly splits the extrinsic curvature into “longitudinal” and “transverse” parts by separately treating its trace $K = \gamma^{ij}K_{ij}$ and its trace-free part: $$A_{ij} = K_{ij} - \frac{1}{3}\gamma_{ij}K\;.$$ It is convenient to conformally rescale $A_{ij}$, using ${\tilde
A}_{ij} = e^{-4\phi}A_{ij}$. One then develops evolution equations for $\phi$, ${\tilde\gamma}_{ij}$, $K$, and ${\tilde A}_{ij}$. See [[@bs99]]{} for detailed discussion.
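Both BSSN identities above — ${\rm det}({\tilde\gamma}_{ij}) = 1$ and the tracelessness of $A_{ij}$ — are easy to verify numerically for an arbitrary (made-up) spatial metric and extrinsic curvature; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical positive-definite spatial metric gamma_ij.
M = rng.normal(size=(3, 3))
gamma = M @ M.T + 3.0 * np.eye(3)

# Conformal factor chosen so that e^{12 phi} = det(gamma_ij).
phi = np.log(np.linalg.det(gamma)) / 12.0

# Conformal metric: tilde gamma_ij = e^{-4 phi} gamma_ij.
gamma_tilde = np.exp(-4.0 * phi) * gamma
print(np.linalg.det(gamma_tilde))         # = 1 by construction

# Trace / trace-free split of a symmetric "extrinsic curvature" K_ij.
B = rng.normal(size=(3, 3))
K_ij = 0.5 * (B + B.T)
gamma_inv = np.linalg.inv(gamma)
K = np.sum(gamma_inv * K_ij)              # K = gamma^{ij} K_ij
A_ij = K_ij - gamma * K / 3.0
print(np.sum(gamma_inv * A_ij))           # trace of A_ij vanishes
```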
The struggles and the breakthrough {#sec:nr_breakthrough}
----------------------------------
Having recast the equations of GR into a form that lets us evolve forward in time, we might hope that simulating the merger of two compact bodies would now not be too difficult. In addition to the equations discussed above, we need a few additional pieces:
1. [*Initial data*]{}; i.e., a description of the metric and extrinsic curvature of a binary at the first moment in a simulation. Ideally, we might hope that this initial data set would be related to an earlier inspiral of widely separated bodies (e.g., [@samaya06]{} and [@ytob06]{}). However, any method which can produce a bound binary with specified masses $m_1$, $m_2$ and spins ${\bf S}_1$, ${\bf S}_2$ should allow us to simulate a binary (although it may be “contaminated” by having the wrong GW content at early times).
2. [*Gauge or coordinate conditions*]{}; i.e., an algorithm by which the lapse $\alpha$ and shift $\beta^i$ are selected. Because these functions are not determined by the Einstein field equations but are instead freely specified, they can be selected in a way that is as convenient as possible. Such wonderful freedom can also be horrible freedom: one could choose gauge conditions that obscure the physics we wish to study, or that fail to yield a stable simulation.
3. [*Boundary conditions.*]{} If the simulation contains black holes, then they should have an event horizon from which nothing comes out. Unfortunately, we cannot know where horizons are located until the full spacetime is built (though we have a good estimate, the “apparent horizon,” which can be computed from information on a single time slice). They also contain singularities. Hopefully, event horizons will prevent singular fields from contaminating the spacetime. The computation will also have an outer boundary. Far from the binary, the spacetime should asymptote to a flat (or Robertson-Walker) form, with a gentle admixture of outgoing GWs.
How to choose these ingredients has been an active area of research for many years. Early on, some workers were confident it was just a matter of choosing the right combination of methods and the pieces would fall into place. The initial optimism was nicely encapsulated by the following quote from [[@300yrs]]{}, p. 379:
> $\ldots$ numerical relativity is likely to give us, in the next five years or so, a detailed and highly reliable picture of the final coalescence and the wave forms it produces, including the dependence on the holes’ masses and angular momenta.
For many years, Thorne’s optimism seemed misplaced. Despite having a detailed understanding of the principles involved, it seemed simply not possible to evolve binary systems for any interesting length of time. In most cases, the binary’s members could complete a fraction of an orbit and then the code would crash. As Joan Centrella has emphasized in several venues, by roughly 2004 “People were beginning to say ‘numerical relativity cannot be done’” (J. Centrella, private communication).
A major issue with many of these simulations appeared to be [*constraint violating modes*]{}. These are solutions of the system $(\partial_t\gamma_{ij}, \partial_t K_{ij})$ that do not satisfy the constraint equations (\[eq:hamilton\]) and (\[eq:momentum\]). As with the Maxwell equations, one can prove that a solution which satisfies the constraints initially will continue to satisfy them at later times [*in the continuum limit*]{} \[cf. Sec. IIIC of [[@pretorius07]]{}\]. Unfortunately, numerical relativity does not work in the continuum limit; and, even more unfortunately, constraint violating modes generically tend to be unstable [[@ls02]]{}. This means that small numerical errors can “seed” the modes, which then grow unstably and swamp the true solution. The challenge was to keep the seed instabilities as small as possible and then to prevent them from growing. In this way, [[@btj04]]{} were able to compute one full orbit of a black hole binary before their simulation crashed.
It was thus something of a shock when [[@pretorius05]]{} demonstrated a binary black hole simulation that executed a full orbit, followed by merger and ringdown to a Kerr black hole, with no crash apparent. Pretorius used a formulation of the Einstein equations based on coordinates similar to the de Donder coordinates described in Sec. [\[sec:pn\]]{}. These are known as “generalized harmonic coordinates”; see [[@pretorius07]]{} for detailed discussion. Because his success came with such a radically different system of equations, it was suspected that the ADM-like equations might have to be abandoned. These concerns were allayed by near-simultaneous discoveries by numerical relativity groups then at the University of Texas in Brownsville [[@campanellietal06]]{} and the Goddard Space Flight Center [[@bakeretal06]]{}. The Campanelli et al. and Baker et al. groups both used the BSSN formalism described at the end of Sec. [\[sec:nr\_overview\]]{}. To make this approach work, they independently discovered a gauge condition that allows the black holes in their simulations to move across their computational grid. (Earlier calculations typically fixed the coordinate locations of the black holes, which means that the coordinates effectively co-rotated with the motion of the binary.) It was quickly shown that this approach yielded results that agreed excellently with Pretorius’ setup [[@bcpz07]]{}.
Implementing the so-called “moving puncture” approach was sufficiently simple that the vast majority of numerical relativity groups were able to immediately apply it to their codes and begin evolving binary black holes. These techniques have also revolutionized our ability to model systems containing neutron stars (@su06, @etienne08, @bgr08, @skyt09). In the past four years, we have thus moved from a state of being barely able to model a single orbit of a compact binary system in full GR to being able to model nearly arbitrary binary configurations. Though Thorne’s 1987 prediction quoted above was far too optimistic on the timescale, his prediction for how well the physics of binary coalescence would be understood appears to be exactly correct.
GWs from numerical relativity and effective one body {#sec:nr_eob}
----------------------------------------------------
Prior to this breakthrough, the effective one-body approach gave the only strong-field description of GWs from the coalescence of two black holes. Indeed, these techniques made a rather strong prediction: The coalescence waveform should be fairly “boring,” in the sense that we expect the frequency and amplitude to chirp up to the point at which the physical system is well modeled as a single deformed black hole. Then, it should rapidly ring down to a quiescent Kerr state.
Such a waveform is indeed exactly what numerical simulations find, at least for the cases that have been studied so far. It has since been found that predictions from the effective one-body formalism give an outstanding description of the results from numerical relativity. There is some freedom to adjust how one matches effective one-body waves to the numerical relativity output \[e.g., the choice of pole $\hat v$ in Eq. (\[eq:Pade\_form\])\]; [[@betal07]]{} and [[@dnhhb08]]{} describe how to do this matching.
![Left panel: Comparison between the numerical relativity computed frequency and phase and the effective one-body frequency and phase. Right panel: Gravitational waveform computed by those two methods. These plots are for the coalescence of two non-spinning black holes with a mass ratio $m_2/m_1 = 4$. Figure kindly provided to the author by Alessandra Buonanno, taken from [@betal07]. Note that $G = c = 1$ in the labels to these figures.[]{data-label="fig:eobnr"}](eob1.eps "fig:"){width="2.55in"} ![Left panel: Comparison between the numerical relativity computed frequency and phase and the effective one-body frequency and phase. Right panel: Gravitational waveform computed by those two methods. These plots are for the coalescence of two non-spinning black holes with a mass ratio $m_2/m_1 = 4$. Figure kindly provided to the author by Alessandra Buonanno, taken from [@betal07]. Note that $G = c = 1$ in the labels to these figures.[]{data-label="fig:eobnr"}](eob2.eps "fig:"){width="2.55in"}
Figure [\[fig:eobnr\]]{}, taken from Buonanno et al., gives an example of how well the waveforms match one another. Over the entire span computed, the two waveforms differ in phase by only a few hundredths of a cycle. The agreement is so good that one can realistically imagine “calibrating” the effective one-body waveforms with a relatively small number of expensive numerical relativity computations, and then densely sampling the binary parameter space using the effective one-body approach.
Gravitational-wave recoil {#sec:recoil}
=========================
That GWs carry energy and angular momentum from a binary, driving its members to spiral together as described in Sec., is widely appreciated. Until recently, it was not so well appreciated that the waves can carry [*linear*]{} momentum as well. If the binary and its radiation pattern are asymmetric, then that radiation carries a net flux of momentum given by $$\frac{dp^i}{dt} = \frac{R^2}{c}\int d\Omega\, T^{00}\,n^i\;,$$ where $T^{00}$ is the energy-flux component of the Isaacson tensor (\[eq:isaacson\_tmunu\]), $n^i$ is the $i$-th component of the radial unit vector, and the integral is taken over a large sphere ($R \to
\infty$) around the source. Recent work has shown that the contribution to this “kick” from the final plunge and merger of coalescing black holes can be particularly strong. We now summarize the basic physics of this effect, and survey recent results.
[[@bekenstein73]]{} appears to have first appreciated that the momentum flux of GWs could have interesting astrophysical consequences. [[@fitchett83]]{} then estimated the impact this flux could have on a binary. An aspect of the problem which Fitchett’s analysis makes very clear is that the recoil comes from the beating of different multipolar contributions to the radiation: If one only looks at the quadrupole part of the GWs \[cf. Eq. (\[eq:multipolar\_form\])\], the momentum flux is zero. Fitchett’s analysis included octupole and current-quadrupole radiation, and found $$v_{\rm kick} \simeq 1450\,{\rm km/sec}\;\frac{f(q)}{f_{\rm max}}\,
\left(\frac{GM_{\rm tot}/c^2}{R_{\rm term}}\right)^4\;,
\label{eq:kickmag}$$ where $f(q) = q^2(1 - q)/(1 + q)^5$ gives the dependence on mass ratio $q = m_1/m_2$. This function has a maximum at $q \simeq 0.38$. The radius $R_{\rm term}$ describes when wave emission cuts off; for systems containing black holes this will scale with the total mass. Thus, the recoil does not depend on total mass, just mass ratio.
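The quoted maximum of $f(q)$ is easy to check: setting $df/dq = 0$ gives $q^2 - 3q + 1 = 0$, whose relevant root is $q = (3 - \sqrt{5})/2 \approx 0.38$. A short numerical sketch (the grid resolution is arbitrary):

```python
import numpy as np

# Fitchett's mass-ratio dependence f(q) = q^2 (1 - q) / (1 + q)^5.
q = np.linspace(1e-3, 1.0, 200_000)
f = q**2 * (1.0 - q) / (1.0 + q)**5

q_max = q[np.argmax(f)]
print(q_max)  # ~0.38; analytically, (3 - sqrt(5))/2
```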
Fitchett’s analysis is similar to our discussion in Sec. in that Newtonian dynamics are supplemented with multipolar wave emission. Because the effect is strongest when $R_{\rm term}$ is smallest, it was long clear that a proper relativistic analysis was needed to get this kick correct. Indeed, a prescient analysis by [@rr89] suggested that binaries containing rapidly spinning black holes were likely to be especially interesting; as we shall see in a few moments, they were absolutely correct.
[[@fhh04]]{} provided the first estimates of recoil which did the strong-field physics more-or-less correctly. They used the perturbative techniques described in Sec. [\[sec:pert\]]{}, arguing that one can extrapolate from the small mass ratio regime to $q \sim
0.2$ or so with errors of a few tens of percent. Unfortunately, their code at that time did not work well with plunging orbits, so they had large error bars. They did find, however, that the maximum recoil probably fell around $v_{\rm kick} \simeq (250 \pm 150)$ km/sec, at least if no more than one body spins, and if the spin and orbit are aligned. [[@bqw05]]{} revisited this treatment, with a particular eye on the final plunge waves, using the pN methods outlined in Sec.. Their results were consistent with Favata et al., but reduced the uncertainty substantially, finding a maximum recoil $v_{\rm kick} \simeq (220 \pm 50)$ km/sec.
These numbers stood as the state-of-the-art in black hole recoil for several years, until numerical relativity’s breakthrough (Sec. [\[sec:nr\_breakthrough\]]{}) made it possible to study black hole mergers without any approximations. The Favata et al. and Blanchet et al. numbers turn out to agree quite well with predictions for the merger of non-spinning black holes; [[@gsbhh07]]{} find the maximum kick for non-spinning merger comes when $q = 0.38$, yielding $v_{\rm kick}
= (175 \pm 11)$ km/sec. When spin is unimportant, kicks appear to be no larger than a few hundred km/sec.
When spin [*is*]{} important, the kick can be substantially larger. Recent work by [[@ghsbh07]]{} and [[@clz07]]{} shows that when the holes have large spins and those spins are aligned just right (equal in magnitude, antiparallel to each other, and orthogonal to the orbital angular momentum), the recoil can be a few [*thousand*]{} km/sec. Detailed parameter exploration is needed to assess how much of this maximum is actually achieved; early work on this problem is finding that large kicks (many hundreds to a few thousand km/sec) can be achieved for various spin orientations as long as the spins are large (@tm07, @pollney_etal07); recent work by [@bkn08] shows how the recoil can depend on spin and spin orientation, suggesting a powerful way to organize the calculation to see how generic such large kicks actually are. That the maximum is so much higher than had been appreciated suggests that substantial recoils may be more common than the Favata et al. and Blanchet et al. calculations led us to expect [[@sb07]]{}.
Many recent papers have emphasized that kicks could have strong astrophysical implications, ranging from escape of the black hole from its host galaxy to shocks in material accreting onto the large black hole. The first possible detection of black hole recoil was recently announced [[@kzl08]]{}. As that claim is assessed, we anticipate much activity as groups continue to try to identify a signature of a recoiling black hole.
Astronomy with gravitational waves {#sec:gwastro}
==================================
Direct GW measurement is a major motivator for theorists seeking to understand how binary systems generate these waves. The major challenge one faces is that GWs are extremely weak; as we derive in detail in this section, a wave’s strain $h$ sets the change in length $\Delta L$ per length $L$ in the arms of a GW detector. Referring to Eq. (\[eq:h\_NQ\]), we now estimate typical amplitudes for binary sources: $$\begin{aligned}
h_{\rm amp} &\simeq& \frac{2G{\cal M}}{c^2 D}\left(\frac{\pi G {\cal
M} f}{c^3}\right)^{2/3}
\nonumber\\
&\simeq& 10^{-23}\,\left(\frac{M}{2.8\,M_\odot}\right)^{5/3}
\left(\frac{f}{100\,{\rm Hz}}\right)^{2/3} \left(\frac{200\,{\rm
Mpc}}{D}\right)
\nonumber\\
&\simeq& 10^{-19}\,\left(\frac{M}{2\times10^6\,M_\odot}\right)^{5/3}
\left(\frac{f}{10^{-3}\,{\rm Hz}}\right)^{2/3} \left(\frac{5\,{\rm
Gpc}}{D}\right)\;.
\label{eq:h_fiducial}\end{aligned}$$ On the last two lines, we have specialized to a binary whose members each have mass $M/2$, and have inserted fiducial numbers corresponding to targets for ground-based detectors (second line) and space-based detectors (third line).
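The fiducial amplitudes can be reproduced by direct substitution, using the equal-mass chirp mass ${\cal M} = (1/4)^{3/5} M$. A rough numerical sketch (constants to four significant figures):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
Mpc = 3.086e22      # m

def h_amp(M_total, f, D):
    """Leading-order strain for an equal-mass binary, members M_total/2 each."""
    Mc = 0.25**0.6 * M_total            # chirp mass for eta = 1/4
    return (2 * G * Mc / (c**2 * D)) * (math.pi * G * Mc * f / c**3)**(2 / 3)

# Fiducial ground-based source: M = 2.8 M_sun, f = 100 Hz, D = 200 Mpc.
print(h_amp(2.8 * M_sun, 100.0, 200 * Mpc))      # ~1e-23

# Fiducial space-based source: M = 2e6 M_sun, f = 1 mHz, D = 5 Gpc.
print(h_amp(2e6 * M_sun, 1e-3, 5000 * Mpc))      # ~1e-19
```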
In the remainder of this section, we summarize the principles behind modern interferometric GW detectors. Given that these principles are likely to be novel for much of the astronomical community, we present this material in some depth. Note that [@finn08] has recently flagged some important issues in the “standard” calculation of an interferometer’s response to GWs (which, however, do not change the final results). For the sake of brevity, we omit these issues and recommend his article for those wishing a deeper analysis. We then briefly describe existing and planned detectors, and describe how one measures a binary’s signal with these instruments. This last point highlights why theoretical modeling has been so strongly motivated by the development of these instruments.
Principles behind interferometric GW antennae {#sec:interferometers}
---------------------------------------------
As a simple limit, treat the spacetime in which our detector lives as flat plus a simple GW propagating down our coordinate system’s $z$-axis: $$ds^2 = -c^2dt^2 + (1 + h)dx^2 + (1 - h)dy^2 + dz^2\;,
\label{eq:detector_spacetime}$$ where $h = h(t - z)$. We neglect the influence of the earth (clearly important for terrestrial experiments) and the solar system (which dominates the spacetime of space-based detectors). Corrections describing these influences can easily be added to Eq.(\[eq:detector\_spacetime\]); we neglect them, as they represent influences that vary on much longer timescales than the GWs.
![Schematic of an interferometer that could be used to detect GWs. Though real interferometers are vastly more complicated, this interferometer topology contains enough detail to illustrate the principle by which such measurements are made.[]{data-label="fig:interf"}](interferometer.eps){width="5in"}
Figure [\[fig:interf\]]{} sketches an interferometer that can measure a GW. Begin by examining the geodesics describing the masses at the ends of the arms, and the beam splitter at the center. Take these objects to be initially at rest, so that $(dx^\mu/d\tau)_{\rm before}
\doteq (c,0,0,0)$. The GW shifts this velocity by an amount of order the wave strain: $(dx^\mu/d\tau)_{\rm after} = (dx^\mu/d\tau)_{\rm
before} + {\cal O}(h)$. Now examine the geodesic equation: $$\frac{d^2x^j}{d\tau^2} + {\Gamma^j}_{\alpha\beta}
\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0\;.$$ All components of the connection are ${\cal O}(h)$. Combining this with our argument for how the GW affects the various velocities, we have $$\frac{d^2x^j}{d\tau^2} + {\Gamma^j}_{00}
\frac{dx^0}{d\tau}\frac{dx^0}{d\tau} + {\cal O}(h^2) = 0\;.$$ Now, $${\Gamma^j}_{00} = \frac{1}{2}g^{jk}\left(\partial_0 g_{k0} +
\partial_0 g_{0k} - \partial_k g_{00}\right) = 0$$ as the relevant metric components are constant. Thus, $$\frac{d^2x^j}{d\tau^2} = 0 \;.$$ [*The test masses are unaccelerated to leading order in the GW amplitude $h$.*]{}
This seems to say that the GW has no impact on the masses. However, the geodesic equation describes motion [*with respect to a specified coordinate system*]{}. These coordinates are effectively “comoving” with the interferometer’s components. This is convenient, as the interferometer’s components remain fixed in our coordinates. Using this, we can show that the [*proper*]{} length of the arms does change. For instance, the $x$-arm has a proper length $$D_x = \int_0^L \sqrt{g_{xx}}\,dx = \int_0^L \sqrt{1 + h}\,dx \simeq
\int_0^L \left(1 + \frac{h}{2}\right)dx = L\left(1 +
\frac{h}{2}\right)\;.
\label{eq:proper_length}$$ Likewise, the $y$-arm has a proper length $D_y = L(1 - h/2)$.
This result tells us that the armlengths as measured by a ruler will vary with $h$. One might worry, though, that the ruler will vary, cancelling the measurement. This does not happen because rulers are not made of freely-falling particles: The elements of the ruler are [*bound*]{} to one another and act against the GW. The ruler will feel [*some*]{} effect from the GW, but it will be far smaller than the variation in the separation.
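For LIGO-like numbers, the proper length change implied by Eq. (\[eq:proper\_length\]) is strikingly small; a one-line estimate:

```python
# Change in proper armlength, D_x - L = L h / 2, for LIGO-like numbers.
L = 4.0e3        # arm length in meters
h = 1.0e-21      # typical strain amplitude

dL = L * h / 2
print(dL)        # 2e-18 m, far smaller than the radius of a proton
```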
The ruler used by the most sensitive current and planned detectors is based on laser interferometry. We now briefly outline how a GW imprints itself on the phase of the interferometer sketched in Fig. [\[fig:interf\]]{}. For further discussion, we recommend [@faraoni07]{}, as well as [@finn08] for a more detailed analysis. In the interferometer shown, laser light enters from the left, hits the beam splitter, travels down both the $x$- and $y$-arms, bounces back, and is recombined at the beam splitter. Begin with light in the $x$-arm, and compute the phase difference between light that has completed a round trip and light that is just entering. The phase of a wavefront is constant as it follows a ray through spacetime, so we write this difference as $$\Delta\Phi_x \equiv \Phi(T_{\rm round-trip}) - \Phi(0) = \omega_{\rm
proper} \times(\mbox{proper round-trip travel time})\;,
\label{eq:phase_change_schematic}$$ where $\omega_{\rm proper}$ is the laser frequency measured by an observer at the beam splitter. Detecting a GW is essentially precision timing, with the laser acting as our clock.
Consider first the proper frequency. Light energy as measured by some observer is $E = -\vec p\cdot\vec u$, where $\vec p$ is the light’s 4-momentum and $\vec u$ is the observer’s 4-velocity. We take this observer to be at rest, so $$E = -p_t = -g_{tt}p^t = p^t\;.$$ Put $p^\mu = \hat p^\mu + \delta p^\mu$, where $\hat p^\mu$ describes the light in the absence of a GW, and $\delta p^\mu$ is a GW shift. In the $x$-arm, $\hat p^\mu \doteq \hbar\omega(1,\pm1,0,0)$; the signs correspond to before and after bounce. We compute the shift $\delta
p^\mu$ using the geodesic equation (\[eq:geodesic\]): $$\frac{d\delta p^\mu}{d\chi} + {\Gamma^\mu}_{\alpha\beta}\hat p^\alpha
\hat p^\beta = 0\;.$$ (We use $d\hat p^\mu/d\chi = 0$ in the background to simplify.) Focus on the $\mu = t$ component. The only nontrivial connection is ${\Gamma^t}_{xx} = \partial_t h/2$. Using, in addition, the facts that $p^\mu = dx^\mu/d\chi$ and $\hat p^t = \pm \hat p^x$ reduces the geodesic equation to $$\frac{d\delta p^t}{dt} = -\frac{1}{2}\hat p^t \partial_t h\;.
\label{eq:nullgeod}$$ Integrating over a round trip, we find $$\delta p^t = -\frac{\hat p^t}{2}\left[h(T_{\rm round-trip}) - h(0)\right]\;.
\label{eq:deltapt}$$ So, we finally find the proper frequency: $$\begin{aligned}
\omega_{\rm proper} \equiv E/\hbar &=&\omega\left(1 +
\frac{1}{2}\left[h(0) - h(T_{\rm round-trip})\right]\right)
\nonumber\\ &\simeq& \omega\;.
\label{eq:omega_proper}\end{aligned}$$ On the second line of Eq. (\[eq:omega\_proper\]), we take $T_{\rm
round-trip}$ to be much smaller than the wave period. This is an excellent approximation for ground-based interferometers; the exact result must be used for high-frequency response in space.
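To see why the slow-wave approximation is excellent on the ground but not in space, compare round-trip light travel times to typical wave periods; a quick estimate (armlengths are representative of the detectors discussed later in this section):

```python
c = 2.998e8        # m/s

L = 4.0e3          # ground-based arm, m
T_rt = 2 * L / c
print(T_rt)        # ~2.7e-5 s, far shorter than the 10 ms period of a 100 Hz wave

L_space = 5.0e9    # space-based arm, m
T_rt_space = 2 * L_space / c
print(T_rt_space)  # ~33 s: comparable to target wave periods, so the exact result matters
```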
Turn next to the proper round-trip time. The metric (\[eq:detector\_spacetime\]) shows us that proper time measured at fixed coordinate is identical to the coordinate time $t$. For light traveling in the $x$-arm, $0 = -c^2dt^2 + (1 + h)dx^2$, so $$dx = \pm c\,dt\left(1 - \frac{h}{2}\right) + {\cal O}(h^2)\;.
\label{eq:dtdx}$$ Now integrate over $x$ from $0$ to $L$ and back, and over $t$ from $0$ to $T_{\rm round-trip}$: $$\begin{aligned}
2L &=& c T_{\rm round-trip} - \frac{c}{2}\int_0^{\rm T_{\rm
round-trip}}h\,dt
\nonumber\\
&=& c T_{\rm round-trip} - \frac{c}{2}\int_0^{2L/c}h\,dt + {\cal
O}(h^2)\;.\end{aligned}$$ We thus find $$\begin{aligned}
T_{\rm round-trip} &=& \frac{2L}{c} + \frac{1}{2}\int_0^{2L/c}h\,dt
\nonumber\\
&\simeq& \frac{2L}{c}\left(1 + \frac{1}{2}h\right)\;.
\label{eq:roundtrip}\end{aligned}$$ The second line describes a wave which barely changes during a round trip.
The total phase change is found by combining Eqs.(\[eq:phase\_change\_schematic\]), (\[eq:omega\_proper\]) and (\[eq:roundtrip\]): $$\begin{aligned}
\Delta\Phi_x &=& \omega\left(\frac{2L}{c} + \frac{1}{2}\int_0^{2L/c}h
dt\right)\left(1 + \frac{1}{2}\left[h(0) - h(2L/c)\right]\right)
\nonumber\\
&\simeq& \frac{2\omega L}{c}\left(1 + \frac{1}{2}h\right)\;.
\label{eq:DeltaPhi_x}\end{aligned}$$ The second line is for a slowly varying wave, and the first is exact to order $h$. We will use the slow limit in further calculations. Repeating for the $y$-arm yields $$\Delta\Phi_y = \frac{2\omega L}{c}\left(1 - \frac{1}{2}h\right)\;.$$ Notice that this GW acts [*antisymmetrically*]{} on the arms. By contrast, any laser phase noise will be [*symmetric*]{}: because the same laser state is sent into the arms by the beam splitter, we have $\Delta\Phi_x^{\rm Noise} = \Delta\Phi_y^{\rm Noise}$. We take advantage of this by reading out light produced by [*destructive*]{} interference at the beamsplitter: $$\begin{aligned}
\Delta\Phi^{\rm Read-out} &=& \Delta\Phi_x - \Delta\Phi_y
= \left(\Delta\Phi_x^{\rm GW} + \Delta\Phi_x^{\rm Noise}\right) -
\left(\Delta\Phi_y^{\rm GW} + \Delta\Phi_y^{\rm Noise}\right)
\nonumber\\
&=& 2\Delta\Phi_x^{\rm GW}\;.\end{aligned}$$ [*An L-shaped interferometer is sensitive only to the GW, not to laser phase noise.*]{} This is the major reason that this geometry is used; even if an incident wave is oriented such that the response of the arms to the GW is not asymmetric, one is guaranteed that phase noise will be cancelled by this configuration.
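Plugging representative numbers into $\Delta\Phi^{\rm Read-out} = 2\omega L h/c$ shows just how small the measured phase is; a quick estimate, assuming a 1064 nm laser and LIGO-like armlength:

```python
import math

c = 2.998e8          # m/s
L = 4.0e3            # arm length, m
lam = 1.064e-6       # assumed laser wavelength, m
h = 1.0e-21          # typical strain amplitude

omega = 2 * math.pi * c / lam    # laser angular frequency
dPhi = 2 * omega * L * h / c     # Delta Phi_x - Delta Phi_y

print(dPhi)   # ~5e-11 rad
```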
From basic principles we turn now to a brief discussion of current and planned detectors. Our goal is not an in-depth discussion, so we refer readers interested in these details to excellent reviews by @hr00 (which covers in detail the characteristics of the various detectors) and @td05 (which covers the interferometry used for space-based detectors).
Existing and planned detectors {#sec:detectors}
------------------------------
When thinking about GW detectors, a key characteristic is that the frequency of peak sensitivity scales inversely with armlength. The ground-based detectors currently in operation (and undergoing or about to undergo upgrades) are sensitive to waves oscillating at tens to thousands of hertz. Planned space-based detectors will have sensitivities at much lower frequencies, ranging from $10^{-4}$ – 0.1 Hz (corresponding to waves with periods of tens of seconds to hours).
The ground-based detectors currently in operation are LIGO ([*Laser Interferometer Gravitational-wave Observatory*]{}), with antennae in Hanford, Washington and Livingston, Louisiana; Virgo near Pisa, Italy; and GEO near Hannover, Germany. The LIGO interferometers each feature 4-kilometer arms, and have a peak sensitivity near 100 Hz. Virgo is similar, with 3-kilometer arms and sensitivity comparable to the LIGO detectors. GEO (or GEO600) has 600-meter arms; as such, its peak sensitivity is at higher frequencies than LIGO and Virgo. By using advanced interferometry techniques, it is able to achieve sensitivity competitive with the kilometer-scale instruments. All of these instruments will be upgraded over the course of the next few years, installing more powerful lasers, and reducing the impact of local ground vibrations. The sensitivity of LIGO should be improved by roughly a factor of ten, and the bandwidth increased as well. See [[@fritschel03]]{} for detailed discussion.
There are also plans to build additional kilometer-scale instruments. The detector AIGO ([*Australian International Gravitational Observatory*]{}) is planned as a detector very similar to LIGO and Virgo, but in Western Australia [[@mcc02]]{}. This location, far from the other major GW observatories, has great potential to improve the ability of the worldwide GW detector network to determine the characteristics of GW events [[@ssmf06]]{}. In particular, AIGO should be able to break degeneracies in angles that determine a source’s sky position and polarization, greatly adding to the astronomical value of GW observations. The Japanese GW community, building on their experience with the 300-meter TAMA interferometer, hopes to build a 3-kilometer [*underground*]{} instrument. Dubbed LCGT ([*Large-scale Cryogenic Gravitational-wave Telescope*]{}), the underground location takes advantage of the fact that local ground motions tend to decay fairly rapidly as we move away from the earth’s surface. They plan to use cryogenic cooling to reduce noise from thermal vibrations.
In space, the major project is LISA ([*Laser Interferometer Space Antenna*]{}), a 5-million kilometer interferometer under development as a joint NASA-ESA mission. LISA will consist of three spacecraft placed in orbits such that their relative positions form an equilateral triangle whose centroid lags the earth by $20^\circ$, and whose plane is inclined to the ecliptic by $60^\circ$; see Fig. [\[fig:lisa\_orb\]]{}. Because the spacecraft fly freely, they do not maintain this constellation precisely; however, the variations in armlength occur on a timescale far longer than the periods of their target waves, and so can be modeled out without too much difficulty. The review by [@td05]{} discusses in great detail how one does interferometry on such a baseline with time-changing armlengths. LISA is being designed to target waves with periods of several hours to several seconds, a particularly rich band for signals involving black holes that have $10^5\, M_\odot \lesssim M \lesssim 10^7\, M_\odot$; the LISA [*Pathfinder*]{}, a testbed for some of the mission’s components, is scheduled for launch in the very near future [[@vitale05]]{}.
![Schematic of the LISA constellation in orbit about the sun. Each arm of the triangle is $5\times10^6$ km; the centroid of the constellation lags the Earth by $20^\circ$, and its plane is inclined to the ecliptic by $60^\circ$. Note that the spacecraft orbit freely; there is no formation flying in the LISA configuration. Instead, each spacecraft is in a slightly eccentric, slightly inclined orbit; their individual motions preserve the near-equilateral triangle pattern with high accuracy for a timescale of decades.[]{data-label="fig:lisa_orb"}](lisa_orb.eps){width="5.3in"}
The Japanese GW community has also proposed DECIGO ([*DECI-hertz Gravitational-wave Observatory*]{}), a space antenna somewhat smaller than LISA that would target a band at roughly $0.1$ Hz. This straddles the peak sensitivities of LISA and terrestrial detectors, and may thus act as a bridge for signals that evolve from one band to the other. See [[@decigo]]{} for further discussion.
It is worth noting that, in addition to the laser interferometers discussed here, there have been proposals to measure GWs using atom interferometry. A particularly interesting proposal has been developed by [@dimo08]. Sources of noise in such experiments are quite different from those in laser interferometers, so atom interferometers may usefully complement the existing suite of detectors in future applications.
Measuring binary signals {#sec:measurement}
------------------------
The central principle guiding the measurement of GWs is that their weakness requires [*phase coherent*]{} signal measurement. This is similar to how one searches for a pulsar in radio or x-ray data. For pulsars, one models the signal as a sinusoid with a phenomenological model for frequency evolution: $$\Phi(t ; \Phi_0, f_0, \dot f_0, \ddot f_0, \ldots) = \Phi_0 +
2\pi\left(f_0 t + \frac{1}{2}\dot f_0 t^2 + \frac{1}{6}\ddot f_0 t^3 +
\ldots \right)\;.$$ The cross correlation of a model, $\cos[\Phi]$, with data is maximized when the parameters $(\Phi_0,f_0,\dot f_0,\ddot f_0,\ldots)$ accurately describe a signal’s phase. For a signal that is $N$ cycles long, it is not too difficult to show that the cross-correlation is enhanced by roughly $\sqrt{N}$ when the “template” matches the data.
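The $\sqrt{N}$ enhancement is easy to illustrate numerically. The sketch below is a toy model (not a search pipeline; the signal amplitude, sampling, and seed are all illustrative choices): it correlates a phase-matched cosine template against data containing a weak signal buried in white Gaussian noise.

```python
import numpy as np

def matched_filter_snr(n_cycles, amp=0.2, seed=1, samples_per_cycle=32):
    """Correlate a phase-matched template with noisy data (toy model).

    The correlation with a matching signal grows like N, while the
    correlation with white noise grows like sqrt(N), so the normalized
    statistic -- the SNR -- scales as sqrt(N).
    """
    rng = np.random.default_rng(seed)
    n = n_cycles * samples_per_cycle
    t = np.arange(n) / samples_per_cycle        # time in units of cycles
    template = np.cos(2 * np.pi * t)            # cos[Phi(t)] with constant f
    data = amp * template + rng.normal(size=n)  # weak signal buried in noise
    return np.dot(data, template) / np.sqrt(np.dot(template, template))

snrs = [matched_filter_snr(N) for N in (100, 400, 1600)]
# Quadrupling the number of cycles roughly doubles the recovered SNR.
```

Here the signal is twenty times weaker than the per-sample noise, yet it stands out clearly once enough cycles are tracked coherently.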
For binary GWs, one similarly cross-correlates models against data, looking for the template which maximizes the correlation. \[Given the signal weakness implied by Eq. (\[eq:h\_fiducial\]), the cross-correlation enhancement is sure to be crucial for measuring these signals in realistic noise.\] Imagine, for example, that the rule given by Eq. (\[eq:phi\_NQ\]) accurately described binary orbits over the band of our detectors. We would then expect a model based on $$\Phi(t;\Phi_c, t_c, {\cal M}) = \Phi_c -
\left[\frac{c^3(t_c - t)}{5G{\cal M}}\right]^{5/8}$$ to give a large correlation when the coalescence phase $\Phi_c$, coalescence time $t_c$, and chirp mass ${\cal M}$ are chosen well. As we have discussed at length, Eq. (\[eq:phi\_NQ\]) is not a good model for strong-field binaries. The need to faithfully track what nature throws at us has been a major motivation for the developments in perturbation theory, pN theory, and numerical relativity discussed here.
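As a concrete check of the quantities entering this template, the minimal sketch below evaluates the chirp mass and the leading-order phase for an illustrative $1.4 + 1.4\,M_\odot$ double neutron star (the constants and example values are assumptions for illustration only):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m s^-1
MSUN = 1.989e30   # kg

def chirp_mass(m1, m2):
    """M = (m1 m2)^{3/5} / (m1 + m2)^{1/5} (solar masses in, solar out)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def newtonian_phase(t, t_c, phi_c, mchirp_kg):
    """Phi(t) = Phi_c - [c^3 (t_c - t) / (5 G M)]^{5/8}."""
    return phi_c - (C ** 3 * (t_c - t) / (5 * G * mchirp_kg)) ** 0.625

mc = chirp_mass(1.4, 1.4)   # ~1.22 Msun for a double neutron star
phi = newtonian_phase(0.0, 100.0, 0.0, mc * MSUN)  # phase 100 s before merger
n_cycles = abs(phi) / (2 * math.pi)  # ~1900 cycles to track coherently
```

The roughly two thousand cycles accumulated in the last hundred seconds before merger are what make the phase such a sensitive probe of ${\cal M}$.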
When one determines that some set of parameters maximizes the correlation, that set is an estimator for the parameters of the binary. More formally, the cross-correlation defines a [*likelihood function*]{}, which gives the probability of measuring some set of parameters from the data [[@finn92]]{}. By examining how sharply the likelihood falls from this maximum, one can estimate how accurately data determines parameters. For large cross-correlation (large signal-to-noise ratio), this is simply related to how the wave models vary with parameters. Let $\theta^a$ represent the $a$-th parameter describing a waveform $h$. If $\langle h | h \rangle$ denotes the cross-correlation of $h$ with itself, then define the covariance matrix $$\Sigma^{ab} = \left(\left\langle\frac{\partial h}{\partial\theta^a}
\biggl| \frac{\partial h}{\partial\theta^b}\right\rangle\right)^{-1}
\label{eq:covariance_matrix}$$ (where the $-1$ power denotes matrix inverse). Square roots of the diagonal elements of this matrix give the $1\sigma$ parameter errors; off-diagonal elements describe parameter correlations. See [[@finn92]]{} for derivations and much more detailed discussion. In what follows, we use Eq. (\[eq:covariance\_matrix\]) to assess how model waveforms can be used to understand how well observations will determine the properties of GW-generating systems.
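In practice this matrix is often evaluated numerically. The sketch below is a toy version of Eq. (\[eq:covariance\_matrix\]) for a two-parameter sinusoid in white noise: the discretized inner product, finite-difference derivatives, and all numerical values are illustrative assumptions, not a production parameter-estimation code.

```python
import numpy as np

def fisher_covariance(model, theta, t, sigma=1.0, eps=1e-6):
    """Numerically build the matrix of inner products
    <dh/dtheta_a | dh/dtheta_b> and invert it. White noise of standard
    deviation `sigma` is assumed, so the inner product is discretized
    here as sum(a * b) / sigma**2."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for a in range(len(theta)):
        up, down = theta.copy(), theta.copy()
        up[a] += eps
        down[a] -= eps
        derivs.append((model(up, t) - model(down, t)) / (2 * eps))
    gamma = np.array([[np.dot(da, db) / sigma ** 2 for db in derivs]
                      for da in derivs])
    return np.linalg.inv(gamma)  # diagonal -> squared 1-sigma errors

# Toy two-parameter waveform: amplitude and frequency of a sinusoid.
model = lambda p, t: p[0] * np.cos(2 * np.pi * p[1] * t)
t = np.linspace(0.0, 10.0, 4000)
cov = fisher_covariance(model, [1.0, 1.5], t)
amp_err = float(np.sqrt(cov[0, 0]))  # forecast 1-sigma amplitude error
```

Even this toy example reproduces the expected behavior: the forecast amplitude error scales as $1/\sqrt{N}$ in the number of samples, and the off-diagonal elements quantify the amplitude–frequency correlation.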
What we learn by measuring binary GWs {#sec:observables}
-------------------------------------
### Overview
Given all that we have discussed, what can we learn by observing compact binary mergers in GWs? To set the context, consider again the “Newtonian” waveform: $$\begin{aligned}
h_+ &=& -\frac{2G{\cal M}}{c^2D}\left(\frac{\pi G{\cal
M}f}{c^3}\right)^{2/3}(1 + \cos^2\iota) \cos2\Phi_N(t)\;,
\nonumber\\
h_\times &=& -\frac{4G{\cal M}}{c^2D}\left(\frac{\pi G{\cal
M}f}{c^3}\right)^{2/3}\cos\iota \sin2\Phi_N(t)\;;
\nonumber\\
\Phi_N(t) &=& \Phi_c - \left[\frac{c^3(t_c - t)}{5G{\cal
M}}\right]^{5/8}\;.
\label{eq:h_NQ2}\end{aligned}$$ A given interferometer measures a combination of these two polarizations, with the weights set by the interferometer’s antenna response functions: $$h_{\rm meas} = F_+(\theta,\phi,\psi)h_+ +
F_\times(\theta,\phi,\psi)h_\times\;.
\label{eq:h_meas}$$ The angles $(\theta,\phi)$ give a source’s position on the sky; the angle $\psi$ (in combination with $\iota$) describes the orientation of the binary’s orbital plane with respect to a detector. See [@300yrs], Eqs. (103) – (104) for further discussion.
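To make Eqs. (\[eq:h\_NQ2\]) and (\[eq:h\_meas\]) concrete, the sketch below evaluates the two polarization amplitudes for an illustrative double-neutron-star chirp mass; the antenna responses $F_+$ and $F_\times$ are left as plain inputs, since their explicit forms are not reproduced here, and all numerical values are assumptions for illustration.

```python
import math

G, C = 6.674e-11, 2.998e8        # SI units
MSUN, MPC = 1.989e30, 3.086e22   # kg, m

def polarization_amplitudes(mchirp_msun, f_gw, d_mpc, iota):
    """Peak amplitudes of the two polarizations of the Newtonian
    quadrupole waveform:
      |h+| = (2 G Mc / c^2 D)(pi G Mc f / c^3)^{2/3} (1 + cos^2 iota),
      |hx| = (4 G Mc / c^2 D)(pi G Mc f / c^3)^{2/3} |cos iota|."""
    mc, d = mchirp_msun * MSUN, d_mpc * MPC
    v23 = (math.pi * G * mc * f_gw / C ** 3) ** (2.0 / 3.0)
    common = 2 * G * mc / (C ** 2 * d) * v23
    return common * (1 + math.cos(iota) ** 2), 2 * common * abs(math.cos(iota))

def h_measured(hp, hx, f_plus, f_cross):
    """A single interferometer sees one weighted combination."""
    return f_plus * hp + f_cross * hx

# Chirp mass ~1.22 Msun at 100 Mpc, f_gw = 100 Hz, face-on (iota = 0):
hp, hx = polarization_amplitudes(1.22, 100.0, 100.0, 0.0)
```

The resulting strain, a few times $10^{-23}$, gives a feel for the extraordinary sensitivity these measurements demand; note also that for a face-on binary the two polarization amplitudes coincide.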
Imagine that Eq. (\[eq:h\_NQ2\]) accurately described GWs in nature. By matching phase with the data, measurement of the GW would determine the chirp mass ${\cal M}$. Calculations using Eq.(\[eq:covariance\_matrix\]) to estimate measurement error [[@fc93]]{} show that ${\cal M}$ should be determined with exquisite accuracy, $\Delta{\cal M}/{\cal M} \propto 1/N_{\rm cyc}$, where $N_{\rm cyc}$ is the number of GW cycles measured in our band.
The amplitude of the signal is determined with an accuracy $\Delta{\cal A}/{\cal A} \sim 1/\mbox{SNR}$. This means that, for a given GW antenna, a combination of the angles $\theta$, $\phi$, $\iota$, $\psi$, and the source distance $D$ is measured with this precision; however, those parameters are [*not*]{} individually determined. A single interferometer cannot break the distance-angle correlations apparent in Eqs. (\[eq:h\_NQ2\]) and (\[eq:h\_meas\]). Multiple detectors (which will each have their own response functions $F_+$ and $F_\times$) are needed to measure these source characteristics. This is one reason that multiple detectors around the globe are being built. (For LISA, the constellation’s motion around the sun makes $F_+$ and $F_\times$ effectively time varying. The modulation imposed by this time variation means that the single LISA antenna can break these degeneracies, provided that a source is sufficiently long-lived for the antenna to complete a large fraction of an orbit.)
As we have discussed, Eq. (\[eq:h\_NQ2\]) does not give a good description of GWs from strong-field binaries. Effects which this “Newtonian gravity plus quadrupole waves” treatment misses come into play. Consider the pN phase function, Eq. (\[eq:2pnPhase\]). Not only does the chirp mass ${\cal M}$ influence the phase; so too does the binary’s reduced mass $\mu$ and its “spin-orbit” and “spin-spin” parameters $\beta$ and $\sigma$. The pN phasing thus encodes more detail about the binary’s characteristics. Unfortunately, these parameters may be highly correlated. For example, [@cf94] show that when precession is neglected, errors in a binary’s reduced mass and spin-orbit parameter are typically $90\%$ or more correlated with each other. This is because the time dependence of their contributions to the phase is not very different.
These correlations can be broken when we make our models more complete. As we’ve discussed, the spin precession equations (\[eq:dS1dt\]) and (\[eq:dS2dt\]) cause $\beta$ and $\sigma$ to oscillate. This modulates the waves’ phase; as first demonstrated by [@v04] and then examined in greater depth by [@lh06], the modulations break parameter degeneracies and improve our ability to characterize the system whose waves we measure. Figure (\[fig:mucomp\]), taken from [@lh06], shows this effect for LISA measurements of coalescing binary black holes. Accounting for precession improves the accuracy with which the reduced mass is measured by roughly two orders of magnitude. We similarly find that the members’ spins can be determined with excellent accuracy. GW measurements will be able to map the mass and spin distributions of coalescing binaries.
![Accuracy with which reduced mass $\mu$ is measured by LISA for binaries at $z = 1$ with masses $m_1 = 3 \times 10^5\,M_\odot$, $m_2 = 10^6\,M_\odot$. The two curves come from a Monte-Carlo simulation in which the sky is populated with $10^4$ binaries whose positions, orientations, and spins have been randomly chosen. Horizontal axis is the logarithmic error $\Delta\mu/\mu$; vertical axis is the number of binaries that fall in an error bin. The dashed line neglects spin precession effects; note that the distribution peaks at an error $\Delta\mu/\mu \simeq 0.03$. The solid line includes spin precession; note that the peak error is smaller by roughly two orders of magnitude.[]{data-label="fig:mucomp"}](mucomp.eps){width="5.3in"}
Equation (\[eq:h\_NQ2\]) is also deficient in that only the leading quadrupole harmonic is included. As the discussion in Sec. demonstrates, that is just one harmonic among many that contribute to a binary’s GWs. Recent work (@aissv07, @ts08, @pc08) has looked at how our ability to characterize a source improves when those “higher harmonics” are included. Typically, one finds that these harmonics improve our ability to determine a binary’s orientation $\iota$. This is largely because each harmonic has a slightly different functional dependence on $\iota$, so each encodes that information somewhat differently than the others. The unique functional dependence of each harmonic on $\iota$ in turn helps break degeneracies between that angle and the source distance $D$.
### “Bothrodesy”: Mapping black hole spacetimes {#sec:bothros}
Extreme mass ratio captures may allow a unique GW measurement: We may use them to “map” the spacetimes of large black holes and test how well they satisfy the (rather stringent) requirements of GR. As discussed in Sec. [\[sec:pert\]]{}, an extreme mass ratio inspiral is essentially a sequence of orbits. Thanks to the mass ratio, the small body moves through this sequence slowly, spending a lot of time “close to” any orbit in the sequence. Also thanks to the mass ratio, each orbit’s properties are mostly determined by the larger body. In analogy to [*geodesy*]{}, the mapping of earth’s gravity with satellite orbits, one can imagine [*bothrodesy*]{}[^5], the mapping of a black hole’s gravity by studying the orbits of inspiraling “satellites.”
In more detail, consider first Newtonian gravity. The exterior potential of a body of radius $R$ can be expanded in a set of multipole moments: $$\Phi_N = -\frac{GM}{r} + G\sum_{l = 2}^\infty
\left(\frac{R}{r}\right)^{l + 1} M_{lm} Y_{lm}(\theta,\phi)\;.
\label{eq:earth_pot}$$ Studying orbits allows us to map the potential $\Phi_N$, and thus to infer the moments $M_{lm}$. By enforcing Poisson’s equation in the interior, $\nabla^2\Phi_N = 4\pi G\rho$, and then matching at the surface $R$, one can relate the moments $M_{lm}$ to the distribution of matter. In this way, orbits allow us to map in detail the distribution of matter in a body like the earth.
Bothrodesy applies the same basic idea to a black hole. The spacetime of any stationary, axisymmetric body can be described by a set of “mass moments” $M_l$, similar to the $M_{lm}$ of Eq.(\[eq:earth\_pot\]); and a set of “current moments” $S_l$ which describe the distribution of mass-energy’s [*flow*]{}. What makes this test powerful is that the moments of a black hole take a simple form: for a Kerr black hole (\[eq:kerr\_metric\]) with mass $M$ and spin parameter $a$, $$M_l + i S_l = M(ia)^l\;.
\label{eq:kerr_moments}$$ A black hole has a mass moment $M_0 = M$ and a current moment $S_1 =
aM$ (i.e., the magnitude of its spin is $aM$, modulo factors of $G$ and $c$). [*Once those moments are known, all other moments are fixed if the Kerr solution describes the spacetime.*]{} This is a restatement of the “no hair” theorem [[@carter71; @robinson75]]{} that a black hole’s properties are set by its mass and spin.
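Equation (\[eq:kerr\_moments\]) is simple enough to evaluate directly. The minimal sketch below works in geometric units with illustrative mass and spin values:

```python
def kerr_moments(mass, spin, lmax=4):
    """Multipole moments of a Kerr black hole (geometric units, G = c = 1):
    M_l + i S_l = M (i a)^l.  Even l give the mass moments, odd l the
    current moments; everything beyond (M, a) is fixed by "no hair"."""
    return [mass * (1j * spin) ** l for l in range(lmax + 1)]

moments = kerr_moments(mass=1.0, spin=0.7)
# l=0: M0 = M.  l=1: S1 = a M (the spin).  l=2: M2 = -a^2 M, the
# quadrupole moment that a bothrodesy measurement would check.
```

The quadrupole $M_2 = -a^2 M$ is the first moment not fixed by the mass and spin separately, and is thus the natural first target of the consistency test described below.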
The fact that an object’s spacetime (and hence orbits in that spacetime) is determined by its multipoles, and that the Kerr moments take such a simple form, suggests a simple consistency test: Develop an algorithm for mapping a large object’s moments by studying orbits of that object, and check that the $l \ge 2$ moments satisfy Eq. (\[eq:kerr\_moments\]). [@ryan95] first demonstrated that such a measurement can be done in principle, and [@brink08] has recently clarified what is required to carry it out in practice. [@ch04] took the first steps in formulating this question as a null experiment (with the Schwarzschild solution as the null hypothesis). [@gb06] formulated a similar approach appropriate to Kerr black holes, and Vigeland (Vigeland & Hughes, in preparation) has recently extended the Collins & Hughes formalism in that direction.
A robust test of the Kerr solution is thus a very likely outcome of measuring waves from extreme mass ratio captures. If, however, testing metrics is not your cup of tea, precision black hole metrology may be: In the process of mapping a spacetime, one measures with exquisite accuracy both the mass and the spin of the large black hole. [@bc04] have found that in most cases these events will allow us to determine both of these quantities with $0.1\%$ errors or better.
### Binary inspiral as a standard “siren.” {#sec:siren}
A particularly exciting astronomical application of binary inspiral comes from the fact that the GWs depend on, and thus directly encode, distance to a source. Binary inspiral thus acts as a standard candle (or “standard siren,” so named because it is often useful to regard GWs as soundlike), with GR providing the standardization. [[@schutz86]]{} first demonstrated the power of GW observations of merging binaries to pin down the Hubble constant; [[@markovic93]]{} and [[@fc93]]{} analyzed Schutz’s argument in more detail, in addition assessing how well other cosmological parameters could be determined. More recently, [[@hh05]]{} have examined what can be done if a GW merger is accompanied by an “electromagnetic” counterpart of some kind. We now describe how inspiral waves can serve as a standard siren.
Imagine that we measure a nearby source, so that cosmological redshift can be neglected. The measured waveform generically has a form $$h = \frac{G M(m_i)}{c^2 r}{\cal A}(t)\cos\left[\Phi(t; m_i, {\bf
S}_i)\right]\;,
\label{eq:waveform_generic}$$ where $m_i$ are the binary’s masses, ${\bf S}_i$ are its spins, and $M(m_i)$ is a function of the masses with dimension mass. For example, for the Newtonian quadrupole waveform (\[eq:h\_NQ\]), this function is the chirp mass, $M(m_i) = {\cal M} = (m_1m_2)^{3/5}/(m_1 +
m_2)^{1/5}$. The function ${\cal A}(t)$ is a slowly varying, dimensionless function which depends most strongly on parameters such as the source inclination $\iota$.
Now place this source at a cosmological distance. Careful analysis shows that the naive Euclidean distance measure $r$ should be the [*proper motion distance*]{} $D_M$ (@carroll, Chap. 8); see, e.g., [[@fc93]]{} for a derivation. Also, all timescales which characterize the source will be redshifted: If $\tau$ is a timescale characterizing the source’s internal dynamics, $\tau \to (1 + z)\tau$.
What is the phase $\Phi$ for this cosmological binary? Because it evolves solely due to gravity, any parameter describing the binary’s dynamics enters as a timescale. For example, a mass parameter becomes a time: $m \to \tau_m \equiv Gm/c^3$. This time suffers cosmological redshift; the mass that we infer by measuring it is likewise redshifted: $m_{\rm meas} = (1 + z)m_{\rm local}$. Spin variables pick up a squared redshift factor: $S_{\rm meas} = (1 + z)^2 S_{\rm local}$. This tells us that redshift ends up [*degenerate*]{} with other parameters: A binary with masses $m_i$ and spins ${\bf S}_i$ at redshift $z$ has a phase evolution that looks just like a binary with $(1 + z)m_i$, $(1 + z)^2{\bf S}_i$ in the local universe. So, if we put our source at redshift $z$, Eq. (\[eq:waveform\_generic\]) becomes $$h = \frac{GM(m_i)}{c^2 D_M}{\cal A}(t)\cos\left[\Phi(t; (1 + z)m_i, (1 +
z)^2{\bf S}_i)\right]\;.
\label{eq:eq:waveform_generic_z1}$$ Recall that proper motion distance is related to luminosity distance by $D_M = D_L/(1 + z)$. Because we don’t measure masses but rather $(1 + z)$ times masses, it makes sense to adjust the amplitude and put $$h = \frac{G(1 + z)M(m_i)}{c^2 D_L}{\cal A}(t)\cos\left[\Phi(t; (1 +
z)m_i, (1 + z)^2{\bf S}_i)\right]\;.
\label{eq:eq:waveform_generic_z}$$ The key point here is that measurements [*directly encode the luminosity distance to a source*]{}, $D_L$; however, they do [*not*]{} tell us anything about a source’s redshift $z$. In this sense GW measurements of merging binaries can be distance probes that are highly [*complementary*]{} to most other astronomical distance measures. Indeed, analyses indicate that the distance should be measured to $\sim 10 - 20\%$ accuracy using ground-based instruments (e.g., @cf94), and to $\sim 1 - 5\%$ from space (@lh06, @aissv07, @ts08, @pc08).
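The mass–redshift degeneracy described above can be verified directly with the leading-order phase model (a minimal sketch; the numerical values are illustrative):

```python
TSUN = 4.925e-6   # G * Msun / c^3 in seconds

def phase(t_to_merger, mchirp_msun):
    """Phi - Phi_c = -[(t_c - t) / (5 tau)]^{5/8}, with tau = G Mc / c^3."""
    return -(t_to_merger / (5.0 * mchirp_msun * TSUN)) ** 0.625

z, mc, t_obs = 1.0, 1.22, 100.0   # illustrative values
# Source-frame clocks run (1 + z) slow, so an observer-frame interval
# t_obs corresponds to t_obs / (1 + z) at the source:
phase_at_z = phase(t_obs / (1 + z), mc)
# ...which is exactly the phase of a LOCAL binary with chirp mass (1+z)*Mc:
phase_degenerate = phase(t_obs, (1 + z) * mc)
```

The two phases agree identically, which is precisely why the waveform encodes $D_L$ but not $z$.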
Suppose that we measure GWs from a merging compact binary, allowing us to measure $D_L$ with this accuracy. [*If*]{} it is possible to measure the source’s redshift \[either from the statistical properties of the distribution of events, as emphasized by [@schutz86] and [@cf93], or by direct association with an “electromagnetic” event [@hh05]\], [*then*]{} one may be able to accurately determine both distance and redshift for that event — a potentially powerful constraint on the universe’s cosmography with completely different systematic properties than other standard candles. An example of an event which may constitute such a standard siren is a short-hard gamma-ray burst. Evidence has accumulated recently consistent with the hypothesis that at least some short-hard bursts are associated with NS-NS or NS-BH mergers (e.g., @fox05, @nakar06, @perley08). Near simultaneous measurement of a GW signal with a short-hard burst is a perfect example of what can be done as these detectors reach maturity and inaugurate GW astronomy.
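In the low-redshift limit, the standard-siren idea reduces to simple arithmetic. The sketch below is illustrative only (the event values are hypothetical, not a real measurement, and a real analysis would fold in the $\sim 10-20\%$ distance error and marginalize over inclination):

```python
C_KM_S = 2.998e5   # speed of light, km/s

def hubble_from_siren(z, d_lum_mpc):
    """Low-redshift limit of Schutz's argument: H0 ~ c z / D_L."""
    return C_KM_S * z / d_lum_mpc

# Hypothetical nearby merger with an electromagnetic counterpart
# (both numbers invented for illustration):
h0 = hubble_from_siren(z=0.01, d_lum_mpc=43.0)   # ~70 km/s/Mpc
```

A single well-localized event with a counterpart thus yields a Hubble-constant estimate whose systematics are entirely independent of the traditional distance ladder.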
Acknowledgments {#acknowledgments .unnumbered}
===============
I thank Daniel Kennefick for helpful discussion about the history of this field, Thomas Baumgarte and Stuart Shapiro for teaching me most of what I know about the foundations of numerical relativity, Vicky Kalogera and Fred Rasio for helping me untangle some of the literature on rate estimates for compact binary mergers, Alessandra Buonanno for providing figures and background for the material on the effective one-body approach, Plamen Fiziev for pointing out that Chandrasekhar’s massive tome develops Kerr perturbations in the language of metric variables, and Daniel Holz and Samaya Nissanke for providing particularly thorough comments on an early draft of this paper. Some of this material was presented at the 2008 Summer School in Cosmology at the Abdus Salam International Center for Theoretical Physics, in Trieste, Italy; I thank the organizers of that school for the invitation and for the opportunity to develop and organize this material. The work I have discussed here owes a particular debt to my collaborators Neil Cornish, Steve Drasco, Marc Favata, Éanna Flanagan, Joel Franklin, Daniel Holz, Gaurav Khanna, and Samaya Nissanke; as well as to current and former graduate students Nathan Collins, Ryan Lang, Stephen O’Sullivan, Pranesh Sundararajan, and Sarah Vigeland. Finally, I thank Deepto Chakrabarty for five years of teasing, which inspired me to insert the factors of $G$ and $c$ included in the equations here. My research in gravitational waves and compact binaries is supported by NSF Grant PHY-0449884 and NASA Grant NNX08AL42G. Some of the work discussed here was also supported by NASA Grant NNG05G105G and the MIT Class of 1956 Career Development Fund.
[^1]: The correct answer is now understood thanks to [@db60]: The charge [*does*]{} radiate, precisely reproducing the non-relativistic limit. The intuition that the charge follows a geodesic is not quite right. Though the charge “wants” to follow a geodesic, the charge [*plus its associated field*]{} is extended and nonlocal, and so cannot follow a geodesic. The bending of the charge’s field as it falls in spacetime enforces the laws of radiation emission.
[^2]: Other aspects of the members’ structure cannot be so simply absorbed by the principle of effacement. For example, at a certain order, the spins of a binary’s members impact its motion. Spin’s effects cannot be absorbed into the definition of mass, but affect the binary’s dynamics directly. See [[@th85]]{} for further detailed discussion.
[^3]: In the electromagnetic case, it is unambiguous which [*field*]{} components are radiative and which are static. Similarly, one can always tell which Riemann [*curvature*]{} components are radiative and which are static. [[@eddington22]]{} appears to have been the first to use the curvature tensor to categorize the gravitational degrees of freedom in this way.
[^4]: Since originally posting this paper, I was informed by Plamen Fiziev that [[@chandra83]]{} in fact develops equations to describe Kerr metric perturbations, noting that they are rather complicated and unwieldy, and are rarely used.
[^5]: This name was coined by Sterl Phinney, and comes from the word $\beta
o\theta\!\rho o\varsigma$, which refers to a sacrificial pit in ancient Greek. This author offers an apology to speakers of modern Greek.
---
author:
- 'Cristina Ramos Almeida$^{1,2}$ & Claudio Ricci$^{3,4,5}$[^1]'
title: Nuclear obscuration in active galactic nuclei
---
Over the past decades several pieces of observational evidence have shown that supermassive black holes (SMBHs; $M_{\rm\,BH}\sim 10^{6-9.5}M_{\odot}$) are found at the center of almost all massive galaxies, and that those SMBHs play an important role in the evolution of their host galaxies[@Kormendy:2013uf] during a phase in which they are accreting material and are observed as active galactic nuclei (AGN). Indeed, different modes of AGN feedback are expected to be key processes shaping the environment of SMBHs. In particular, quasar-induced outflows might be capable of regulating black hole and galaxy growth[@DiMatteo05]. For instance, they are required by semi-analytical models of galaxy formation for quenching star formation in massive galaxies[@Croton06]. However, directly studying the influence of nuclear activity on galaxy evolution is difficult because of the completely different timescales involved[@Hickox14; @Schawinski:2015cs]. [Therefore, to directly probe the AGN–host galaxy connection we need to look at the structure and kinematics of the parsec-scale dust and gas surrounding the accreting SMBHs.]{}
AGN radiate across the entire electromagnetic spectrum, from the radio [up to gamma-rays]{}. A large fraction of their emission is produced in the accretion disk and emitted in the optical and ultraviolet (UV) bands. [A significant proportion of these optical/UV photons are reprocessed i) by dust located beyond the sublimation radius and re-emitted in the infrared (IR), and ii) by a corona of hot electrons close to the accretion disk that up-scatters them in the X-ray band[@Haardt:1994bq] and illuminates the surrounding material. Thus, studying the IR and X-ray emission and absorption of AGN is key to characterizing their nuclear regions.]{}
AGN structure {#agn-structure .unnumbered}
=============
AGN are classified as type-1 and type-2 depending on the presence or not of broad components (full width at half maximum; FWHM$\geq$2000 km s$^{-1}$) in the permitted lines of their optical spectra. Those broad lines are produced in a sub-pc scale dust-free region known as the broad-line region (BLR). On the other hand the narrow lines (FWHM$<$1000 km s$^{-1}$) that are ubiquitous in the spectra of AGN –excluding beamed AGN– are produced in the narrow-line region (NLR). [In the case of moderately luminous AGN, such as Seyfert galaxies, the NLR extends from $\sim$10 pc to $\sim$1 kpc[@Capetti96]. ]{}
[The discovery of a highly polarized H$\alpha$ broad component in the radio galaxy 3C234 with position angle perpendicular to the radio axis[@Antonucci84] led to the development of the AGN unified model[@Antonucci93; @Urry95]. These observations can be explained if the central engine is surrounded by a dusty toroidal structure, dubbed the torus, which blocks the direct emission from the BLR in type-2 AGN, and scatters the photons producing the observed polarized spectra.]{} This toroidal structure, of 0.1–10 pc in size [as constrained from mid-IR (MIR) imaging[@Packham05; @Radomski08] and interferometry[@Burtscher13], and more recently from sub-millimeter observations[@Imanishi16; @Garcia16; @Gallimore16]]{}, also collimates the AGN radiation, hence producing the bi-conical shapes of their NLRs known as ionization cones[@Malkan98]. In summary, from the very center to host-galaxy scales the main AGN structures are the accretion disk and the corona, the BLR, the torus and the NLR, as shown in Figure \[fig1\].
![image](f1.eps){width="14cm"}
[Another structure inferred from radio observations is the sub-pc scale maser disk: a compact concentration of clouds orbiting the SMBH and emitting in the 22 GHz maser line[@Greenhill96]. The maser disk is generally assumed to be co-spatial with the torus, although it is not clear whether it corresponds to its innermost part or to a geometrically thin disk which inflates in the outer part[@Masini16]. ]{}
[X-ray emission is ubiquitous in AGN, and is produced in a compact source located within a few gravitational radii[@Zoghbi:2012jk] of the accretion disk.]{} The study of the reprocessed and absorbed X-ray radiation can provide important information on the structure and physical properties of the circumnuclear material. [The level of low-ionization absorption in the X-rays]{} is typically parametrized in terms of the line-of-sight column density ($N_{\rm\,H}$), and AGN are considered to be obscured if $N_{\rm\,H}\geq 10^{22}\rm\,cm^{-2}$. While obscuration strongly depletes the X-ray flux at $E<10$ keV due to the photo-electric effect, emission in the hard X-ray band ($E\gtrsim10\rm\,keV$) is less affected by obscuration. [Therefore, observations carried out using hard X-ray satellites such as [*NuSTAR*]{}, [*Swift*]{}/BAT, [*Suzaku*]{}/PIN, [*INTEGRAL*]{} IBIS/ISGRI, and [*BeppoSAX*]{}/PDS can probe even some of the most elusive accretion events.]{} Recent hard X-ray surveys have [contributed to significantly improving]{} our understanding of AGN obscuration, [showing that $\sim 70\%$ of all local AGN are obscured[@Burlon:2011dk; @Ricci:2015tg]]{}. [While nuclear obscuration is mostly associated with dust within the torus at IR wavelengths, it can also be related to dust-free gas in the case of the X-rays.]{} Indeed, it is likely that X-ray obscuration is produced by multiple absorbers on various spatial scales. [This might include dust beyond the sublimation radius, and dust-free gas within the BLR and the torus[@Risaliti07; @Maiolino:2010fu]]{}. This explains observations showing that, in general, the columns of material implied by the X-ray absorption are found to be comparable to or larger than those inferred from nuclear IR observations[@Ramos09; @Burtscher16].
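The strong energy dependence of the obscuration described above can be sketched with a toy transmission curve. The power-law photoelectric cross-section below is an order-of-magnitude stand-in (an assumption; real analyses use tabulated cross-sections such as those behind the phabs/tbabs spectral models):

```python
import math

SIGMA_T = 6.652e-25   # Thomson cross-section, cm^2

def transmission(nh_cm2, energy_kev):
    """Rough transmitted fraction exp(-N_H * sigma(E)) through a neutral
    absorber. The photoelectric cross-section is approximated by an
    illustrative power law, sigma_pe ~ 2e-22 (E / 1 keV)^-2.6 cm^2 per
    H atom; the energy-independent Thomson term dominates only for
    Compton-thick columns."""
    sigma_pe = 2e-22 * energy_kev ** -2.6
    return math.exp(-nh_cm2 * (sigma_pe + SIGMA_T))

nh = 1e23   # an obscured (but not Compton-thick) AGN
soft = transmission(nh, 1.0)    # ~1 keV: almost completely depleted
hard = transmission(nh, 20.0)   # ~20 keV: largely unaffected
```

For this column the flux at 1 keV is suppressed by many orders of magnitude while more than 90% of the 20 keV flux survives, which is why hard X-ray satellites can probe such obscured sources.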
Early X-ray studies revealed that, while most type-1 AGN are unobscured, type-2 AGN are usually obscured[@Awaki:1991rw], supporting the unification model. [A clear example is NGC1068, the archetypal type-2 AGN, which has been shown to be obscured by material optically-thick to photon-electron scattering (Compton-thick or CT, $N_{\rm\,H}\geq 1.5\times 10^{24}\rm\,cm^{-2}$), which depletes most of the X-ray flux[@Matt:1997qy; @Bauer:2015si].]{} [Nevertheless, for some objects that show no broad optical lines, no X-ray obscuration has been found[@Panessa:2002if].]{} [Interestingly, many of these objects have low accretion rates, which would be unable to sustain the dynamical obscuring environment (i.e., the BLR and the torus) observed in typical AGN[@Nicastro:2000cq; @Elitzur:2009hh], explaining the lack of X-ray obscuration and broad optical lines.]{} On the other hand, studies of larger samples of objects have reported tantalizing evidence of a significant AGN population showing broad optical lines and column densities $N_{\rm\,H}\geq 10^{21.5}\rm\,cm^{-2}$ in the X-rays[@Merloni:2014wq]. [This has been explained by considering that some obscuration is related to dust-free gas within the sublimation region associated with the BLR[@Davies:2015rw].]{}
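The Compton-thick threshold quoted above is easy to recover: it is simply the column density at which the line of sight reaches unit optical depth to electron scattering (a back-of-the-envelope check, neglecting the mild energy dependence of the Klein–Nishina cross-section):

```python
SIGMA_T = 6.652e-25   # Thomson cross-section, cm^2

# tau = N_H * sigma_T = 1 defines the Compton-thick boundary:
nh_compton_thick = 1.0 / SIGMA_T   # ~1.5e24 cm^-2, as quoted in the text
```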
[The boundary between the BLR and the torus]{} is set by the dust sublimation temperature. The sublimation region has been resolved[@Kishimoto09; @Weigelt12] in the near-IR (NIR) using the Very Large Telescope Interferometer (VLTI) and the Keck Interferometer. [From these interferometric observations it has been found that the inner torus radius scales with the AGN luminosity[@Kishimoto11] as $r\propto L^{1/2}$, as previously inferred from optical-to-IR time lag observations[@Suganuma06] and also from theoretical considerations[@Barvainis87].]{} [The torus radiates the bulk of its energy at MIR wavelengths, although recent interferometry results might complicate this scenario[@Honig12; @Honig13; @Lopez16]. From both IR and X-ray observations it has been shown that the nuclear dust is distributed in clumps[@Ramos09; @Markowitz:2014oq], and further constraints on the torus size and geometry have been provided by MIR interferometry[@Burtscher13; @Lopez16]. The MIR-emitting dust is compact, and sometimes it appears not as a single component but as two or three[@Tristram14].]{} Thanks to the unprecedented angular resolution afforded by the Atacama Large Millimeter/submillimeter Array (ALMA), [recent observations have, for the first time, imaged the dust emission, the molecular gas distribution, and the kinematics from a 7–10 pc diameter disk that represents the sub-mm counterpart of the putative torus of NGC1068[@Imanishi16; @Garcia16; @Gallimore16] (see Figure \[alma\]). As the sub-mm range probes the coolest dust within the torus, this molecular/dusty disk is about twice as large as the warmer compact MIR sources detected by the VLTI in the nucleus of NGC1068[@Lopez14] and the pc-scale ionized gas and maser disks imaged in the mm regime[@Gallimore96; @Gallimore97], which correspond to the innermost part of the torus.
The highest angular resolution ALMA images available to date (0.07$\times$0.05 arcsec) reveal a compact molecular gas distribution showing non-circular motions and enhanced turbulence superposed on the slow rotation pattern of the disk[@Garcia16]. This is confirmed by deeper ALMA observations at the same frequency[@Gallimore16], which make it possible to disentangle the low-velocity compact CO emission ($\pm$70 km s$^{-1}$ relative to the systemic velocity) from the higher-velocity CO emission ($\pm$400 km s$^{-1}$), which the authors interpreted as a bipolar outflow almost perpendicular to the disk.]{}
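The $r\propto L^{1/2}$ sublimation scaling discussed above is straightforward to apply. In the sketch below the normalization ($\sim$0.4 pc at $10^{45}$ erg s$^{-1}$ for dust sublimating near $\sim$1500 K) is an assumed representative value, since the exact prefactor depends on grain composition and size:

```python
def inner_torus_radius_pc(l_bol_erg_s, r0_pc=0.4, l0_erg_s=1e45):
    """r = r0 * (L / L0)^{1/2}: dust sublimation radius scaling.
    The normalization r0 at L0 is an assumed representative value."""
    return r0_pc * (l_bol_erg_s / l0_erg_s) ** 0.5

r_seyfert = inner_torus_radius_pc(1e44)   # ~0.13 pc for a Seyfert
r_quasar = inner_torus_radius_pc(1e46)    # ~1.3 pc for a luminous quasar
```

A factor of 100 in luminosity thus moves the inner torus edge outward by a factor of 10, consistent with the sub-pc to few-pc torus sizes quoted earlier.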
Furthermore, from the left panel of Figure \[alma\] it is clear that the torus is not an isolated structure. Instead, it is connected physically and dynamically with the circumnuclear disk (CND) of the galaxy[@Garcia16] ($\sim$300 pc$\times$200 pc). Indeed, previous NIR integral field spectroscopy data of NGC1068 revealed molecular gas streams from the CND into the nucleus[@Muller09]. CNDs appear to be ubiquitous in nearby AGN and they constitute the molecular gas reservoirs of accreting SMBHs[@Hicks13].
![image](figure-ngc1068-review-cris.eps){width="16cm"}
Dust and gas spectral properties {#dust-and-gas-spectral-properties .unnumbered}
================================
X-ray tracers of circumnuclear material. {#x-ray-tracers-of-circunmnuclear-material. .unnumbered}
-----------------------------------------
The X-ray emitting plasma irradiates the surrounding material, giving rise to several [*reflection*]{} features, the most important of which are the FeK$\alpha$ line at 6.4keV and a Compton [*hump*]{} that peaks at $\sim 30\rm\,keV$[@Matt:1991ly]. While the FeK$\alpha$ line can be produced by material with column densities as low as $N_{\rm\,H}\simeq 10^{21-23}\rm\,cm^{-2}$, the Compton hump can only be created by the reprocessing of X-ray photons in CT material. The Compton hump is a common feature in the X-ray spectra of AGN, showing that CT material is almost omnipresent in AGN. It is, however, still unclear what fraction of the Compton hump arises [from the accretion disk and what fraction from material associated with the BLR or the torus.]{}
The narrow FeK$\alpha$ line (FWHM$\simeq 2000\rm\,km\,s^{-1}$)[@Shu:2010tg] is an almost ubiquitous feature in the X-ray spectra of AGN[@Nandra:1994ly], and its energy is consistent with this feature originating in weakly ionised material[@Shu:2010tg]. Its origin is still under discussion, and it could be related to the torus[@Shu:2010tg], to the BLR[@Bianchi:2008sf], or to an intermediate region between the two[@Gandhi:2015zp]. [The flux of the narrow FeK$\alpha$ line (compared to the intrinsic X-ray flux) is generally weaker in type-2 AGN than in type-1, and it is depleted in CT AGN with respect to less obscured AGN[@Ricci:2014ek].]{} This would be in agreement with the idea that the circumnuclear material is axisymmetric, as predicted by the unified model, and points to the torus or its immediate surroundings as the region where the bulk of this line is produced. In CT material some of the FeK$\alpha$ photons are bound to be down-scattered, giving rise to the Compton shoulder. While the shape of this feature carries important information on the geometry and physical characteristics of the material surrounding the SMBH[@Matt:2002eu], the spectral resolution of current facilities has not permitted detailed studies of it.
Infrared tracers of circumnuclear material. {#infrared-tracers-of-circunmnuclear-material. .unnumbered}
--------------------------------------------
High angular resolution observations obtained with ground-based 8–10 m-class telescopes and with the [*Hubble Space Telescope*]{} have been fundamental to characterize the nuclear IR spectral energy distributions (SEDs) of AGN[@Alonso03; @Ramos09; @Prieto10; @Asmus14]. In general, while the subarcsecond resolution NIR SEDs (1–8 $\mu$m) of nearby type-1 AGN are [bluer]{} than those of type-2s, the MIR slope (8–20 $\mu$m) is practically identical for the two types[@Levenson09; @Prieto10; @Ramos11; @Asmus14], indicating that the MIR emission is more isotropic than expected from a smooth torus[@Pier92; @Pier93]. [The wavelength dependence of the IR anisotropy has also been studied at higher redshift using an isotropically selected sample of quasars and radio galaxies[@Honig11]. Longward of 12 $\mu$m the anisotropy is very weak, and the emission becomes practically isotropic at 15 $\mu$m.]{} This weak MIR anisotropy explains the strong 1:1 correlation between the MIR and hard X-ray luminosities found for both type-1 and type-2 AGN[@Krabbe01; @Lutz:2004gf; @Asmus15].
Another MIR spectral characteristic used to study nuclear obscuration and commonly associated with the torus is the 9.7$\mu$m silicate feature. It generally appears in emission in type-1 AGN and in absorption in type-2 AGN, although there are exceptions[@Roche91; @Mason09]. The amount of extinction that can be inferred from the silicate feature strength correlates, although with large scatter, with the column densities derived from the X-rays[@Shi06]. [In general,]{} large columns correspond to silicate absorption and small columns to silicate emission. High angular resolution MIR spectroscopy of face-on and isolated AGN has [revealed shallow silicate features in type-2 AGN[@Roche06; @Alonso16]. A clumpy distribution of dust naturally produces these shallow absorption features, but another interpretation of this and other “anomalous” dust properties in AGN, such as the reduced E$_{B-V}$/N$_H$ and A$_V$/N$_H$ ratios, is a dust distribution dominated by large grains[@Maiolino01].]{}
Torus models {#torus-models .unnumbered}
============
[As a result of the small size of the torus, neither ground-based single-dish telescopes nor X-ray satellites are able to resolve it, even in the nearest AGN.]{} As a consequence, different sets of IR and X-ray torus models have been developed aiming to reproduce the observed SEDs and to put indirect constraints on the torus properties. Pioneering work modelling the dusty torus[@Pier92; @Pier93] in the IR assumed a uniform dust density distribution for the sake of simplicity. However, it was known from the very beginning that a smooth dust distribution cannot survive in the hostile AGN vicinity[@Krolik88]. Instead, the dust has to be arranged in dense and compact clumps. Observationally, X-ray variability studies have provided further support for a clumpy distribution of the obscuring material[@Markowitz:2014oq; @Marinucci:2016eu].
In order to solve the discrepancies between IR observations and the first smooth torus models (e.g. shallow silicate features, relatively [blue]{} IR SEDs in type-2 AGN, and small torus sizes), more sophisticated models have been developed over the last decade. Roughly, two different sets of models can be distinguished. On the one hand, [*physical models*]{} aim to consider processes such as AGN and supernovae feedback, inflowing material, and disk maintenance[@Schartmann08; @Wada02; @Wada12]. On the other hand, [*geometrical/ad-hoc models*]{} attempt to reproduce the IR SED by assuming a certain geometry and dust composition[@Nenkova08a; @Nenkova08b; @Honig10; @Stalevski12; @Siebenmorgen15]. The two types of models have advantages and disadvantages. Physical models are [potentially more realistic, but they are more difficult to compare with observations, and they generally have to assume extreme conditions to work, such as very massive star clusters or disks, or combine multiple effects like star formation, feedback, and radiation with high Eddington ratios. However, much progress has been made since the first physical torus models were developed, and many of these problems are currently being solved.]{} On the other hand, geometrical models can be easily compared with observations, but they face the problem of large degeneracies and dynamical instability. Nevertheless, much has been learned in recent years from comparing models and observations, and what is important is to be aware of the model limitations when interpreting the results[@Feltre12].
Geometrical torus models are particularly useful for performing statistical analyses [using galaxy samples and, for example, deriving trends in the torus parameters between type-1 and type-2 AGN. This can be done by evaluating the joint posterior distributions using Bayesian analysis[@Ramos11], or a hierarchical Bayesian approach to derive information about the global distribution of the torus parameters for a given subgroup[@Ichikawa15]. Individual source fitting with geometrical torus models should be used when additional constraints from observables such as the ionization cone opening angle and/or orientation are considered as a priori information in the fit.]{} In particular, clumpy torus models have made significant progress in accounting for the IR emission of different AGN samples[@Mor09; @Ramos09; @Honig10; @Alonso11; @Lira13]. Examples of torus parameters that can be derived from SED modeling and compared with independent observations include the torus width ($\sigma$), which is the complement of the ionization cone half-opening angle; the torus outer radius (R$_o$), which can be compared with interferometry constraints; and the covering factor, which depends on the number of clumps (N$_0$) and $\sigma$ (see Figure \[torus\]). These IR covering factors can be compared with those derived from the modeling of X-ray spectra (see next section for further details).
![image](torus_cf.eps){width="16cm"}
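The dependence of the covering factor on N$_0$ and $\sigma$ can be sketched numerically. The snippet below is an illustrative toy calculation, not any published fitting code: it assumes a Gaussian angular cloud distribution in the spirit of the clumpy torus models, with a mean of $N_{\rm LOS}(\beta)=N_0\,e^{-(\beta/\sigma)^2}$ clouds along a sightline at angle $\beta$ from the equatorial plane and a Poisson photon escape probability $e^{-N_{\rm LOS}}$; the covering factor is then one minus the sky-averaged escape probability.

```python
import numpy as np

def covering_factor(n0, sigma_deg, n_grid=20001):
    """Geometrical covering factor of a clumpy torus whose clouds follow a
    Gaussian angular distribution (toy version of the clumpy torus picture).

    n0        : mean number of clouds along an equatorial line of sight (N_0)
    sigma_deg : torus angular width sigma, in degrees
    """
    beta = np.linspace(0.0, np.pi / 2.0, n_grid)   # angle from the equatorial plane
    sigma = np.radians(sigma_deg)
    n_los = n0 * np.exp(-(beta / sigma) ** 2)      # mean clouds along each sightline
    p_esc = np.exp(-n_los)                         # Poisson photon escape probability
    # Fraction of the sky, as seen from the SMBH, blocked by at least one cloud:
    integrand = p_esc * np.cos(beta)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(beta))
    return 1.0 - integral

# Wider, more populated tori cover more of the sky (type-2-like configurations):
f_type2_like = covering_factor(n0=8.0, sigma_deg=60.0)
f_type1_like = covering_factor(n0=4.0, sigma_deg=30.0)
```

Increasing either $N_0$ or $\sigma$ raises the covering factor, and hence the probability of observing the source as a type-2.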
It is worth noting that we do not only learn from what the models can fit, but also from what they cannot fit. For example, the NIR SEDs of some type-1 Seyferts and Palomar-Green quasars (PG quasars) show a $\sim$3 $\mu$m bump that cannot be reproduced with clumpy torus models only, revealing either the presence of nuclear hot dust that is not accounted for in the models or NLR flux contaminating the nuclear measurements[@Mor09; @Alonso11]. Note, however, that in the more sophisticated two-phase torus models[@Stalevski12], the low-density diffuse interclump dust accounts for the NIR excess in some cases[@Lira13; @Roseboom13]. [The NIR bump is also reproduced by recently available radiative transfer models[@Honig17] that assume an inflowing disk, which is responsible for the NIR peak, and an outflowing wind that produces the bulk of the MIR emission. Although successful in reproducing recent MIDI interferometric observations of nearby AGN[@Lopez16], the number of free parameters is even larger than in clumpy torus models.]{}
Another example of IR SEDs that clumpy models cannot reproduce is provided by those of [low-luminosity AGN (LLAGN) with L$_{\rm\,bol} < 10^{42}$ erg s$^{-1}$, which show a [bluer MIR spectrum (5–35 $\mu$m)]{} than those of LLAGN with L$_{\rm\,bol} \ge 10^{42}$ erg s$^{-1}$]{} [@Gonzalez15]. This could indicate that the torus disappears at low bolometric luminosities[@Elitzur:2009hh].
In the X-rays, torus spectral models are calculated from Monte Carlo simulations of reprocessed and absorbed X-ray radiation[@Ikeda:2009hb; @Murphy:2009hb; @Brightman:2011fe; @Liu:2014ff; @Furui:2016qf], and currently consider simpler geometries than the IR models. Typical parameters obtained from these models are the column density of the torus, its covering factor, and the inclination angle of the system. [The two most commonly used models adopt homogeneous toroidal[@Murphy:2009hb; @Brightman:2011fe] geometries,]{} and are used to study the most heavily obscured sources, for which the obscuring material acts as a sort of coronagraph, making it possible to clearly observe the reprocessed X-ray radiation. These models have allowed significantly improved constraints on the properties of the most obscured AGN[@Balokovic:2014dq; @Annuar:2015wd; @Koss:2016fv], and in some cases have made it possible to separate the characteristics of the material responsible for the reprocessed emission from those of the obscurer[@Yaqoob:2012wu; @Bauer:2015si].
Covering factor of the obscuring material {#covering .unnumbered}
=========================================
The covering factor is the fraction of the sky covered by the obscuring material, as seen from the accreting SMBH, and it is one of the main elements regulating the intensity of the reprocessed X-ray and IR radiation. In the last decade, different trends with luminosity and redshift have been found by studying AGN samples selected at different wavelengths.
Both in the IR and X-rays the covering factor can be inferred from spectral modelling, as outlined in the previous section. Two additional methods are often used: i) in the IR, the ratio between the MIR and the AGN bolometric luminosity is used as a proxy of the [torus reprocessing efficiency. The fraction of the optical/UV and X-ray radiation reprocessed by the torus and observed in the MIR is proportional to its covering factor.]{} ii) In the X-rays, the covering factor of the gas and dust surrounding the SMBH can be estimated using a statistical argument, by studying the absorption properties of large samples of AGN. The compactness of the X-ray corona implies that the value of the column density obtained from X-ray spectroscopy of individual objects provides information only along a single line-of-sight. Studying large samples of objects [allows us to probe]{} random inclination angles, therefore providing a better understanding of the average characteristics of the obscuring material. In fact, the probability of seeing an AGN as obscured is proportional to the covering factor of the gas and dust. Therefore the fraction of sources with column density within a certain range can be used as a proxy of the mean covering factor within that $N_{\rm\,H}$ interval.
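This statistical argument can be illustrated with a toy Monte Carlo (an illustrative sketch under simplified assumptions, not a published analysis): if the obscuring material formed a uniform equatorial band of angular half-thickness $\beta_{\max}$ and AGN were oriented randomly, the observed obscured fraction would converge to the covering factor of the band, $\sin\beta_{\max}$.

```python
import numpy as np

rng = np.random.default_rng(42)

def obscured_fraction_mc(beta_max_deg, n=200_000):
    """Monte Carlo: fraction of randomly oriented sightlines that intercept
    an equatorial obscuring band of angular half-thickness beta_max (degrees)."""
    # Isotropic viewing angles: cos(i) uniform in [0, 1), i measured from the pole.
    cos_i = rng.uniform(0.0, 1.0, n)
    # A sightline is obscured when its angle above the equator (90 - i) is
    # smaller than beta_max, i.e. when cos(i) < sin(beta_max).
    return np.mean(cos_i < np.sin(np.radians(beta_max_deg)))

beta_max = 45.0
mc_fraction = obscured_fraction_mc(beta_max)          # observed obscured fraction
analytic = np.sin(np.radians(beta_max))               # covering factor of the band
```

With enough randomly oriented sources, `mc_fraction` approaches `analytic`, which is why the $N_{\rm\,H}$ distribution of a large unbiased sample traces the mean covering factor.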
The left panel of Figure\[fig:NHdistribution\] shows the intrinsic column density distribution of local AGN selected in the hard X-ray band and corrected for selection effects. Following the statistical argument outlined above, the $N_{\rm\,H}$ distribution provides important insights on the average structure of the gas and dust, and it can be used to infer the covering factors of different layers of obscuring material, assuming that the column density increases for larger inclination angles (see right panel of Figure\[fig:NHdistribution\]). The existence of a significant population of AGN observed through CT column densities was suggested by early X-ray observations of nearby AGN[@Risaliti:1999dw], which found that some of the nearest objects (Circinus, NGC1068, and NGC4945) are obscured by column densities $\geq 10^{24}\rm\,cm^{-2}$. Recent hard X-ray studies have shown that $\sim 20-27\%$ of all AGN in the local Universe are CT [@Burlon:2011dk; @Ricci:2015tg], and a similar percentage is found also at higher redshift [@Ueda:2014ix; @Brightman:2014zp; @Buchner:2015ve; @Lanzuisi:2015qr].
Studies of X-ray selected samples of AGN have shown that the fraction of obscured Compton-thin ($N_{\rm\,H}=10^{22-24}\rm\,cm^{-2}$) sources decreases with luminosity[@La-Franca:2005kl; @Akylas:2006gd; @Burlon:2011dk; @Merloni:2014wq; @Ueda:2014ix] (see Figure\[fig:fobs\_L\]). A similar behavior has been observed for the Compton-thick material in a large sample of [*Swift*]{}/BAT AGN[@Ricci:2015tg], as well as from the parameters derived from the broad-band X-ray spectroscopic analysis of a sample of local CT AGN using a physical torus model[@Brightman:2015fv]. This trend has been interpreted as being due to the decrease of the covering factor of the obscuring material with luminosity, and it has also been reported by several studies carried out in the IR using the ratio between the MIR and the bolometric luminosity[@Maiolino:2007ye; @Treister:2008kc; @Lusso:2013vf; @Stalevski:2016hl]. Furthermore, the fraction of obscured AGN at a given X-ray luminosity also increases with redshift[@Ueda:2014ix], suggesting that the circumnuclear material of AGN might also evolve with cosmic time. Nevertheless, recent works carried out in the IR[@Stalevski:2016hl] and X-rays[@Sazonov:2015ys] have argued that if the anisotropy of the circumnuclear material is properly accounted for, the decrease of the covering factor with luminosity would be significantly weaker.
The intensity of the reprocessed X-ray radiation depends on the covering factor of the obscuring material, which would lead one to expect a direct connection between reflection features and X-ray luminosity. While the relation between the Compton hump and the luminosity is still unclear, a decrease of the equivalent width of the FeK$\alpha$ line with luminosity (i.e. the X-ray Baldwin effect[@Iwasawa:1993ez]) has been observed in both unobscured[@Bianchi:2007os; @Shu:2010tg] and obscured[@Ricci:2014ek] AGN. Moreover, the slope of the X-ray Baldwin effect can be reproduced by the relation between the covering factor and the luminosity found by X-ray studies[@Ricci:2013hi], also suggesting a link between the two trends.
The relation between the obscuring material and the AGN luminosity has often been explained as a form of [*feedback*]{}, with the radiation field of the AGN cleaning out its circumnuclear environment[@Fabian:2006sp]. Interestingly, the decrease of the covering factor with luminosity does not extend to the highest bolometric luminosities ($10^{46-48}\rm\,erg\,s^{-1}$), where about half of the AGN population seems to be obscured[@Assef:2015ly]. We note that these obscured AGN are not necessarily type-2 AGN in the optical range. In the low-luminosity regime, evidence for a decrease of the fraction of obscured sources at X-ray luminosities $<10^{42}\rm\,erg\,s^{-1}$ has been observed[@Burlon:2011dk] (Figure\[fig:fobs\_L\]). This result is supported by MIR observations of nearby AGN, which claim that the torus disappears at luminosities below the aforementioned limit[@Gonzalez15]. As discussed in previous sections, these results can be explained if low-luminosity AGN fail to sustain the AGN internal structures[@Elitzur:2006ec]. [This idea would also explain the observed decrease of the FeK$\alpha$ intensity with respect to the X-ray flux at low luminosities found using [*Suzaku*]{}[@Kawamuro:2016lq].]{}
![[Evolution of the covering factor of the obscuring material with luminosity.]{} Relation between the fraction of obscured Compton-thin sources ($10^{22}<N_{\rm\,H}<10^{24}\rm\,cm^{-2}$) and the 14–195keV luminosity for AGN detected by [*Swift*]{}/BAT[@Burlon:2011dk]. []{data-label="fig:fobs_L"}](f16.eps){width="8.5cm"}
The modelling of IR SEDs has also provided important insights on the unification scheme. [Using clumpy torus models, it has been claimed]{} that the covering factors of type-2 tori are larger (i.e., more clumps and broader tori) than those of type-1 AGN[@Ramos11; @Ichikawa15; @Alonso11; @Mateos16]. [This would imply,]{} first, that the observed differences between type-1 and type-2 AGN are not due to orientation effects only, as proposed in the simplest unification model, but also to [the dust covering factor]{}. Second, that the torus is not identical for all AGN of the same luminosity. Therefore, the classification of an AGN as type-1 or type-2 is probabilistic[@Elitzur12] (see Figure \[torus\]): the larger the covering factor, the smaller the escape probability of an AGN-produced photon. Although these results are model-dependent, they reflect the observed differences between the IR SEDs of type-1 and type-2 AGN. It is noteworthy that in the radio-loud regime it has been found that the radio core dominance parameter (the ratio of pc-scale jet to radio-lobe emission) agrees with the optical classification of a subsample of 3CR radio galaxies at z$\geq$1[@Marin2016]. This would indicate a small dispersion in torus opening angles and inclinations for luminous type-1 and type-2 radio-loud AGN. Unfortunately, this analysis is not possible in radio-quiet AGN.
While models that simultaneously reproduce both the X-ray and MIR spectral properties of the reprocessed radiation are still missing, the values of the covering factors obtained using MIR[@Alonso11; @Lira13; @Ichikawa15] and X-ray torus models are consistent[@Brightman:2015hb], albeit with large uncertainties and for a handful of AGN only.
Variability of the line-of-sight obscuring material {#variability .unnumbered}
===================================================
Studies carried out in the X-rays have found variations of the column densities of the obscuring material for several dozen AGN, confirming the idea that the obscuring material is clumpy rather than homogeneous, and very dynamic. In about ten objects[@Risaliti07; @Guainazzi:2002mz; @Piconcelli:2007bh], these variations were found to be very extreme, with the line-of-sight obscuration going from CT to Compton-thin (and vice versa) on timescales of hours to weeks (see left panel of Figure\[fig:variableNH\]). This is consistent with the absorber originating in the BLR. Due to the strong changes in their spectral shapes, these objects are called [*changing-look AGN*]{}. For the archetype of these objects, NGC1365, it has been found that the obscuring clouds have a cometary shape[@Maiolino:2010fu], with a high-density head and an elongated structure of lower density. Even objects such as Mrk766[@Risaliti:2011jl], which are usually unobscured, have been found to show eclipses produced by highly-obscuring material. For this object, highly-ionized blueshifted iron absorption lines (FeXXV and FeXXVI) were also detected, showing that the absorbing medium is outflowing with velocities ranging from 3,000 to 15,000$\rm\,km\,s^{-1}$. Interestingly, due to the mass loss from the cometary tail, clouds would be expected to be destroyed within a few months[@Maiolino:2010fu]. This suggests that the BLR must be very dynamic, with gas clouds being created and dissipating continuously. The origin of these clouds is still unclear, but it has been suggested that they might be created in the accretion disk[@Elitzur:2009hh].
Evidence for a clumpy obscurer on scales larger than the BLR has also been found. A study carried out using the [*Rossi X-ray Timing Explorer*]{} recently discovered variations of the absorbing material (with $N_{\rm\,H}\sim 10^{22-23}\rm\,cm^{-2}$) on timescales of months to years for several objects[@Markowitz:2014oq]. For seven AGN the inferred distance of the obscuring clouds locates them between the outer side of the BLR and up to ten times the distance of the BLR, suggesting that they are associated with clumps in the torus. Recent [*NuSTAR*]{} observations of NGC1068[@Marinucci:2016eu] found a $\sim$30% flux excess above 20keV in August 2014 with respect to previous observations (right panel of Figure\[fig:variableNH\]). The lack of variability at lower energies led to the conclusion that the transient excess was due to a temporary decrease of the column density, caused by a clump moving out of the line-of-sight, which made it possible to observe the primary X-ray emission for the first time. [In the MIR, results from dust reverberation campaigns using data from the [*Spitzer Space Telescope*]{} are consistent with the presence of clumps located in the inner wall of the torus[@Vazquez15]]{}.
Therefore, we know from absorption variability studies that both the BLR and the torus are not homogeneous structures, but clumpy and dynamic regions which might be generated as part of an outflowing wind.
Gas and dust in the polar region {#gas-and-dust-in-the-polar-region .unnumbered}
================================
In the last decade, MIR interferometry has represented a major step forward in the characterization of nuclear dust in nearby AGN. VLTI/MIDI interferometry of 23 AGN has revealed that a large part of the MIR flux is concentrated on scales between 0.1 and 10 pc[@Tristram09; @Burtscher13]. Moreover, for the majority of the galaxies studied, two model components are needed to explain the observations[@Burtscher13], instead of a single toroidal/disk structure. For some of these sources, one of the two components appears elongated in the polar direction. Detailed studies of four individual sources performed with MIDI[@Honig12; @Honig13; @Tristram14; @Lopez14] have shown further evidence for this nuclear polar component (see Figure \[polar\]). More recently, a search for polar dust in the MIDI sample of 23 AGN has been carried out[@Lopez16], and this feature has been found in one more galaxy. Thus, to date, evidence for a diffuse MIR-emitting polar component has been found in five AGN, including both type-1 and type-2 sources (Circinus, NGC424, NGC1068, NGC3783, and NGC5506). This polar component appears to be brighter in the MIR than the more compact equatorial structure, and [it has been interpreted as an outflowing dusty wind driven by radiation pressure[@Honig12]. Indeed, radiation-driven hydrodynamical models[@Wada12; @Wada16] taking into account both AGN and supernovae feedback can reproduce geometrically thick pc-scale disks and also polar emission, although these features are rather transient in nature even when averaged over time.]{}
It is noteworthy that this polar component was first detected in NGC1068 using high angular resolution MIR observations ($\le$0.5 arcsec) obtained with [single-dish 4–10 m-class telescopes[@Cameron93; @Bock00; @Mason06]. These observations revealed that the point source was only responsible for $\sim$30–40% of the 8–24.5 $\mu$m emission, and the remaining 60–70% was emitted by dust in the ionization cones. Thus, the bulk of the nuclear MIR flux comes from polar dust within the central 70 pc of NGC1068[@Mason06].]{} More recently, a similar result has been reported for 18 active galaxies observed with 8 m-class telescopes in the MIR[@Asmus16]. The resolved emission is elongated in the polar direction (i.e. NLR dust), represents at least 40% of the MIR flux, and scales with the \[O IV\] flux. This is in line with the results from MIR interferometry[@Honig12; @Honig13], which indicate that the bulk of the MIR emission comes from a diffuse polar component, while the NIR flux would be dominated by a compact disk (see right panel of Figure \[polar\]). The difference between single-dish telescope and MIR interferometry studies is the scale of the IR-emitting regions probed. [The existence of non-nuclear reflecting material in obscured AGN has been confirmed in the X-ray regime by [*Chandra*]{} studies of the FeK$\alpha$ line, which showed that part of the emission originates from an extended region.]{} This has been found for some of the nearest heavily obscured AGN, such as NGC1068[@Young:2001jw], for which $\sim$30% of the FeK$\alpha$ emission has been found to originate in material located at $\gtrsim$140 pc[@Bauer:2015si], apparently aligned with the NLR. [Similarly, the 0.3–2keV radiation has also been found to coincide with the NLR in obscured AGN[@Bianchi:2006kq].]{}
If proven to be common features in a significant fraction of AGN, co-existing compact disks/tori and polar dust components [should be incorporated in the models[@Honig17]]{} and could explain the observed NIR and MIR bumps seen in the SEDs of some type-1[@Mor09; @Alonso11; @Ichikawa15] and type-2 AGN[@Lira13]. Moreover, the polar emission has been proposed as an alternative scenario to explain [the weak MIR anisotropy observed in active galaxies[@Honig11],]{} which is responsible for the strong 1:1 X-ray/MIR correlation slopes found for type-1 and type-2 AGN[@Gandhi:2009pd; @Ichikawa:2012uo; @Asmus15]. However, as explained in previous sections, a toroidal clumpy distribution also explains the weak MIR anisotropy[@Levenson09], and more sophisticated clumpy models account for the NIR excess of the nuclear SEDs, [either including a polar component in addition to the torus[@Honig17] or not[@Stalevski12].]{}
Current picture and the future of IR and X-ray studies of nuclear obscuration in AGN {#current-picture-and-the-future-of-ir-and-x-ray-studies-of-nuclear-obscuration-in-agn .unnumbered}
====================================================================================
In the past 10–15 years, studies of AGN in the IR and X-rays have provided important information on the characteristics of the nuclear environment of accreting SMBHs, showing that its nature is extremely complex and dynamic. [The obscuring structure is compact, clumpy, and not isolated, but connected with the host galaxy via gas inflows/outflows.]{} From the IR point of view, it is a transition zone between the dust-free BLR clouds and the NLR and, at least in some galaxies, it consists of two structures: an equatorial disk/torus and a polar component. This polar component would be part of the outflowing dusty wind predicted by radiation-driven hydrodynamical models. In the case of the X-rays, the obscuration is produced by multiple absorbers on various spatial scales, but mostly associated with the torus and the BLR. The covering factor of the obscuring material depends on the luminosity of the system and possibly on redshift, and it is important to take these dependencies into account to explain observations of both high- and low-luminosity AGN. The covering factor should also be considered in our current view of AGN unification, as the classification of an AGN as type-1 or type-2 does not depend on orientation only, but also on the escape probability of AGN-produced photons.
In the next decade the new generation of IR and X-ray facilities will [contribute greatly to our]{} understanding of the structure and physical properties of the nuclear material, and shed light on relevant open questions such as: What is the relationship between the physical parameters of the accreting system and the circumnuclear material? Are the torus and BLR produced by outflows from the accretion disk? Is the polar dust ubiquitous, and how much does it contribute to the IR emission?
[In the X-ray regime, [*NuSTAR*]{}, [*XMM-Newton*]{}, [*Chandra*]{}, [*Swift*]{} and [*INTEGRAL*]{} will continue carrying out broad-band X-ray observations of AGN, providing tighter constraints on the most obscured accretion events and on the characteristics of the circumnuclear material through studies of the reprocessed X-ray radiation.]{} The recently launched X-ray satellite [*ASTROSAT*]{}[@Singh:2014pd], thanks to its large effective area and broad-band X-ray coverage, will be ideal for studying absorption variability, and will improve our understanding of the properties of the BLR clouds. [*eROSITA*]{} ([Merloni et al. 2012](http://adsabs.harvard.edu/abs/2012arXiv1209.3114M)), on board the [*Spectrum-Roentgen-Gamma*]{} satellite, will carry out a deep survey of the entire X-ray sky in the 0.5–10keV range, and is expected to detect tens of thousands of obscured AGN. [This will certainly improve]{} our understanding of the relation between obscuration and the accretion and host-galaxy properties.
On longer timescales, [*Athena*]{} ([Nandra et al. 2013](http://adsabs.harvard.edu/abs/2013arXiv1306.2307N)), and before that the successor of [*Hitomi*]{}, will enable studies of reflection features in AGN with an exquisite level of detail, exploiting the few-eV energy resolution of micro-calorimeters. High-resolution spectroscopy of AGN [will make it possible to disentangle]{} the different components (arising in the BLR, torus, and NLR) of the FeK$\alpha$ line, and to set tighter constraints on the properties of the circumnuclear material using the Compton shoulder[@Odaka:2016fv]. NASA recently selected the [*Imaging X-ray Polarimetry Explorer*]{}[@Weisskopf:2016qd] (IXPE) mission for launch in the next decade. X-ray polarimetry will open a new window on the study of the close environment of AGN, since the reprocessed X-ray radiation is bound to be polarised. To date, IR interferometry has provided constraints on the size and distribution of nuclear dust for about 40 AGN. Now, a second generation of interferometers for the VLTI is coming online. In the NIR, GRAVITY[@Eisenhauer11] will be able to observe $\sim$20 nearby AGN with unprecedented sensitivity and high spectral resolution, allowing reliable SMBH mass estimates and constraints on the geometry of the BLR. In the MIR, MATISSE[@Lopez14b] will combine the beams of up to 4 VLTI telescopes [to produce images that will serve to analyze]{} the dust emission at 300–1500 K in the central 0.1–5 pc of the closest AGN. In the NIR and MIR, the [*James Webb Space Telescope[@Gardner2006]*]{} ([*JWST*]{}) will represent a revolution in terms of sensitivity and wavelength coverage. Faint low-luminosity and high-redshift AGN will be accessible at subarcsecond resolution from 0.6 to 28 $\mu$m for the first time. Finally, in the sub-mm regime, ALMA will continue providing the first images of the nuclear obscurer in nearby AGN.
In Cycle 4 and later, [ALMA will fully resolve]{} the gas kinematics from galaxy scales to the area of influence of the SMBH in nearby AGN. This will serve to characterize the inflowing/outflowing material in the nucleus and its connection with the host galaxy, leading to a better understanding of the feeding/feedback mechanisms in AGN.
To fully exploit the wealth of data that the facilities described above will deliver in the next decade, the community will need to develop physical AGN spectral models that self-consistently reproduce the reprocessed X-ray radiation and the MIR emission, ideally including polarization as well.
Acknowledgements {#acknowledgements .unnumbered}
================
The authors acknowledge Almudena Alonso-Herrero, Poshak Gandhi, Nancy A. Levenson, Marko Stalevski and the referees for useful comments that helped to improve this Review. CRA acknowledges the Ramón y Cajal Program of the Spanish Ministry of Economy and Competitiveness through project RYC-2014-15779 and the Spanish Plan Nacional de Astronomía y Astrofísica under grant AYA2016-76682-C3-2-P. CR acknowledges financial support from the China-CONICYT fellowship program, FONDECYT 1141218 and Basal-CATA PFB–06/2007. This work is sponsored by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile.\
Correspondence should be addressed to the two authors.
Author contributions {#author-contributions .unnumbered}
====================
The two authors contributed equally to this work. Both conceived the Review and provided or adapted the figures that appear in it. CR and CRA wrote the X-ray and IR parts of the text, respectively, and worked together to merge them.
Gardner *et al.* The James Webb Space Telescope (2006).
[^1]: \*The authors’ order is purely alphabetical since they both have contributed equally to the Review. e-mail: cra@iac.es and cricci@astro.puc.cl
---
author:
- |
Christian Schulze\
Institute for Theoretical Physics, Cologne University\
D-50923 Köln, Euroland
title: 'Long-range interactions in Sznajd consensus model'
---
e-mail: ab127@uni-koeln.de
Abstract: The traditional Sznajd model for opinion spreading, as well as its Ochrombel simplification, is modified to give a convincing strength proportional to a negative power of the spatial distance. We find the usual phase transition in the full Sznajd model, but not in the Ochrombel simplification. We also mix the two rules, which favours a phase transition.
Keywords: Sociophysics, phase transition, distance dependence, quenched disorder
PACS: 05.50 +q, 89.65 -s
Ising models have been studied for nearly a century, and simulated on computers for more than four decades. A new variant is the Sznajd model [@sznajd], where again each lattice site carries a spin $\pm 1$. If two randomly selected neighbouring spins have the same value, they force their neighbours to accept this value; otherwise nothing is changed and a new pair is selected. In the Ochrombel simplification, a single site already “convinces” its neighbours, instead of a pair [@ochrombel]. This model can be interpreted as the spreading of opinions until a consensus is reached. Instead of two values $\pm 1$, we can also work with $q$ values: $1, 2, \ldots, q$.
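As a concrete sketch of the two update rules (our own illustration for the two-state nearest-neighbour case on a periodic square lattice, not the published program), one elementary step of each rule might read:

```python
import numpy as np

def sznajd_step(spins, rng):
    """Sznajd pair rule: a randomly chosen horizontal pair of
    neighbours, if aligned, imposes its opinion on the six sites
    surrounding the pair (periodic boundaries)."""
    L = spins.shape[0]
    i, j = rng.integers(L, size=2)
    jr = (j + 1) % L                        # right-hand partner of (i, j)
    if spins[i, j] == spins[i, jr]:         # only an aligned pair convinces
        s = spins[i, j]
        for di, dj in [(-1, 0), (1, 0), (0, -1)]:   # around left site
            spins[(i + di) % L, (j + dj) % L] = s
        for di, dj in [(-1, 0), (1, 0), (0, 1)]:    # around right site
            spins[(i + di) % L, (jr + dj) % L] = s
    return spins

def ochrombel_step(spins, rng):
    """Ochrombel simplification: a single random site imposes its
    opinion on its four nearest neighbours, unconditionally."""
    L = spins.shape[0]
    i, j = rng.integers(L, size=2)
    s = spins[i, j]
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        spins[(i + di) % L, (j + dj) % L] = s
    return spins
```

Note that full consensus is absorbing under both rules: once every site holds the same opinion, neither rule can change anything, which is why the dynamics stops.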
The Sznajd model on the square lattice shows a phase transition: if initially one of the two opinions has a slight majority in a random distribution, then at the end all spins have that value and the dynamics stops. The Ochrombel modification loses this transition [@schulze]. We now check for this transition in the case of long-range interactions, decaying with a power law of the distance, and with a mixture of the Sznajd and Ochrombel rules. The program is similar to the published one [@stauffer].
We made 100 or 1000 simulations for $L \times L$ square lattices with $L=7, 13, 26$ and 53, sometimes also 73, usually allowing $q=5$ values. A spin convinces, alone (Ochrombel) or together with an equally-minded neighbour (Sznajd), a neighbour at Euclidean distance $R$ with probability $1/R^{2x}$. Initially the spins are distributed randomly among the $q$ opinions except that with a bias probability $p$ the just initialized spins are set to +1. A quenched fraction $r$ of the sites follows the Sznajd pair rule, the remaining fraction $1-r$ the Ochrombel single-site rule. A success is a sample where at the end all spins had the bias value +1.
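A minimal Monte Carlo sketch of one sample of this mixed long-range model might look as follows. The parameter names $L$, $q$, $p$, $r$ and $x$ follow the text, but the bookkeeping (one random target per attempt, a fixed sweep budget, an early exit at consensus) is our own simplification, not the published program:

```python
import numpy as np

def run_sample(L=13, q=5, p=0.1, r=0.5, x=0.5, sweeps=200, seed=0):
    """One sample of the mixed long-range Sznajd/Ochrombel model on an
    L x L periodic lattice. Returns True if the sample ends with every
    site holding the biased opinion +1 (a 'success')."""
    rng = np.random.default_rng(seed)
    # random opinions 1..q; then, with bias probability p, reset a site to 1
    spins = rng.integers(1, q + 1, size=(L, L))
    spins[rng.random((L, L)) < p] = 1
    # quenched disorder: a fraction r of sites follows the Sznajd pair rule,
    # the remaining fraction 1-r the Ochrombel single-site rule
    pair_rule = rng.random((L, L)) < r
    for _ in range(sweeps * L * L):
        if (spins == spins[0, 0]).all():
            break                           # consensus reached: dynamics stops
        i, j = rng.integers(L, size=2)
        if pair_rule[i, j]:
            jr = (j + 1) % L
            if spins[i, j] != spins[i, jr]:
                continue                    # unaligned pair convinces nobody
        # convince a random target at Euclidean distance R with prob 1/R^(2x)
        k, l = rng.integers(L, size=2)
        if (k, l) == (i, j):
            continue
        dx = min(abs(i - k), L - abs(i - k))   # periodic distance components
        dy = min(abs(j - l), L - abs(j - l))
        R = np.hypot(dx, dy)                   # R >= 1, so the probability <= 1
        if rng.random() < 1.0 / R ** (2 * x):
            spins[k, l] = spins[i, j]
    return bool((spins == 1).all())
```

Counting the fraction of successes over 100 or 1000 such samples, as a function of the bias $p$, is then what Figs. 1–7 plot.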
Figs. 1 and 2 show the phase transition for the Sznajd case $r=1$: for large $L$, a small bias $p$ suffices to make nearly all samples successes. It matters little whether the interactions decay slowly ($x=1/2$) or rapidly ($x=2$) with distance. For the Ochrombel case $r=0$, however, analogous simulations (not shown) give no phase transition, and this situation persists even for a very small $x=0.1$ (Fig. 3) and with $q$ reduced from 5 to 2 (Fig. 4).
Thus we mixed the two rules in Figs. 5 ($r=0.5$) and 6 ($r=0.1$), which show a phase transition for $x=1/2$ in both cases. With a faster decay, $x=2$ instead of $x=1/2$, the phase transition for $r=0.1$ becomes more pronounced (Fig. 7).
In summary, contrary to our expectation from thermal phase transitions, the introduction of long-range interactions instead of nearest-neighbour interactions did not create a phase transition in the Ochrombel case.
We thank Deutsche Forschungsgemeinschaft for support, and D. Stauffer, to whom this note is dedicated because of his senility, for help.
[99]{}
K. Sznajd-Weron and J. Sznajd, Int. J. Mod. Phys. C 11, 1157 (2000)

D. Stauffer, Journal of Artificial Societies and Social Simulation 5, No. 1, paper 4 (2000)

R. Ochrombel, Int. J. Mod. Phys. C 12, 1091 (2001)

C. Schulze, Int. J. Mod. Phys. C 14, No. 1 (2003)